I created and prepositioned a share of approximately 725 MB. The preposition status reports that the directive took 15 minutes to complete. We then connected to a PC at the remote location and mapped to the share here at corporate. Copying the files from this share to the desktop took about 2 minutes. Copying the same files from a second, non-prepositioned share took about 2-3 minutes. I would have expected the copy from the non-prepositioned share to take longer. I interpret these results to mean that the DRE and CIFS caches were both populated by the prepositioning. Is this a valid test for comparing prepositioning vs. non-prepositioning? Tonight I am going to update the prepositioning with modified copies of the original files and see how much its run time is reduced. We will then see what the copy results are tomorrow.
We are trying to establish a baseline and extrapolate transfer times before we attempt an 80 GB prepositioning. Is there a different approach we should be taking for this test? Do these results seem about normal?
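Back-of-the-envelope: 80 GB / 725 MB is roughly 113x the data, so if the initial directive scaled linearly (a rough assumption, since directive overhead and DRE gains won't scale linearly) the first 80 GB pass would take on the order of 113 x 15 minutes, roughly 28 hours.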
Start with the basics... Check the speed and duplex on your interfaces and ensure that you don't have any errors or drops. Also, is the file server using SMB signing? If it is, CIFS copies won't use the CIFS AO. You can watch the connection as you are copying and tell whether you are using the AO via the CLI or the CM GUI. If the connection shows only basic optimization, it's not using the CIFS AO, and thus not using your prepositioning.
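On the WAE CLI, checks along these lines should cover it (command names from the WAAS releases I've used; verify them against your version's command reference):

    show interface GigabitEthernet 1/0    (speed/duplex, errors, drops)
    show statistics connection            (the Accel column should show the CIFS AO on your copy, not just TFO/DRE/LZ)
    show statistics accelerator cifs      (confirms the CIFS AO is actually handling traffic)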
I would also verify that the prepositioning actually inserted content into the cache (check that the disk/cache usage increased). Also, validate that the files copied from the server the 2nd and 3rd time are either faster or take the same time.
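One way to eyeball this, assuming your version exposes the same counters mine does, is to compare the DRE cache figures before and after the directive runs:

    show statistics dre    (compare the cache size/used values before and after prepositioning)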
DRE would definitely have been populated by the first prepositioning and/or copy, so the 2nd server copy would have used it for the same files; the CIFS cache, however, would not have been used. Try clearing the DRE cache and copying from the 2nd server again; that copy should take MUCH longer.
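If I remember the CLI correctly (confirm in your command reference first, since this empties the cache for all traffic, not just your test share):

    clear cache dre

Do it in a maintenance window; every connection loses its DRE history.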
The VMware Trunk Port Group is supported starting with ACI version 2.1
VMM integration must be configured properly
The ASA device package must be uploaded to the APIC
The ASAv version must be compatible with the ACI and device package versions
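As a quick check on the device-package prerequisite, the uploaded packages can be listed over the APIC REST API. A minimal sketch, assuming a hypothetical APIC address and credentials, and assuming the packages appear as vnsMDev objects (verify the class name against your APIC's object model):

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

    s = requests.Session()
    s.verify = False  # lab only; use real certificates in production

    # Standard APIC REST login endpoint
    s.post(APIC + "/api/aaaLogin.json", json=AUTH).raise_for_status()

    # List uploaded L4-L7 device packages (assumed to be vnsMDev objects)
    resp = s.get(APIC + "/api/class/vnsMDev.json")
    for mo in resp.json()["imdata"]:
        attrs = mo["vnsMDev"]["attributes"]
        print(attrs.get("vendor"), attrs.get("model"), attrs.get("version"))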
In the previous articles of this ACI Automation series, we used Postman/Newman as the REST API tool to automate the ACI configuration.
In this article I’m going to discuss usin...
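For reference, the sort of call those Postman/Newman collections issue is small enough to reproduce directly. A minimal sketch, assuming a hypothetical APIC address and credentials, that logs in and posts a tenant object:

    import requests

    APIC = "https://apic.example.com"   # hypothetical APIC address
    AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

    s = requests.Session()
    s.verify = False  # lab only
    s.post(APIC + "/api/aaaLogin.json", json=AUTH).raise_for_status()

    # Create (or update) a tenant under the policy universe "uni"
    tenant = {"fvTenant": {"attributes": {"name": "Demo-Tenant"}}}
    s.post(APIC + "/api/mo/uni.json", json=tenant).raise_for_status()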
One of the first steps in building your ACI Fabric is to go through Fabric Discovery. While Fabric Discovery is usually a straightforward process, there are various issues that may prevent you from discovering an ACI switch. This article wil...
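A couple of first-look checks when a switch won't register (commands as I recall them; confirm against your software version):

    acidiag fnvread        (on the APIC: lists the fabric nodes the APIC knows about and their registration state)
    show lldp neighbors    (on the switch: confirms it sees the leaf or APIC it should be discovered through)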