Discovery CPU with VMware
Posted by Thang Le Toan on 03 March 2020 05:21 AM
Multi-threaded CPU seconds for Discovery and Single-threaded CPU seconds for Discovery can be used to roughly estimate the discovery time. Discovery is constrained by the Central Processing Unit (CPU) in both its multi-threaded and single-threaded components. The single-threaded components do not overlap each other during discovery. Discovery time can therefore be roughly estimated using the following formula:
Total single-threaded CPU + (Total multi-threaded CPU / Number of CPUs)
Then, adjust for relative CPU speed using the Standard Performance Evaluation Corporation (SPEC) rating for the proposed hardware. Appendix A, “Defining a CPU,” provides details on how to adjust the specifications depending upon your hardware.
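The two-step calculation above (raw estimate, then SPEC adjustment) can be sketched as follows. All the input values here are hypothetical placeholders; take the single- and multi-threaded CPU seconds from the Discovery tables and the SPEC ratings from your hardware's published benchmark results.

```python
def estimate_discovery_seconds(single_cpu_s, multi_cpu_s, num_cpus,
                               spec_reference=1.0, spec_proposed=1.0):
    """Rough discovery-time estimate.

    Single-threaded work cannot be parallelized, so it is counted in
    full; multi-threaded work divides across the available CPUs. The
    SPEC ratio then scales the result for relative CPU speed between
    the reference hardware and the proposed hardware.
    """
    raw = single_cpu_s + multi_cpu_s / num_cpus
    return raw * (spec_reference / spec_proposed)

# Hypothetical example: 600 s of single-threaded work, 2400 s of
# multi-threaded work on 4 CPUs, with proposed hardware rated 1.5x
# the reference SPEC benchmark.
print(estimate_discovery_seconds(600, 2400, 4, 1.0, 1.5))  # 800.0
```

Faster proposed hardware (a higher SPEC rating) shrinks the estimate; more CPUs shrink only the multi-threaded portion, which is why the single-threaded tasks ultimately bound the discovery time.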
Discovery proceeds faster with more and faster CPUs, but additional processing power is not an absolute requirement. There is a limit to the amount of CPU processing power that can be profitably applied to discovery. In the lab environment (with very low network latency), eight discovery threads provided the optimal discovery time. Adding threads beyond the optimal number increased CPU consumption but did not necessarily improve the discovery time.
More than four CPUs provide little additional benefit, although this varies substantially by platform. Sometimes even more than two CPUs for discovery is of little benefit, because the amount of parallelism that can be achieved varies, making discovery times difficult to predict. The data in Multi-threaded CPU seconds for Discovery and Single-threaded CPU seconds for Discovery came from servers running 10 discovery threads. This data reflects the contention from polling and correlation, which normally occurs in discoveries subsequent to the first one. As explained in “Discovery threads” on page 50, the CPU required for additional threads and processors may vary depending on your platform.
CPU estimates for single-threaded tasks
The single-threaded tasks from Post Processing (which includes Reconfigure) through Topology Sync reflect values observed in a laboratory test environment using the hardware listed in Appendix B, “Hardware Specifications.” Use this data as a comparison tool when deploying your own system to estimate the expected processing time for each task.
The test scenario had VMware Smart Assurance Service Assurance Manager (SAM) running on the same machine. Topology synchronization may take longer if there is significant latency between the SAM and IP servers. Appendix B, “Hardware Specifications,” provides specifications of the servers measured.
Determine memory requirements for network objects
To determine the memory requirements for your network objects, use Memory requirements by IP Availability Manager component with the number of ports and interfaces that you either obtained or estimated.
Memory requirements by IP Availability Manager component presents the memory requirements per network object for the IP Availability Manager.
A separate table presents the memory requirements per network object for the combined deployment of the IP Availability Manager and IP Performance Manager (AM-PM).
The values in the tables were obtained by observing memory requirements of customer topologies and applying linear regression to the results. The values represent VMware’s best compromise between accuracy and simplicity.
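Because the tables come from a linear regression, the underlying model is simply a base cost plus a per-object cost for each object class. The sketch below illustrates that calculation; the coefficients shown are placeholders, not the values from the VMware tables, so substitute the published per-object figures for your deployment type.

```python
def estimate_memory_kb(counts, per_object_kb, base_kb=0):
    """Linear memory model: base cost plus per-object costs.

    counts        -- map of object class -> number of objects
    per_object_kb -- map of object class -> KB required per object
                     (taken from the published memory tables)
    """
    return base_kb + sum(counts[k] * per_object_kb[k] for k in counts)

# Hypothetical coefficients and topology counts for illustration only.
per_object_kb = {"port": 2.0, "interface": 3.0}
counts = {"port": 10_000, "interface": 5_000}
print(estimate_memory_kb(counts, per_object_kb, base_kb=500_000))  # 535000.0
```

A linear model of this form trades some accuracy for simplicity, which matches the stated intent of the published values.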
Memory is based on the UNIX ps RSS working-set size. These values measure the amount of physical memory consumed rather than the amount of address space consumed. The memory observations were obtained using the following commands on the various platforms:
UNIX: "ps -opid,ppid,rss,comm,args [PID]"
RSS is reported in kilobytes.
PerfMon (on Windows) reports memory in bytes.
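As a sketch of how such a measurement could be collected programmatically, the snippet below invokes ps to read a process's RSS and normalizes it to bytes (since ps reports RSS in kilobytes). This is an illustration of the measurement method, not a tool shipped with the product; it assumes a UNIX-like host with ps available.

```python
import os
import subprocess

def rss_bytes(pid):
    """Return the RSS working-set size of a process, in bytes.

    Uses "ps -o rss= -p PID", which prints the RSS in kilobytes
    with no header; multiply by 1024 to normalize to bytes.
    """
    out = subprocess.run(
        ["ps", "-o", "rss=", "-p", str(pid)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip()) * 1024

# Measure this script's own process as a quick demonstration.
print(rss_bytes(os.getpid()))
```

Normalizing both UNIX and Windows observations to the same unit is what allows the table values to be compared across platforms.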
The per-managed-port measures vary in the regression results because managed ports tended to be a relatively small percentage of the topologies.
The amount of discovery network traffic varies depending upon the types of devices being discovered. The estimates in Discovery traffic in bytes reflect a regression around ports (managed + unmanaged) and interfaces from four topologies. The values should be regarded as estimates.
Accuracy (predicted/actual): 99%, 129%, 94%, 103%, 98%
The percentages reflect the accuracy of the predictor values presented in Discovery traffic in bytes against the five sample topologies, compared to the actual values observed. The bandwidth depends on the speed at which discovery progresses, which largely depends on the mix of interfaces and ports.
The expected bandwidth is:
(Total bytes from Discovery traffic in bytes) * 8 / estimated discovery time (calculated as described earlier in this article), in bits per second
Here, 8 refers to the number of bits in a byte.
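The bandwidth formula above can be sketched directly; the byte count and discovery time used in the example are hypothetical.

```python
def discovery_bandwidth_bps(total_bytes, discovery_seconds):
    """Expected discovery bandwidth in bits per second.

    total_bytes       -- total discovery traffic from the
                         Discovery traffic in bytes estimate
    discovery_seconds -- estimated discovery time in seconds
    """
    return total_bytes * 8 / discovery_seconds  # 8 bits per byte

# Hypothetical example: 150 MB of discovery traffic over a
# 1200-second discovery window.
print(discovery_bandwidth_bps(150_000_000, 1200))  # 1000000.0 bits/s
```

This is an average over the whole discovery window; as noted above, the instantaneous rate depends on how discovery progresses through the mix of interfaces and ports.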