The following counters all have to do with the management of memory issues. In addition, there are counters that assist in determining whether the problem you are having is really a memory issue.
- Memory : Page Faults/sec. This counter gives a general idea of how many times requested information is not where the application (and VMM) expects it to be. The information must be retrieved either from another location in memory or from the pagefile. Recall that while a sustained value may indicate trouble here, you should be more concerned with hard page faults, which represent actual reads from or writes to the disk. Remember that disk access is much slower than RAM.
- Memory : Pages Input/sec. Use this counter in comparison with the Page Faults/sec counter to determine the percentage of page faults that are hard page faults. Thus, Pages Input/sec / Page Faults/sec = % Hard Page Faults. Sustained values surpassing 40% are generally indicative of a memory shortage of some kind. While you might know at this point that there is a memory shortage of some kind on the system, this is not necessarily an indication that the system needs an immediate memory upgrade.
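The ratio above can be sketched in Python. The sample values here are hypothetical, and the 40% rule of thumb is the one given above:

```python
def hard_fault_percentage(pages_input_per_sec, page_faults_per_sec):
    """Estimate the share of page faults that are hard faults
    (i.e., that required a read from disk)."""
    if page_faults_per_sec == 0:
        return 0.0
    return 100.0 * pages_input_per_sec / page_faults_per_sec

# Hypothetical values sampled from Performance Monitor:
pct = hard_fault_percentage(pages_input_per_sec=90, page_faults_per_sec=200)
print(f"{pct:.0f}% hard page faults")  # 45% hard page faults
if pct > 40:
    print("Sustained values above 40% suggest a memory shortage.")
```

Remember that a single sample proves nothing; the ratio only matters when it stays high over a sustained interval.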
- Memory : Pages Output/sec. As demand for memory rises, you can expect to see the amount of information being removed from memory increase. This may even begin to occur before hard page faults become a problem. As memory begins to run short, the system will first attempt to trim applications down to their minimum working sets, which means moving more information out to the pagefile and disk. Thus, if your system is on the verge of being truly strained for memory, you may begin to see this value climb. Often the first pages to be removed from memory are data pages, since code pages see more repetitive reuse.
- Memory : Pages/sec. This value is often confused with Page Faults/sec. The Pages/sec counter is the combination of the Pages Input/sec and Pages Output/sec counters. Recall that Page Faults/sec is a combination of hard page faults and soft page faults. Pages/sec, by contrast, is a general indicator of how often the system is using the hard drive to store or retrieve memory-associated data.
- Memory : Page Reads/sec. This counter is probably the best indicator of a memory shortage because it indicates how often the system is reading from disk because of hard page faults. The system always uses the pagefile, even when there is enough RAM to support all of the applications, so some number of page reads will always be encountered. However, a sustained value over 5 Page Reads/sec is often a strong indicator of a memory shortage. Be careful to understand what this counter is telling you: it indicates the number of reads from the disk that were performed to satisfy page faults, and the number of pages read on each trip to the disk may vary, depending on the application and the proximity of the data on the hard drive. Regardless of these facts, a sustained value over 5 is still a strong indicator of a memory problem. Remember the importance of "sustained." System operations often fluctuate, sometimes widely, so a Page Reads/sec of 24 for a couple of seconds does not mean you have a memory shortage.
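The emphasis on "sustained" can be captured with a simple check over a window of samples. The 5 reads/sec threshold is the rule of thumb from the text; the ten-second window is an assumption for illustration:

```python
def sustained_above(samples, threshold=5, window=10):
    """Return True if `samples` stays above `threshold` for at least
    `window` consecutive readings (e.g., one reading per second)."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= window:
            return True
    return False

# A brief spike to 24 Page Reads/sec is not a shortage...
spike = [1, 2, 24, 24, 1, 0, 2, 1, 1, 0, 1, 2]
# ...but a long run above the threshold is worth investigating.
steady = [7, 8, 9, 6, 7, 8, 10, 9, 7, 6, 8, 7]
print(sustained_above(spike))   # False
print(sustained_above(steady))  # True
```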
- Memory : Page Writes/sec. Much like Page Reads/sec, this counter indicates how many times the disk was written to in an effort to clear unused items out of memory. Again, the number of pages per write may vary. Increasing values in this counter often indicate a building tension in the battle for memory resources.
- Memory : Available Memory. This counter indicates the amount of memory that is left after nonpaged pool allocations, paged pool allocations, processes' working sets, and the file system cache have all taken their piece. In general, NT attempts to keep this value around 4 MB. Should it drop below this for a sustained period, on the order of minutes at a time, there may be a memory shortage. Of course, always keep an eye out for those times when you are simply performing memory-intensive tasks or large file transfers.
- Memory : Nonpageable memory pool bytes. This counter provides an indication of how NT has divided up the physical memory resource. An uncontrolled increase in this value would be indicative of a memory leak in a Kernel level service or driver.
- Memory : Pageable memory pool bytes. An uncontrolled increase in this counter, with a corresponding decrease in available memory, would be indicative of a process taking more memory than it should and not giving it back.
- Memory : Committed Bytes. This counter indicates the total amount of memory that has been committed for the exclusive use of any of the services or processes on Windows NT. Should this value approach the committed limit, you will be facing a memory shortage of unknown cause, but of certain severe consequence.
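The warning about Committed Bytes approaching the commit limit can be expressed as a simple ratio check. The values and the 90% alert level below are illustrative assumptions, not figures from the text:

```python
def commit_usage(committed_bytes, commit_limit_bytes):
    """Fraction of the commit limit currently in use."""
    return committed_bytes / commit_limit_bytes

# Hypothetical values: 120 MB committed against a 128 MB commit limit.
usage = commit_usage(120 * 2**20, 128 * 2**20)
print(f"Commit charge at {usage:.0%} of limit")  # Commit charge at 94% of limit
if usage > 0.9:
    print("Approaching the commit limit: find out which process is responsible.")
```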
- Process : Page Faults/sec. This is an indication of the number of page faults that occurred due to requests from this particular process. Excessive page faults from a particular process are usually an indication of bad coding practices: either the functions and DLLs are not organized correctly, or the data set that the application is using is being called in a less than efficient manner.
- Process : Pool Paged Bytes. This is the amount of memory that the process is using in the pageable memory region. This information can be paged out from physical RAM to the pagefile on the hard drive.
- Process : Pool NonPaged Bytes. This is the amount of memory the process is using that cannot be moved out to the pagefile and thus will remain in physical RAM. Most processes make little use of this; however, some real-time applications may find it necessary to keep some DLLs and functions readily available in order to function in real-time mode.
- Process : Working Set. This is the current size of the memory area that the process is using for code, threads, and data. The size of the working set grows and shrinks as the VMM permits: when memory is becoming scarce, application working sets are trimmed; when memory is plentiful, they are allowed to grow. Larger working sets mean more code and data in memory, which improves overall application performance. However, a large working set that does not shrink appropriately is usually an indication of a memory leak.
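The "grows but never shrinks" symptom can be turned into a crude heuristic over working-set samples. The growth factor and the sample values below are hypothetical, chosen only to illustrate the pattern described above:

```python
def steadily_growing(samples, growth_factor=1.5):
    """Crude leak heuristic: True if the working set never shrinks
    across the samples and ends meaningfully larger than it started."""
    nondecreasing = all(b >= a for a, b in zip(samples, samples[1:]))
    return nondecreasing and samples[-1] > samples[0] * growth_factor

# Working-set sizes in MB sampled over time (hypothetical values):
healthy = [20, 24, 22, 25, 21, 23, 22]   # grows and shrinks with demand
leaky   = [20, 24, 27, 31, 36, 40, 45]   # only ever grows
print(steadily_growing(healthy))  # False
print(steadily_growing(leaky))    # True
```

A real diagnosis would still require watching the process over a long period and under varying load, since a working set is allowed to stay large while memory is plentiful.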