Top Six FAQs on Windows 2000 Disk Performance

3. How was the problem with the % Disk Time counter fixed in Windows 2000?



It is not exactly fixed, but the problem is addressed quite nicely in Windows 2000 (although it would arguably have been better had the older, now obsolete % Disk Time counters not been retained).

Windows 2000 adds a new counter to the Logical and Physical Disk objects called % Idle Time. Disk idle time accumulates in diskperf when there are no outstanding requests for a volume.

Having a measure of disk idle time permits you to calculate % Disk Busy = 100 - % Idle Time, which is a valid measure of disk utilization.

Then you can calculate Disk Service Time = % Disk Busy ÷ Disk Transfers/sec. This is an application of the Utilization Law, namely:

u = service time * arrival rate

Finally, calculate Disk Queue Time = Avg. Disk sec/Transfer - Disk Service Time, which follows from the definition of response time as service time plus queue time.

So, measuring Logical and Physical Disk % Idle Time solves a lot of problems. It allows us to calculate disk utilization and derive both disk-service time and queue-time measurements for disks in Windows 2000.
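
To make the arithmetic concrete, here is a minimal Python sketch of the calculation just described. The counter values are purely hypothetical samples, not measurements from any real system:

    # Hypothetical samples for one Physical Disk instance
    idle_time_pct = 62.0            # % Idle Time
    transfers_per_sec = 190.0       # Disk Transfers/sec
    avg_sec_per_transfer = 0.0068   # Avg. Disk sec/Transfer (measured response time)

    # % Disk Busy = 100 - % Idle Time (disk utilization)
    disk_busy_pct = 100.0 - idle_time_pct

    # Utilization Law: u = service time * arrival rate, so
    # service time = u / arrival rate (with u expressed as a fraction)
    service_time = (disk_busy_pct / 100.0) / transfers_per_sec

    # Response time = service time + queue time, so
    # queue time = Avg. Disk sec/Transfer - service time
    queue_time = avg_sec_per_transfer - service_time

    print(f"% Disk Busy:       {disk_busy_pct:.1f}%")
    print(f"Disk service time: {service_time * 1000:.2f} ms")
    print(f"Disk queue time:   {queue_time * 1000:.2f} ms")

With these sample values the disk is 38 percent busy, service time works out to 2.0 milliseconds, and the remaining 4.8 milliseconds of the measured response time is queue time.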

4. Why are the Logical Disk counters zero?

Almost certainly because you never issued the diskperf -yv command to enable the Logical Disk measurements. When diskperf is not active, the corresponding counters in the System Monitor read zero. In Windows 2000, only the Physical Disk counters are enabled by default (this is equivalent to issuing the diskperf -yd command).

In Windows NT, neither the Logical nor the Physical Disk counters are enabled by default. To enable both sets of Disk counters, issue the diskperf -y command in NT 4.0. In both Windows 2000 and NT 4.0 you must reboot the system to activate the new diskperf settings.
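
If you administer many machines, you may prefer to script the change rather than type it at a prompt. The snippet below is only a sketch of one way to do that from Python; it assumes the script runs locally on the Windows machine with administrative rights and simply invokes the same diskperf command discussed above:

    import subprocess

    # Enable the Logical Disk counters on Windows 2000 (the same effect as
    # typing "diskperf -yv" at a command prompt). A reboot is still required
    # before the new counters begin accumulating.
    subprocess.run(["diskperf", "-yv"], check=True)

    print("diskperf setting changed; reboot to activate the Logical Disk counters.")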

5. In Windows NT 4.0, when is it appropriate to issue the diskperf -ye command?

Almost never. I recommend the diskperf -ye option only if you are using the software RAID functions in the Disk Administrator (these include creating extendable volume sets and establishing disk striping, disk mirroring, and RAID 5 Logical volumes). Setting diskperf -ye allows you to collect accurate Physical Disk statistics when you are using software RAID functions in NT 4.

The diskperf -ye command loads the diskperf.sys filter driver beneath the optional fault-tolerant ftdisk.sys disk driver that provides the software RAID functions in Windows NT 4.0. When striped, mirrored, or RAID 5 Logical Disks are defined using Disk Administrator functions, the ftdisk.sys module responsible for remapping Logical Disk I/O requests to the appropriate Physical Disk is loaded in the I/O driver stack below the NTFS file system driver and above the SCSI Physical Disk driver. When the normal diskperf -y command is issued, diskperf.sys is loaded above ftdisk.sys. This allows diskperf to capture information about Logical Disk requests accurately, but because Logical Disk requests are transformed by the ftdisk.sys layer immediately below it, the Physical Disk statistics reported are inaccurate. To see accurate Physical Disk statistics, issue the diskperf -ye command to load diskperf.sys below ftdisk.sys.

Creating extendable volume sets is by far the most common use of the software RAID functions in the NT 4.0 Disk Administrator. You may prefer loading diskperf above ftdisk.sys (using the normal diskperf -y command) to obtain accurate Logical Disk statistics for a volume set.

This problem is addressed in Windows 2000 by allowing diskperf to be loaded twice: once above ftdisk.sys to collect Logical Disk statistics and once below it to collect Physical Disk statistics. In Windows 2000, diskperf is loaded below ftdisk.sys by default. To load it a second time, issue the diskperf -yv command to activate the Logical Disk measurements.
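
If it helps to see the layering spelled out, here is an illustration-only Python sketch of the three driver orderings just described. The lists simply restate the stacking order from the text, top of the stack first; "SCSI disk driver" stands in for whatever port driver your hardware actually uses:

    # NT 4.0 with "diskperf -y": diskperf sits above ftdisk, so Logical Disk
    # statistics are accurate but Physical Disk statistics are not.
    nt4_diskperf_y = ["NTFS", "diskperf.sys", "ftdisk.sys", "SCSI disk driver"]

    # NT 4.0 with "diskperf -ye": diskperf sits below ftdisk, so Physical Disk
    # statistics are accurate for software RAID volumes.
    nt4_diskperf_ye = ["NTFS", "ftdisk.sys", "diskperf.sys", "SCSI disk driver"]

    # Windows 2000: the measurement layer can be present twice, above ftdisk
    # for Logical Disk statistics (enabled with "diskperf -yv") and below it
    # for Physical Disk statistics (enabled by default).
    win2000 = ["NTFS", "diskperf (Logical)", "ftdisk.sys",
               "diskperf (Physical)", "SCSI disk driver"]

    for name, stack in [("NT 4.0, diskperf -y", nt4_diskperf_y),
                        ("NT 4.0, diskperf -ye", nt4_diskperf_ye),
                        ("Windows 2000", win2000)]:
        print(name)
        for layer in stack:
            print("    " + layer)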

6. I am concerned about the overhead of the diskperf measurements. What does this feature cost?

Not much. I strongly recommend that you enable all disk performance data collection on any system where you care about performance.

Even if you don't care that much about performance, you should turn on Logical Disk reporting at a minimum. The Logical Disk Object contains two counters, Free Megabytes and % Free Space, which will alert you in advance to potential out-of-disk-space conditions.

The diskperf measurement layer does add some code to the I/O Manager stack, so there is added latency associated with each I/O request that accesses a Physical Disk when measurement is turned on. However, the overhead of running the diskperf measurement layer, even twice on Windows 2000 machines, is trivial. In a benchmark environment where a 550-MHz, four-way Windows 2000 Server was handling 40,000 I/Os per second, enabling the diskperf measurements reduced its I/O capacity by about 5 percent, to 38,000 I/Os per second. In that environment, we estimated that the diskperf measurement layer added about 3 to 4 microseconds to the I/O Manager path length for each I/O operation. (On a faster processor, the delay is proportionally less.) For a disk I/O request that you would normally expect to require a minimum of 3 to 5 milliseconds, this additional latency is hardly noticeable.
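
As a quick sanity check on those figures, the following Python sketch reproduces the arithmetic. The 4-microsecond and 4-millisecond inputs are representative values drawn from the ranges quoted above, not additional measurements:

    # Throughput cost observed in the benchmark described above
    baseline_iops = 40_000     # I/Os per second with diskperf disabled
    measured_iops = 38_000     # I/Os per second with diskperf enabled
    reduction = (baseline_iops - measured_iops) / baseline_iops
    print(f"Throughput reduction: {reduction:.0%}")   # about 5%

    # Latency cost per I/O, compared to a typical physical disk access
    added_latency_us = 4       # estimated diskperf path length per I/O (microseconds)
    typical_disk_io_ms = 4     # a representative disk access time (milliseconds)
    overhead_fraction = (added_latency_us / 1000) / typical_disk_io_ms
    print(f"Added latency: {overhead_fraction:.1%} of a {typical_disk_io_ms} ms disk I/O")

In other words, the cost per request is roughly one-tenth of one percent of the time the disk access itself takes.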

Besides, if you do not have disk-performance statistics enabled and a performance problem occurs that happens to be disk-related (and many are), you won't be able to gather data about the problem because loading the diskperf measurement layer requires a reboot.

In my view, you can only justify turning off the disk performance stats in a benchmark environment where you are attempting to wring out the absolute highest performance level from your hardware configuration. Of course, you will need to have the diskperf measurements enabled initially to determine how to optimize the configuration in the first place. It is standard practice to disable disk performance monitoring prior to making your final measurement runs.


O'Reilly & Associates recently released (January 2002) Windows 2000 Performance Guide.