Top Six FAQs on Windows 2000 Disk Performance

by Mark Friedman, author of Windows 2000 Performance Guide
01/18/2002

I respond to a lot of questions about Windows 2000 disk performance. In this article I've provided answers to the questions I hear most often from experienced computer performance professionals. What I've found is that when these professionals first start to look seriously at the disk performance data available on a Windows NT/2000/XP machine, they usually ask one or more of the six questions raised here. These questions arise because something doesn't seem right when they look closely at the data. I have tried to answer them succinctly, and in such a way that a person who already knows their way around disk performance issues can make immediate sense of the Windows 2000 environment.

1. The Physical Disk % Disk Time counters look wrong. What gives?

Often, the % Disk Read Time and % Disk Write Time counters do not add up to % Disk Time. That is because the % Disk Time counters are capped in the System Monitor at 100 percent, on the theory that it would be confusing to report disk utilization greater than 100 percent. In fact, the % Disk Time counters do not actually measure disk utilization at all, and the Explain text that implies they do is very misleading.

What the % Disk Time counters actually do measure is a little complicated to explain.

The % Disk Time counter is not measured directly. It is a value derived by the diskperf filter driver, which provides the disk performance statistics. diskperf is a layer of software sitting in the disk driver stack. As I/O Request Packets (IRPs) pass through this layer, diskperf keeps track of when each I/O starts and when it finishes. On the way to the device, diskperf records a timestamp for the IRP. On the way back from the device, the completion time is recorded. The difference is the duration of the I/O request. Averaged over the collection interval, this becomes Avg. Disk sec/Transfer, a direct measure of disk response time from the point of view of the device driver. diskperf also maintains byte counts and separate counters for reads and writes, at both the Logical and Physical Disk level. (This allows Avg. Disk sec/Transfer to be broken out into reads and writes.)

The Avg. Disk sec/Transfer measurement reported is based on the complete roundtrip time of a request. Strictly speaking, it is a direct measure of disk response time, which means it includes queue time. Queue time is the time spent waiting for the device because it is busy with another request, or waiting for the SCSI bus to the device because the bus is busy.

% Disk Time is a value diskperf derives by summing the roundtrip times of all IRPs completed during the interval and dividing by the interval duration, which is equivalent to multiplying Avg. Disk sec/Transfer by Disk Transfers/sec, or essentially:

% Disk Time = Avg Disk sec/Transfer * Disk Transfers/sec

which is a calculation (subject to capping when it exceeds 100 percent) that you can verify easily enough for yourself.
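To see it concretely, here is a minimal sketch in Python of that verification, including the capping. The helper function and the counter values plugged into it are made up for illustration; they are not taken from the article.

def pct_disk_time(avg_sec_per_transfer, transfers_per_sec):
    # Reconstruct % Disk Time from the two measured counters:
    # response time * I/O rate, expressed as a percentage and capped at 100.
    raw = avg_sec_per_transfer * transfers_per_sec * 100.0
    return min(raw, 100.0)

# 8 ms average roundtrip time at 120 transfers/sec
print(pct_disk_time(0.008, 120))   # 96.0

# With heavy queuing the raw product exceeds 100 percent and gets capped
print(pct_disk_time(0.020, 150))   # 100.0 (the uncapped product is 300)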

Because the Avg. Disk sec/Transfer that diskperf measures includes disk queuing, % Disk Time can grow greater than 100 percent if there is significant disk queuing (at either the Physical or Logical Disk level). The Explain text in the official documentation suggests that this product of Avg. Disk sec/Transfer and Disk Transfers/sec measures % Disk busy. If (and this is a big "if") the IRP roundtrip time represented only service time, then the % Disk Time calculation would correspond to disk utilization. But Avg. Disk sec/Transfer includes queue time, so the formula actually calculates something entirely different.

The formula used in the calculation to derive % Disk Time corresponds to Little's Law, a well-known equivalence relation that gives the number of requests in the system as a function of the arrival rate and the response time (the time a request spends in the system). According to Little's Law, Avg. Disk sec/Transfer times Disk Transfers/sec properly yields the average number of requests in the system, more formally known as the average queue length. The average queue length calculated in this fashion includes both IRPs queued for service and those actually in service.

A direct measure of disk response time such as Avg. Disk sec/Transfer is a useful metric. Since people tend to buy disk hardware based on a service-time expectation, it is unfortunate that there is no way to break out the disk service time and the queue time separately in NT 4.0. (The situation is greatly improved in Windows 2000, however.) Given the way diskperf hooks into the I/O driver stack, the software RAID functions associated with Ftdisk, and the SCSI disks that support command tag queuing, one could argue this is the only feasible way to do things in the Windows 2000 architecture. The problem of interpretation arises because of the misleading Explain text and the arbitrary and surprising use of capping.

Microsoft's fix to the problem, beginning in NT 4.0, is a different version of the counter that is not capped: Avg. Disk Queue Length. Basically, it is the same field as % Disk Time, without the capping and without being printed as a percentage.

For example, if % Disk Time is 78.3 percent, Avg. Disk Queue Length is 0.783. When % Disk Time is equal to 100 percent, then Avg. Disk Queue Length shows the actual value before capping. We recently had a customer reporting values like 2.63 in this field. That's a busy disk! The interpretation of this counter is the average number of disk requests that are active and queued, that is, the average queue length.
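A quick sketch of the relationship between the two counters (again with made-up numbers) shows how a capped % Disk Time of 100 percent can hide a much larger underlying value:

avg_sec_per_transfer = 0.0175   # hypothetical 17.5 ms average roundtrip time
transfers_per_sec = 150.0       # hypothetical I/O rate

avg_disk_queue_length = avg_sec_per_transfer * transfers_per_sec
pct_disk_time = min(avg_disk_queue_length * 100.0, 100.0)

print(avg_disk_queue_length)   # about 2.63, the uncapped average queue length
print(pct_disk_time)           # 100.0, the capped % Disk Time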

2. I see a value of 2.63 in the Avg. Disk Queue Length counter field. How should I interpret this value?

The Avg. Disk Queue Length counter is derived from the product of Avg. Disk sec/Transfer multiplied by Disk Transfers/sec, which is the average response time of the device times the I/O rate. Again, this corresponds to a well-known theorem of Queuing Theory called Little's Law, which states:

N = A * Sr

where N is the number of outstanding requests in the system, A is the arrival rate of requests, and Sr is the response time. So the Avg. Disk Queue Length counter is an estimate of the number of outstanding requests to the (Logical or Physical) disk. This includes any requests that are currently in service at the device, plus any requests that are waiting for service. If requests are currently waiting for the device inside the SCSI device driver layer of software below the diskperf filter driver, the Current Disk Queue Length counter will have a value greater than 0. If requests are queued in the hardware, which is usual for SCSI disks and RAID controllers, the Current Disk Queue Length counter will show a value of 0, even though requests are queued.
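Plugging hypothetical numbers into Little's Law makes the interpretation concrete; the arrival rate and response time below are invented for illustration:

arrival_rate = 200.0      # A: I/O requests per second (hypothetical)
response_time = 0.012     # Sr: 12 ms average response time, queue time included

n_outstanding = arrival_rate * response_time
print(n_outstanding)      # 2.4 requests outstanding on average

# This N counts requests in service plus requests waiting anywhere along the
# path. Current Disk Queue Length can still read 0 if the waiting happens in
# a SCSI adapter or RAID controller rather than in the driver software.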

Since the Avg. Disk Queue Length counter value is a derived value and not a direct measurement, you do need to be careful how you interpret it. Little's Law is a very general result that is often used in the field of computer measurement to derive a third result when the other two values are measured directly. However, Little's Law does require an equilibrium assumption in order for it to be valid. The equilibrium assumption is that the arrival rate equals the completion rate over the measurement interval. Otherwise, the calculation is meaningless. In practice, this means you should ignore the Avg. Disk Queue Length counter value for any interval where the Current Disk Queue Length counter is not equal to the value of Current Disk Queue Length for the previous measurement interval.


Suppose, for example, the Avg. Disk Queue Length counter reads 10.3, and the Current Disk Queue Length counter shows four requests in the disk queue at the end of the measurement interval. If the previous value of Current Disk Queue Length was 0, the equilibrium assumption necessary for Little's Law does not hold. Since the number of arrivals is evidently greater than the number of completions during the interval, there is no valid interpretation for the value in the Avg. Disk Queue Length counter, and you should ignore the counter value. However, if both the present measurement of the Current Disk Queue Length counter and the previous value are equal, then it is safe to interpret the Avg. Disk Queue Length counter as the average number of outstanding I/O requests to the disk over the interval, including both requests currently in service and requests queued for service.
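As a sketch, that sanity check might look like this when applied to a series of interval samples; the sample values here are invented:

samples = [
    # (Current Disk Queue Length at end of interval, Avg. Disk Queue Length)
    (0, 0.42),
    (4, 10.30),   # queue depth grew from 0 to 4: not in equilibrium
    (4, 3.80),    # queue depth unchanged: equilibrium assumption holds
]

prev_qlen = None
for current_qlen, avg_qlen in samples:
    if prev_qlen is not None and current_qlen == prev_qlen:
        print("Avg. Disk Queue Length %.2f is interpretable" % avg_qlen)
    else:
        print("Ignore Avg. Disk Queue Length %.2f for this interval" % avg_qlen)
    prev_qlen = current_qlen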

You also need to understand the ramifications of having a total disk roundtrip time measurement instead of a simple disk service time measure. Assuming M/M/1, a disk at 50 percent busy has one request in the system on average (in service or waiting), and disk response time is 2 times service time. This means that at 50 percent busy (assuming M/M/1 holds) an Avg. Disk Queue Length value of 1.00 is expected. That means that any disk with an Avg. Disk Queue Length value greater than 0.70 probably has a substantial amount of queue time associated with it. The exception, of course, is when M/M/1 does not hold, such as during a backup operation when there is only a single user of the disk. A single user of the disk can drive a disk to nearly 100 percent utilization without a queue!
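For reference, here is a small sketch of the M/M/1 arithmetic behind those numbers:

def mm1_requests_in_system(utilization):
    # Average number of requests in the system (in service plus waiting)
    # for an M/M/1 queue at the given utilization.
    return utilization / (1.0 - utilization)

for rho in (0.3, 0.5, 0.7):
    print("%.0f%% busy -> expected Avg. Disk Queue Length %.2f"
          % (rho * 100, mm1_requests_in_system(rho)))
# 30% busy -> 0.43, 50% busy -> 1.00, 70% busy -> 2.33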
