File Server Comparison:
Microsoft Windows NT Server 4.0
Windows NT Server 4.0 Has 4.2 Times Faster File-Server Performance and 11.2 Times Better Price/Performance Than Solaris 2.6 with TotalNET 5.2
Mindcraft tested the file-server performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 and SunSoft Solaris 2.6 on a Sun Ultra Enterprise 450. We tested the Server Message Block (SMB) file-sharing protocol on top of TCP/IP for both servers. The Sun server ran TotalNET 5.2 from Syntax to provide the SMB protocol. We used TotalNET because Sun ships an evaluation copy with its Ultra Enterprise 450. Table 1 shows the peak throughput measured for each system in megabytes per second (MB/S), the price of the systems tested, and the price/performance in dollars per MB/S.
Table 1: Results
Mindcraft tested these file servers with the Ziff-Davis Benchmark Operation's NetBench 5.01 benchmark using its standard disk mix test. The price/performance calculations are presented in the Price/Performance section.
The benchmark results show that peak file-server performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 is over 4.2 times that of Solaris 2.6 and TotalNET 5.2 on a Sun Ultra Enterprise 450. The Microsoft solution also offers 11.2 times better price/performance than the Sun system, making it a much more cost-effective solution.
Looking at the Results
The NetBench 5.01 benchmark measures file server performance. Its primary performance metric is throughput in bytes per second. The NetBench documentation defines throughput as "The number of bytes a client transferred to and from the server each second. NetBench measures throughput by dividing the number of bytes moved by the amount of time it took to move them. NetBench reports throughput as bytes per second." We report throughput in megabytes per second to make the charts easier to read.
We tested the Server Message Block (SMB) file sharing protocol on both the Windows NT Server and Solaris-TotalNET platforms using the standard NetBench NBDM_60.TST test suite. Figure 1 shows the throughput we measured plotted against the number of test systems that participated in each data point. Note that while the test suite attempted to use the same number of test systems for a given data point on both servers, the Sun server could support only up to 28 test systems; the rest stopped participating in the test because of file-sharing errors.
Figure 1: SMB File Server Throughput Performance(larger numbers are better)
You need to know how NetBench 5.01 works in order to understand what the NetBench throughput measurement means. NetBench is designed to stress a file server by using a number of test systems to read and write files on it. Specifically, a NetBench test suite is made up of a number of mixes. A mix is a particular configuration of NetBench parameters, including the number of test systems used to load the server. Typically, each mix increases the load on the server by increasing the number of test systems involved while keeping the rest of the parameters the same. The NBDM_60.TST test suite we used has the parameters shown in Table 2.
Table 2: NBDM_60.TST Parameters
With this background, let us look at the results in Figure 1 (the supporting details for this chart are in Overall NetBench Results). It is obvious that the throughput of the Compaq ProLiant 3000 was significantly higher than that of the Sun Ultra Enterprise 450 when four or more test systems were used. The ProLiant 3000 had over 4.2 times the throughput of the Ultra Enterprise 450 at each server's peak performance.
For the Windows NT Server/ProLiant platform, all of the test systems specified for each data point participated. The Solaris/Ultra Enterprise platform, however, had test systems stop participating at every data point beyond 20 systems, and it never had more than 28 test systems participating in any mix. This means that under moderate load, people are likely to see a significant number of errors if they use SMB to access files stored on a Solaris/Ultra Enterprise platform. These errors could result in lost data.
The readily measured factors that limit the performance of a file server are:

- Server CPU performance
- Memory usage
- Disk subsystem performance
- Network performance
- Operating system and file-server software performance

We'll examine each factor individually.
Performance Monitoring Tools
We ran the standard Windows NT performance-monitoring tool, perfmon, on the ProLiant 3000 during the tests to gather performance statistics. Perfmon allows you to select which performance statistics you want to monitor and lets you see them in a real-time chart as well as save them in a log file for later analysis. We logged the processor, memory, network interface, and disk subsystem performance counters for these tests.
To collect performance data on the Ultra Enterprise 450 during the test, we ran vmstat for memory-related statistics and mpstat for processor-related statistics. These programs output a fixed set of performance statistics that can be displayed or saved in a file.
Server CPU Performance
Each of the ProLiant 3000's CPUs was 45% utilized at peak performance. NetBench performance was not CPU-limited on the ProLiant 3000.
At peak performance, the Ultra Enterprise 450's CPUs spent 29% of their time in system and user mode and 70% of their time waiting, leaving them idle only 1% of the time. In other words, the CPUs were 99% utilized at peak performance. The large wait time indicates that something other than processing was causing the high CPU utilization. We'll look at other factors below to understand the CPU bottleneck.
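The utilization arithmetic above follows Solaris accounting, in which a CPU counts as utilized whenever it is not idle, so I/O wait time counts toward utilization. A sketch:

```python
# Solaris-style CPU accounting: utilization is everything except idle
# time, so time spent waiting (e.g., on I/O) counts as utilized.

def cpu_utilization(busy_pct: float, wait_pct: float) -> float:
    """Utilization = system+user (busy) time plus wait time."""
    return busy_pct + wait_pct

# Ultra Enterprise 450 at peak: 29% busy + 70% wait = 99% utilized.
```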
Memory Usage

Memory was not a performance limitation for either system, as shown by the monitoring programs. Throughout the test the ProLiant 3000 had less than 30 MB committed for all uses (except for file system cache) out of the 512 MB on the system. The Ultra Enterprise 450 used less than 256 MB of memory. Other factors contributed to limit the performance of both systems.
Disk Subsystem Performance
Both systems were configured with the operating system and paging/swap space on one disk and the NetBench data on another. The third disk in each system was not used for these tests because NetBench does not readily support splitting its test data across multiple disks.
The perfmon "% Disk Time" counter shows the percentage of elapsed time that the selected disk drive is busy servicing read or write requests. Table 3 shows the % Disk Time information for the two disks involved in NetBench testing. The high disk utilization on the D: drive clearly indicates that NetBench performance was disk-limited on the ProLiant 3000.
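The counter described above amounts to busy time divided by elapsed time; a minimal sketch:

```python
# The idea behind perfmon's "% Disk Time" counter: the percentage of
# elapsed time the disk spent servicing read or write requests.

def pct_disk_time(busy_s: float, elapsed_s: float) -> float:
    """Disk utilization as a percentage of elapsed time."""
    return 100.0 * busy_s / elapsed_s

# Example: a disk busy for 57 of every 60 seconds is 95% utilized.
```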
Table 3: Windows NT Disk Utilization
The Ultra Enterprise 450 disk holding the operating system and swap partitions averaged 4 operations per second. The disk holding the NetBench data averaged 115 operations per second and would not go above this average even as the test-system load increased. So the disk subsystem was a performance-limiting factor for the Ultra Enterprise 450 as well.
We believe that had we used a hardware-based RAID to store the NetBench data, the performance of the ProLiant 3000 would have been significantly higher because the effective disk access time would have been lower. We did not use a hardware RAID because we wanted to have a fair comparison with the Sun system and there was no hardware-based RAID available for it.
Network Performance

The network did not limit the performance of the ProLiant 3000. At peak performance, NetBench throughput was 64 Mbits/second. This represents heavy use of a single 100Base-TX network. Because the ProLiant 3000 had two such networks, there was sufficient bandwidth available to support higher throughput.
Similarly, the network did not limit the performance of the Ultra Enterprise 450 because its peak throughput was only 14.9 Mbits/second on both of its networks.
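The headroom argument above rests on a unit conversion (1 byte = 8 bits) against the stated link capacities. A sketch, where the 8 MB/s figure is simply 64 Mbits/s converted back to megabytes:

```python
# Bandwidth headroom check for a server with two 100Base-TX networks.

LINK_MBITS = 100  # 100Base-TX capacity per network, in Mbits/s

def mbits_per_s(mb_per_s: float) -> float:
    """Convert throughput from megabytes/s to megabits/s."""
    return mb_per_s * 8

# ProLiant 3000: 8 MB/s peak -> 64 Mbits/s against 200 Mbits/s of
# combined capacity, so ample bandwidth remained.
proliant_load = mbits_per_s(8.0)
headroom = 2 * LINK_MBITS - proliant_load
```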
Operating System and File Server Software Performance
SMB file sharing is integrated into the core of Windows NT Server 4.0, so the amount of time the CPUs spend in privileged mode is a good indicator of how much time goes to file-sharing support. During all mixes, privileged-mode execution accounted for 100% of the CPU utilization. Because the CPUs were only 45% utilized, the operating system and SMB file sharing were not limiting the performance of the ProLiant 3000.
The Ultra Enterprise 450 provides SMB file sharing via TotalNET. By analyzing CPU performance during the test, we can understand why TotalNET performed poorly. From the peak performance mix onward, the CPUs did 1,500 to 3,100 context switches per second and had to process 10,000 to 30,000 system calls per second. These are high figures and occur in large part because TotalNET is not integrated with Solaris 2.6. The extra work that Solaris needs to do to support TotalNET is an essential part of why the CPUs were idle less than 3% of the time after the second mix. Thus, both Solaris 2.6 and TotalNET combined to limit the performance of the Ultra Enterprise 450.
Conclusions

Windows NT Server 4.0 on a Compaq ProLiant 3000 offers high-performance file sharing. Comparably configured, its performance is over 4.2 times that of Solaris 2.6 with TotalNET on a Sun Ultra Enterprise 450. The price/performance of a Windows NT Server/ProLiant 3000 platform is 11.2 times better than a Solaris/TotalNET/Ultra Enterprise 450 platform.
Solaris 2.6 on an Ultra Enterprise 450 is performance-limited for SMB file sharing by a combination of its disk subsystem, operating system, and TotalNET. Solaris 2.6 on an Ultra Enterprise 450 with TotalNET is not appropriate for high-volume SMB file sharing.
Price/Performance

We calculated price/performance by dividing the street price of the servers and software tested by the peak throughput measured in megabytes per second. We obtained the street price of the ProLiant 3000 shown in Table 4 by requesting a quote from a Compaq value-added reseller (VAR). Likewise, the street price of the Ultra Enterprise 450 was obtained from a Sun VAR quote; we added to that the cost of the TotalNET software for 60 users based on a quote we received from Syntax, Inc.
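The calculation described above is a single division. The numbers below are illustrative placeholders only, not the quoted prices from Tables 4 and 5:

```python
# Price/performance as the report defines it: street price divided by
# peak throughput, giving dollars per MB/s. Lower is better.

def price_performance(street_price_usd: float, peak_mb_per_s: float) -> float:
    """Dollars per MB/s of peak throughput."""
    return street_price_usd / peak_mb_per_s

# Illustrative only: a $24,000 system peaking at 8 MB/s works out to
# $3,000 per MB/s of throughput.
```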
Table 4: Compaq ProLiant 3000 Pricing
Table 5: Sun Ultra Enterprise 450 Pricing
Configurations and Tuning
We configured the Compaq and Sun hardware comparably, and tuned the software on each system to maximize performance. Table 6 shows the configuration of the Compaq ProLiant 3000 we tested. Table 7 describes the Sun Ultra Enterprise 450 configuration we used, including the TotalNET configuration.
Table 6: Compaq ProLiant 3000 Configuration
Table 7: Sun Ultra Enterprise 450 Configuration
The Test Systems and Network Configuration
Mindcraft ran these tests using a total of 60 test systems configured as shown in Table 8.
Table 8: Test System Configurations
We balanced the networks by grouping the test systems so that one system on each hub would be added for each mix after the second mix, which uses four test systems. Figure 2 shows the test lab configuration.
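The grouping described above is effectively a round-robin assignment of test systems to hubs; a sketch, with the system and hub counts chosen for illustration:

```python
# Round-robin assignment of test systems to network hubs so that each
# successive system lands on the next hub, balancing load across the
# networks as mixes add systems.

def assign_to_hubs(num_systems: int, num_hubs: int) -> list[list[int]]:
    """Return a list per hub of the test-system numbers assigned to it."""
    hubs: list[list[int]] = [[] for _ in range(num_hubs)]
    for system in range(1, num_systems + 1):
        hubs[(system - 1) % num_hubs].append(system)
    return hubs

# With 8 systems and 4 hubs, each hub receives 2 systems.
```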
Figure 2: Test Lab Configuration
Mindcraft, Inc. conducted the performance tests described in this report between April 14 and May 1, 1998. Mindcraft used the NetBench 5.01 benchmark to measure performance with the standard NetBench NBDM_60.TST test suite.
Mindcraft certifies that the results reported herein represent the performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 as measured by NetBench 5.01. Mindcraft also certifies that the results reported herein represent the performance of Sun Solaris 2.6 and Syntax TotalNET on a Sun Ultra Enterprise 450 as measured by NetBench 5.01.
Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.
Compaq ProLiant 3000 Detailed Benchmark Results
Sun Ultra Enterprise 450 Detailed Benchmark Results
The information in this publication is subject to change without notice.
MINDCRAFT, INC. SHALL NOT BE LIABLE FOR ERRORS OR OMISSIONS CONTAINED HEREIN, NOR FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES RESULTING FROM THE FURNISHING, PERFORMANCE, OR USE OF THIS MATERIAL.
This publication does not constitute an endorsement of the product or products that were tested. This test is not a determination of product quality or correctness, nor does it ensure compliance with any federal, state or local requirements.
The Mindcraft tests discussed herein were performed without independent verification by Ziff-Davis and Ziff-Davis makes no representations or warranties as to the results of the tests.
Mindcraft is a registered trademark of Mindcraft, Inc.
Product and corporate names mentioned herein are trademarks and/or registered trademarks of their respective companies.
Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
For more information, contact us at: email@example.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324