Mindcraft Certified Performance Comparison Report

File Server Comparison:

Microsoft Windows NT Server 4.0
on a Compaq ProLiant 3000
and
Sun Solaris 2.6 and Syntax TotalNET 5.2
on a Sun Ultra Enterprise 450

Contents

Executive Summary
Performance Analysis
How NetBench 5.01 Works
What Are the Bottlenecks?
Price/Performance
Products Tested
Test Lab
Mindcraft Certification
Overall NetBench Results

Executive Summary

Windows NT Server 4.0 Delivers 4.2 Times the File-Server Performance of Solaris 2.6 with TotalNET 5.2, with 11.2 Times Better Price/Performance

Mindcraft tested the file-server performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 and SunSoft Solaris 2.6 on a Sun Ultra Enterprise 450. We tested the Server Message Block (SMB) file-sharing protocol on top of TCP/IP for both servers. The Sun server ran TotalNET 5.2 from Syntax to provide the SMB protocol. We used TotalNET because Sun ships an evaluation copy with its Ultra Enterprise 450. Table 1 shows the peak throughput measured for each system in megabytes per second (MB/S), the price of the systems tested, and the price/performance in dollars per MB/S.

Table 1: Result Summary
(larger numbers are better for Peak Throughput; smaller numbers are better for the others)

File Server                                          Peak Throughput   System Price   Price/Performance
Windows NT Server 4.0 on a Compaq ProLiant 3000      8.0 MB/S          $13,900        $1,737.50/MB/S
(2 x 333 MHz Pentium II, 512 MB)
TotalNET/Solaris 2.6 on a Sun Ultra Enterprise 450   1.9 MB/S          $37,047        $19,498.42/MB/S
(2 x 296 MHz UltraSPARC, 512 MB)

Mindcraft tested these file servers with the Ziff-Davis Benchmark Operation NetBench 5.01 benchmark using its standard disk mix test. The price/performance calculations are presented in the Price/Performance section.

The benchmark results show that peak file-server performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 is over 4.2 times that of Solaris 2.6 and TotalNET 5.2 on a Sun Ultra Enterprise 450. The Microsoft solution also offers 11.2 times better price/performance than the Sun system, making it a much more cost-effective solution.

Performance Analysis

Looking at the Results

The NetBench 5.01 benchmark measures file server performance. Its primary performance metric is throughput in bytes per second. The NetBench documentation defines throughput as "The number of bytes a client transferred to and from the server each second. NetBench measures throughput by dividing the number of bytes moved by the amount of time it took to move them. NetBench reports throughput as bytes per second." We report throughput in megabytes per second to make the charts easier to read.
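The published figures are consistent with the binary convention (1 MB = 1,048,576 bytes, and likewise for megabits). As a worked example of the conversion, here is a minimal sketch in Python (our illustration, not part of NetBench), applied to the ProLiant 3000's peak mix:

    # Convert NetBench's raw bytes/second into the units used in this report.
    # Assumes the binary convention (1 MB = 2**20 bytes), which matches the
    # published tables.
    BYTES_PER_MB = 2 ** 20
    BITS_PER_MBIT = 2 ** 20

    def to_report_units(bytes_per_sec):
        """Return (Mbits/sec, MB/sec) as shown in the results tables."""
        mbits = bytes_per_sec * 8 / BITS_PER_MBIT
        mbytes = bytes_per_sec / BYTES_PER_MB
        return round(mbits, 1), round(mbytes, 1)

    # Peak mix for the ProLiant 3000 (see Overall NetBench Results):
    print(to_report_units(8398371))   # -> (64.1, 8.0)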

We tested the Server Message Block (SMB) file sharing protocol on both the Windows NT Server and Solaris-TotalNET platforms using the standard NetBench NBDM_60.TST test suite. Figure 1 shows the throughput we measured plotted against the number of test systems that participated in each data point. Note that although the test suite attempted to use the same number of test systems for a given data point on both servers, the Sun server could support no more than 28 test systems; the rest stopped participating in the test because of file-sharing errors.

Figure 1: SMB File Server Throughput Performance (larger numbers are better)


How NetBench 5.01 Works

You need to know how NetBench 5.01 works in order to understand what the NetBench throughput measurement means. NetBench is designed to stress a file server by using a number of test systems to read and write files on it. Specifically, a NetBench test suite is made up of a number of mixes. A mix is a particular configuration of NetBench parameters, including the number of test systems used to load the server. Typically, each mix increases the load on the server by increasing the number of test systems involved while keeping the rest of the parameters the same. The NBDM_60.TST test suite we used has the parameters shown in Table 2.

Table 2: NBDM_60.TST Parameters

Parameter          Value       Comment
Ramp Up            30 seconds  Time at the beginning of each mix during which NetBench excludes file operations from its measurements.
Ramp Down          30 seconds  Time at the end of each mix during which NetBench excludes file operations from its measurements.
Length             11 minutes  Includes both Ramp Up and Ramp Down time, so data is collected over a 10-minute period.
Delay              5 seconds   How long a test system waits before starting a test after the controller tells it to start. Each test system picks a random number of seconds less than or equal to this value, staggering the start times of all test systems.
Think Time         2 seconds   How long each test system waits before performing the next piece of work.
Workspace          20 MB       The size of the data files used by a test system; each test system has its own workspace.
Save Workspace     Yes         The last mix sets this parameter to No to clean up after the test is over.
Number of Mixes    16          Each mix tests the server with a different number of test systems: Mix 1 uses 1 system, Mix 2 uses 4 systems, and each subsequent mix adds 4 more.
Number of Clients  60          The maximum number of test systems available to any mix. The actual number that participates in a mix depends on the number specified in the mix definition and on whether an error removed a test system from a particular mix.
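The mix schedule implied by Table 2 can be generated mechanically. Here is a short Python sketch (our illustration, not NetBench code) listing the number of test systems requested for each of the 16 mixes:

    # Test systems requested per mix in NBDM_60.TST: mix 1 uses 1 system,
    # mix 2 uses 4, and each subsequent mix adds 4 more, up to 60.
    def systems_for_mix(mix):
        return 1 if mix == 1 else 4 * (mix - 1)

    schedule = [systems_for_mix(m) for m in range(1, 17)]
    print(schedule)   # [1, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60]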

NetBench does a good job of testing a file server under heavy load. To do this, each NetBench test system (called a client in the NetBench documentation) executes a script that specifies a file access pattern. As the number of test systems is increased, the load on a server is increased. You need to be careful, however, not to correlate the number of NetBench test systems participating in a test mix with the number of simultaneous users that a file server can support. This is because each NetBench test system represents more of a load than a single user would generate. NetBench was designed to behave this way in order to do benchmarking with as few test systems as possible while still generating large enough loads on a server to saturate it.

When comparing NetBench results, be sure to look at the configurations of the test systems because they have a significant effect on the measurements that NetBench makes. For example, the test system operating system may cache some or all of the workspace in its own RAM causing the NetBench test program not to go over the network to the file server as frequently as expected. This can significantly increase the reported throughput. In some cases, we’ve seen reported results that are 75% above the available network bandwidth. If the same test systems and network components are used to test multiple servers with the same test suite configuration, you can make a fair comparison of the servers.
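One quick plausibility check follows from this observation: if a reported throughput exceeds what the network could physically carry, some of the traffic must have been served from the test systems' own caches rather than the file server. A minimal sketch of that check (our illustration; the capacity figure is whatever aggregate link bandwidth applies to a given lab):

    # Flag NetBench results that exceed the aggregate network capacity.
    # Throughput above wire speed can only come from client-side caching.
    def caching_suspected(reported_mbits, link_capacity_mbits):
        return reported_mbits > link_capacity_mbits

    # Example from the text: a result reported 75% above a 100 Mbit/s network.
    print(caching_suspected(175.0, 100.0))   # True -> client caching inflated it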

With this background, let us look at the results in Figure 1 (the supporting details for this chart are in Overall NetBench Results). It is obvious that the throughput of the Compaq ProLiant 3000 was significantly higher than that of the Sun Ultra Enterprise 450 when four or more test systems were used. The ProLiant 3000 had over 4.2 times the throughput of the Ultra Enterprise 450 at each server’s peak performance.

For the Windows NT Server/ProLiant platform, all of the test systems specified for each data point participated. On the Solaris/Ultra Enterprise platform, however, test systems dropped out of every mix beyond the 20-system mix, and no mix ever had more than 28 test systems participating. This means that under moderate load, people are likely to see a significant number of errors if they use SMB to access files stored on a Solaris/Ultra Enterprise platform. These errors could result in lost data.

What Are the Bottlenecks?

The readily measured factors that limit performance of a file server are:

  1. The server’s CPU and memory;
  2. The disk subsystem;
  3. The network; and
  4. The operating system and file server software.

We’ll examine each factor individually.

Performance Monitoring Tools

We ran the standard Windows NT performance-monitoring tool, perfmon, on the ProLiant 3000 during the tests to gather performance statistics. Perfmon allows you to select which performance statistics you want to monitor and lets you see them in a real-time chart as well as save them in a log file for later analysis. We logged the processor, memory, network interface, and disk subsystem performance counters for these tests.

To collect performance data on the Ultra Enterprise 450 during the test, we ran vmstat for memory-related statistics and mpstat for processor-related statistics. These programs output a fixed set of performance statistics that can be displayed or saved in a file.

Server CPU Performance

Each of the ProLiant 3000’s CPUs was 45% utilized at peak performance. NetBench performance was not CPU-limited on the ProLiant 3000.

At peak performance, the Ultra Enterprise 450 CPUs spent 29% of their time executing system and user code, spent 70% of their time waiting, and were idle only 1% of the time; in other words, they were 99% utilized. The large wait time indicates that something other than processing was keeping the CPUs busy. We look at the other factors below to understand the CPU bottleneck.
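The utilization figure follows directly from the processor statistics mpstat reports (user, system, wait, and idle percentages). A minimal sketch of the arithmetic, with the Ultra Enterprise 450's peak-mix numbers plugged in:

    # CPU utilization derived from Solaris mpstat-style percentages: a CPU
    # counts as utilized whenever it is not idle, including time spent
    # waiting on I/O, which is what dominated on the Ultra Enterprise 450.
    def utilization(usr_plus_sys, wait, idle):
        assert abs(usr_plus_sys + wait + idle - 100.0) < 0.5  # sanity check
        return 100.0 - idle

    # Peak-performance mix on the Ultra Enterprise 450:
    print(utilization(usr_plus_sys=29.0, wait=70.0, idle=1.0))   # 99.0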

As the monitoring programs showed, memory was not a performance limitation for either system. Throughout the test, the ProLiant 3000 had less than 30 MB committed for all uses (except the file system cache) out of the 512 MB on the system, and the Ultra Enterprise 450 used less than 256 MB of memory. Other factors limited the performance of both systems.

Disk Subsystem Performance

Both systems were configured with the operating system and paging/swap space on one disk and the NetBench data on another. The third disk in each system was not used for these tests because NetBench does not readily support splitting its test data across multiple disks.

The perfmon "% Disk Time" counter shows the percentage of elapsed time that the selected disk drive is busy servicing read or write requests. Table 3 shows the % Disk Time information for the two disks involved in NetBench testing. The high disk utilization on the D: drive clearly indicates that NetBench performance was disk-limited on the ProLiant 3000.

Table 3: Windows NT Disk Utilization

Disk                 % Disk Time  Comment
C: (OS and paging)   0.25%        Almost no disk activity throughout the test
D: (NetBench data)   100.00%      From the third mix on

The Ultra Enterprise 450 disk with the operating system and swap partitions averaged 4 operations per second. The disk with the NetBench data averaged 115 operations per second once the load ramped up, and it would not go above this average even as the test-system load increased. So the disk subsystem was a performance-limiting factor for the Ultra Enterprise 450 as well.
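A rough back-of-the-envelope check (our arithmetic, under the simplifying assumption that most served bytes had to touch the data disk rather than a server RAM cache) shows how the 115-operations/second ceiling lines up with the measured peak throughput:

    # Estimate how the disk-operation ceiling maps onto measured throughput.
    # Assumes, simplistically, that most bytes served over SMB had to pass
    # through the NetBench data disk.
    peak_bytes_per_sec = 1950885   # Ultra Enterprise 450 peak (16-system mix)
    disk_ops_per_sec = 115         # observed ceiling on the NetBench data disk

    bytes_per_op = peak_bytes_per_sec / disk_ops_per_sec
    print(f"{bytes_per_op / 1024:.1f} KB per disk operation")   # ~16.6 KB/op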

We believe that had we used a hardware-based RAID to store the NetBench data, the performance of the ProLiant 3000 would have been significantly higher because the effective disk access time would have been lower. We did not use a hardware RAID because we wanted to have a fair comparison with the Sun system and there was no hardware-based RAID available for it.

Network Performance

The network did not limit the performance of the ProLiant 3000. At peak performance, NetBench throughput was 64 Mbits/second. This represents heavy use of a single 100Base-TX network. Because the ProLiant 3000 had two such networks, there was sufficient bandwidth available to support higher throughput.

Similarly, the network did not limit the performance of the Ultra Enterprise 450 because its peak throughput was only 14.9 Mbits/second on both of its networks.
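To make the headroom explicit, here is the same calculation for both servers (our sketch; capacities are the nominal 100 Mbits/second per 100Base-TX link):

    # Network utilization at each server's peak, against nominal link capacity.
    def utilization_pct(peak_mbits, links, mbits_per_link=100.0):
        return 100.0 * peak_mbits / (links * mbits_per_link)

    print(f"ProLiant 3000: {utilization_pct(64.0, links=2):.0f}% of 200 Mbits/second")          # 32%
    print(f"Ultra Enterprise 450: {utilization_pct(14.9, links=2):.1f}% of 200 Mbits/second")   # 7.5%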

Operating System and File Server Software Performance

SMB file sharing is integrated into the core of Windows NT Server 4.0, so the amount of time the CPUs spend in privileged mode is a good indicator of how much time goes to file-sharing support. During all mixes, privileged-mode execution accounted for 100% of the CPU utilization. Because the CPUs were only 45% utilized, the operating system and SMB file sharing were not limiting the performance of the ProLiant 3000.

The Ultra Enterprise 450 provides SMB file sharing via TotalNET. By analyzing CPU performance during the test, we can understand why TotalNET performed poorly. From the peak performance mix onward, the CPUs did 1,500 to 3,100 context switches per second and had to process 10,000 to 30,000 system calls per second. These are high figures and occur in large part because TotalNET is not integrated with Solaris 2.6. The extra work that Solaris needs to do to support TotalNET is an essential part of why the CPUs were idle less than 3% of the time after the second mix. Thus, both Solaris 2.6 and TotalNET combined to limit the performance of the Ultra Enterprise 450.

Conclusion

Windows NT Server 4.0 on a Compaq ProLiant 3000 offers high-performance file sharing. Comparably configured, its performance is over 4.2 times that of Solaris 2.6 with TotalNET on a Sun Ultra Enterprise 450. The price/performance of a Windows NT Server/ProLiant 3000 platform is 11.2 times better than a Solaris/TotalNET/Ultra Enterprise 450 platform.

Solaris 2.6 on an Ultra Enterprise 450 is performance-limited for SMB file sharing by a combination of its disk subsystem, operating system, and TotalNET. Solaris 2.6 on an Ultra Enterprise 450 with TotalNET is not appropriate for high-volume SMB file sharing.

Price/Performance

We calculated price/performance by dividing the street price of the servers and software tested by the peak throughput measured in megabytes per second. We obtained the street price of the ProLiant 3000 shown in Table 4 by requesting a quote from a Compaq value-added reseller (VAR). Likewise, the street price of the Ultra Enterprise 450 was obtained from a Sun VAR quote; we added to that the cost of the TotalNET software for 60 users based on a quote we received from Syntax, Inc.
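The arithmetic behind Table 1's price/performance column is a single division; here is a minimal sketch using the totals from Tables 4 and 5 below:

    # Price/performance = street price divided by peak NetBench throughput.
    systems = {
        "Windows NT Server 4.0 / ProLiant 3000": (13900, 8.0),        # ($, MB/S)
        "TotalNET/Solaris 2.6 / Ultra Enterprise 450": (37047, 1.9),
    }

    for name, (price, peak_mb_per_sec) in systems.items():
        print(f"{name}: ${price / peak_mb_per_sec:,.2f}/MB/S")
    # -> $1,737.50/MB/S and $19,498.42/MB/S; the ratio of the two is the
    #    11.2x figure quoted in the Executive Summary.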

Table 4: Compaq ProLiant 3000 Pricing

Feature           Configuration                              Price
Base System       1 x 333 MHz Pentium II; 64 MB RAM          $4,599
Additional CPU    1 x 333 MHz Pentium II                     $1,299
Memory            512 MB EDO ECC Memory Kit                  $2,899
Disk              3 x 4.3 GB Hot-Pluggable SCSI              $2,097
Network           2 x Netelligent 10/100 TX PCI UTP          $178
Operating System  Windows NT Server 4.0, 60-user license     $2,299
Monitor           17" Compaq V75 Monitor                     $529
Total                                                        $13,900


Table 5: Sun Ultra Enterprise 450 Pricing

Feature           Configuration                                                     Price
Base System       A25-AA-UEC2-9S-512CD-5214x3-C: 2 x 300 MHz UltraSPARC II;         $29,937
                  512 MB RAM; 3 x 4.2 GB UltraSCSI disks; 1 x 10/100Base-TX NIC;
                  Solaris 2.6 server license for unlimited users
Network           1 x SunSwift 10/100Base-TX                                        $739
Operating System  SOLS-C Solaris 2.6 Server CD-ROM Media Kit                        $75
Monitor           X3660/7103A: 17" monitor, PGX Graphics PCI card, and keyboard     $1,077
SMB File Server   Syntax TotalNET 5.2 (60 users)                                    $5,219
Total                                                                               $37,047

Products Tested

Configurations and Tuning

We configured the Compaq and Sun hardware comparably, and tuned the software on each system to maximize performance. Table 6 shows the configuration of the Compaq ProLiant 3000 we tested. Table 7 describes the Sun Ultra Enterprise 450 configuration we used, including the TotalNET configuration.

Table 6: Compaq ProLiant 3000 Configuration

Feature           Configuration
CPU               2 x 333 MHz Pentium II; cache: L1: 32 KB (16 KB I + 16 KB D), L2: 512 KB
RAM               512 MB EDO ECC
Disk              3 x 4.3 GB Ultra Wide SCSI-3; OS and paging file on the C: drive; NetBench data files on the D: drive; E: drive not used for these tests. Both on-board disk controllers were used, with the D: drive on one controller and the C: and E: drives on the other.
Networks          2 x Netelligent 10/100 TX PCI UTP
Operating System  Windows NT Server 4.0 with Service Pack 3
    Tuning        Performance set to maximize file sharing; foreground application boost set to NONE; ran chkdsk /l:65536 on all disks to increase the log size.
File Server       Windows NT Server standard SMB protocol
    Tuning        None

Table 7: Sun Ultra Enterprise 450 Configuration

Feature           Configuration
CPU               2 x 296 MHz UltraSPARC; cache: L1: 32 KB (16 KB I + 16 KB D), L2: 2 MB
RAM               512 MB ECC
Disk              3 x 4.2 GB; OS and swap on drive 0; NetBench data files on drive 1; drive 2 not used for this test
Networks          2 total: 1 built-in Fast Ethernet, 1 SunSwift Fast Ethernet card
Operating System  Solaris 2.6
    Tuning        tcp_close_wait_interval = 60000; tcp_conn_hash_size = 262144; from /etc/system: rlim_fd_max = 8192, tcp_conn_hash_size = 262144
File Server       Syntax TotalNET version 5.2 with the tas5.2sh-04 patch
    Tuning        None

Test Lab

The Test Systems and Network Configuration

Mindcraft ran these tests using a total of 60 test systems configured as shown in Table 8.

Table 8: Test System Configurations

Feature           Configuration
CPU               6 x 75 MHz Pentium, 46 x 90 MHz Pentium, and 8 x 100 MHz Pentium; all in the Compaq ProLinea product line
RAM               75 MHz Pentiums: 16 MB; 90 MHz Pentiums: 16 MB; 100 MHz Pentiums: 32 MB
Disk              2 GB IDE; standard Windows 95 driver
Network           60 x Intel Pro/100B LAN Adapter (100Base-TX) using e100b.sys driver version 2.02. 4 x Compaq Netelligent 1124 hubs, each connected to a Compaq Netelligent 5506 switch, with 15 test systems per hub; the server was connected to the switch. Network software: Windows 95 TCP/IP driver.
Operating System  Windows 95, version 4.00.950

We balanced the networks by grouping the test systems so that one system on each hub would be added for each mix after the second mix, which uses four test systems. Figure 2 shows the test lab configuration.
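The grouping rule is simple enough to state programmatically. This sketch (our illustration of the assignment, not test-lab software) spreads each mix's test systems evenly across the four hubs:

    # Spread a mix's test systems evenly across the 4 hubs (15 systems each).
    HUBS = 4

    def systems_per_hub(mix_systems):
        base, extra = divmod(mix_systems, HUBS)
        return [base + (1 if h < extra else 0) for h in range(HUBS)]

    print(systems_per_hub(4))    # [1, 1, 1, 1]      (mix 2)
    print(systems_per_hub(8))    # [2, 2, 2, 2]      (mix 3)
    print(systems_per_hub(60))   # [15, 15, 15, 15]  (mix 16)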

Figure 2: Test Lab Configuration


Mindcraft Certification

Mindcraft, Inc. conducted the performance tests described in this report between April 14 and May 1, 1998. Mindcraft used the NetBench 5.01 benchmark to measure performance with the standard NetBench NBDM_60.TST test suite.

Mindcraft certifies that the results reported herein represent the performance of Microsoft Windows NT Server 4.0 on a Compaq ProLiant 3000 as measured by NetBench 5.01. Mindcraft also certifies that the results reported herein represent the performance of Sun Solaris 2.6 and Syntax TotalNET on a Sun Ultra Enterprise 450 as measured by NetBench 5.01.

Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Overall NetBench Results

Compaq ProLiant 3000 Detailed Benchmark Results


Mix Name     Mix ID   Systems        Total Throughput   Total Throughput   Total Throughput
                      Participating  (bytes/sec)        (Mbits/sec)        (MB/sec)
1 System        1          1             490,618              3.7               0.5
4 Systems       2          4           1,933,744             14.8               1.8
8 Systems       3          8           3,643,368             27.8               3.5
12 Systems      4         12           5,244,448             40.0               5.0
16 Systems      5         16           6,522,468             49.8               6.2
20 Systems      6         20           7,024,739             53.6               6.7
24 Systems      7         24           8,153,727             62.2               7.8
28 Systems      8         28           8,398,371             64.1               8.0
32 Systems      9         32           7,516,432             57.3               7.2
36 Systems     10         36           7,012,446             53.5               6.7
40 Systems     11         40           5,978,437             45.6               5.7
44 Systems     12         44           6,029,609             46.0               5.8
48 Systems     13         48           5,958,706             45.5               5.7
52 Systems     14         52           5,782,095             44.1               5.5
56 Systems     15         56           5,634,835             43.0               5.4
60 Systems     16         60           5,301,541             40.4               5.1

Sun Ultra Enterprise 450 Detailed Benchmark Results



Mix Name     Mix ID   Systems        Total Throughput   Total Throughput   Total Throughput
                      Participating  (bytes/sec)        (Mbits/sec)        (MB/sec)
1 System        1          1             448,335              3.4               0.4
4 Systems       2          4           1,300,301              9.9               1.2
8 Systems       3          8           1,643,252             12.5               1.6
12 Systems      4         12           1,837,777             14.0               1.8
16 Systems      5         16           1,950,885             14.9               1.9
20 Systems      6         20           1,846,562             14.1               1.8
24 Systems      7         23           1,877,313             14.3               1.8
28 Systems      8         24           1,840,275             14.0               1.8
32 Systems      9         23           1,815,568             13.9               1.7
36 Systems     10         23           1,900,522             14.5               1.8
40 Systems     11         24           1,837,014             14.0               1.8
44 Systems     12         25           1,899,205             14.5               1.8
48 Systems     13         27           1,796,524             13.7               1.7
52 Systems     14         26           1,862,259             14.2               1.8
56 Systems     15         28           1,857,302             14.2               1.8
60 Systems     16         27           1,844,181             14.1               1.8


NOTICE:

The information in this publication is subject to change without notice.

MINDCRAFT, INC. SHALL NOT BE LIABLE FOR ERRORS OR OMISSIONS CONTAINED HEREIN, NOR FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES RESULTING FROM THE FURNISHING, PERFORMANCE, OR USE OF THIS MATERIAL.

This publication does not constitute an endorsement of the product or products that were tested. This test is not a determination of product quality or correctness, nor does it ensure compliance with any federal, state or local requirements.

The Mindcraft tests discussed herein were performed without independent verification by Ziff-Davis and Ziff-Davis makes no representations or warranties as to the results of the tests.

Mindcraft is a registered trademark of Mindcraft, Inc.

Product and corporate names mentioned herein are trademarks and/or registered trademarks of their respective companies.


Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324