Mindcraft Certified Performance Comparison Report

Compaq ProLiant 5000
Microsoft Windows NT Server 4.0
Netscape Enterprise Server 2.0


Executive Summary
Mindcraft's Certification
Performance Analysis
Test Procedures
SUT Configuration
Test Lab
App1: WebStone Changes
App2: O/S Configuration

Executive Summary

This Certified Performance Report is for the Compaq ProLiant 5000 running the Windows NT Server 4.0 operating system and the Netscape Enterprise Server 2.0. It covers one, two, and four processor configurations of the ProLiant 5000.

The WebStone 2.0.1 benchmark was used to test the system. The maximum number of connections per second attained for the system is shown in Figure 1 and the peak throughput in Mbits per second is shown in Figure 2. A table summarizing the peak performance follows Figure 2.

Peak Performance - Connections

Peak Performance - Throughput

Peak Performance Data
Compaq ProLiant 5000
Windows NT Server 4.0
Netscape Enterprise Server 2.0
WebStone 2.0.1
Configuration          Connections/s               Throughput, Mbits/s
1 processor, 128 MB    HTML:  904 @ 500 clients    HTML:  132.88 @ 500 clients
                       NSAPI: 108 @ 50 clients     NSAPI:  16.48 @ 200 clients
                       CGI:    29 @ 20 clients     CGI:     4.45 @ 30 clients
2 processors, 256 MB   HTML:  940 @ 300 clients    HTML:  136.53 @ 500 clients
                       NSAPI: 188 @ 20 clients     NSAPI:  27.73 @ 200 clients
                       CGI:    42 @ 20 clients     CGI:     6.59 @ 50 clients
4 processors, 256 MB   HTML:  Not tested           HTML:  Not tested
                       NSAPI: 309 @ 20 clients     NSAPI:  45.81 @ 20 clients
                       CGI:    51 @ 20 clients     CGI:     7.37 @ 20 clients


This Performance Report was commissioned by Compaq Computer Corporation to demonstrate the performance of their ProLiant 5000 server and to allow the reader to compare the performance of the ProLiant 5000 with that of servers from other vendors. Two 100Base-TX network interfaces were used on the server in order to allow enough throughput to show the server's capabilities. In addition, the WebStone 2.0.1 run rules were extended to include runs with up to 600 client processes. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. WebStone code changes are given in Appendix 1.

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report between October 25 and November 6, 1996, in our laboratory in Palo Alto, California. Mindcraft used the WebStone 2.0.1 test suite to measure performance.

Mindcraft certifies that the results reported herein fairly represent the performance of Compaq's ProLiant 5000 computer running Netscape's Enterprise Server 2.0 under Microsoft's Windows NT Server 4.0 operating system as measured by the WebStone 2.0.1 test suite. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Performance Analysis

This analysis is based on the complete WebStone benchmark results for the ProLiant 5000. We did not test a four-processor configuration of the ProLiant 5000 for HTML performance because two processors essentially saturated the network, and we did not expect significantly better performance from four.

The WebStone 2.0.1 HTML benchmark stresses a system's networking ability in addition to other aspects of server performance. The best way to see whether a server computer running WebStone has unused capacity is to look at its CPU utilization. For the HTML tests, utilization was 95% to 100% at peak performance, so the tests fully utilized the system.

A ProLiant 5000 under Windows NT Server 4.0 is able to drive its network interfaces close to capacity, as shown in Figures 3, 4, and 5 below.

The WebStone 2.0.1 CGI and NSAPI tests are CPU intensive because they compute random characters to put on the Web pages they return. As expected, CPU utilization was 100% for the CGI and NSAPI tests at peak performance. Because the CPUs are busy computing Web pages, the CGI and NSAPI tests show significantly lower connection rates and throughput than the HTML test. The peak performance data make it clear that the NSAPI interface is roughly four to six times faster than the CGI interface.

Throughput Per Client

Throughput Per Client - 2 Processors

For the HTML benchmark, the maximum throughput observed in our testing of a two processor configuration of a ProLiant 5000 was over 136 Mbits/second, which is close to the maximum that can be expected for two 100 Mbits/second half-duplex networks. With Netscape's Enterprise Server, network bandwidth and associated operating system overhead appeared to be the limiting HTML performance factor for the configuration tested, not the Web server itself.

The WebStone load that a server computer can support depends on four primary factors:

  • The bandwidth of the networks available;
  • The ability of the operating system to utilize the available network bandwidth;
  • The ability of the operating system to maximize the CPU time available to the Web server; and
  • The rate at which the Web server can service requests.

There is a strong correlation between the number of connections/second a server can provide and the throughput (in Mbits/second) it exhibits. This can be seen by comparing the connections/second per client in Figures 6, 7, and 8 below with the throughput in Figures 3, 4, and 5.

Connections/second Per Client

Connections/second Per Client

Test Procedures and Test Suite Configuration

Test Procedures

Mindcraft followed the standard WebStone 2.0.1 run rules with the following extensions:

  • In addition to the standard run from 10 to 100 clients, we also ran tests for 200 to 600 clients in order to show the capabilities of these systems.
  • We modified the runbench and webmaster.c files to support clients running on Windows NT systems. Code changes are shown in Appendix 1.

The following basic set of procedures was used for performing these tests:

  • The HTTP server was started automatically at boot time.
  • For the HTML runs, the server log files were deleted after each run from 10 to 100 clients and after each run from 200 to 600 clients.
  • All test runs were done with the server console idle (no user logged in).

Test Suite Configuration

Testing was controlled from a Ross Technology SPARCplug system running Solaris 2.5.1 and a webmaster binary compiled on that system using gcc. The webclient program was compiled on a Windows NT Server 4.0 system using Microsoft Visual C++ version 4.2. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. WebStone code changes are given in Appendix 1.

Test Data

The data files used for the static HTML testing were the default "Silicon Surf" fileset, distributed as filelist.standard with the WebStone 2.0.1 benchmark.

This static HTML fileset was designed to represent a real-world server load, based on analysis of the access logs of Silicon Graphics, Inc.'s external Web site, http://www.sgi.com. Netscape's analysis of logs from other commercial sites indicated that Silicon Surf access patterns were fairly typical for the Web at the time the fileset was designed.

The Silicon Surf model targets the following characteristics:

  • 93% of accessed files are smaller than 30 KB.
  • Average accessed file is roughly 7 KB.

Configuration of the System Tested

Web Server Software Vendor: Netscape Communications Corp.
HTTP Software: Enterprise Server 2.0a
Number of threads: Default
Server Cache: Default
Log Mode: Common
Tuning: The server ran with error logging and access logging enabled. DNS reverse name lookups were disabled to keep DNS server performance from affecting the tests of Web server performance. We ran a single Web server so that it served both network interfaces.
Computer System Vendor: Compaq Computer Corporation
Model: ProLiant 5000
Processor: 200 MHz Intel Pentium Pro
Number of Processors: 4 (one- and two-processor configurations also tested)
Memory: 256 MB EDO RAM (128 MB used for 1-processor tests)
Disk Subsystem: 1 - Compaq 2 GB drive
Disk Controller: 1 - embedded Fast-Wide SCSI-2 Controller
Network Controllers: 1 - Compaq NetFlex III PCI 10/100Base-TX
1 - Compaq Netelligent PCI 10/100Base-TX
Tuning: The system's run-time configuration file (boot.ini) was changed so that only one CPU and 128 MB of RAM would be used, with a uniprocessor kernel, for single-processor testing. Different boot.ini entries selected a multiprocessor kernel and 256 MB of RAM for the two- and four-processor tests.
Operating System: Microsoft Windows NT Server 4.0 with the tcpip.sys file from a December 1996 update installed. Other system configuration parameters used are listed in Appendix 2.
Network Type and Speed: 100Base-TX Ethernet
Number of Nets: 2
Additional Hardware: 1 Compaq Netelligent 100Base-TX hub and 1 Linksys 100Base-TX hub
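The boot.ini tuning described above would take roughly the following form. This is illustrative only: the ARC path, the descriptions, and the kernel image name (ntosup.exe, a hypothetical copy of the uniprocessor kernel) are assumptions, while /NUMPROC, /MAXMEM, and /KERNEL are the standard Windows NT 4.0 switches for limiting processors, capping memory, and selecting a kernel image:

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="NT Server 4.0 (1 CPU, 128 MB)" /NUMPROC=1 /MAXMEM=128 /KERNEL=ntosup.exe
multi(0)disk(0)rdisk(0)partition(1)\WINNT="NT Server 4.0 (SMP, 256 MB)" /MAXMEM=256
```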

Test Lab Configuration

In order to cause the Netscape Enterprise Server to use all available CPU cycles on the server computer system, we used four WebStone client systems. A fifth system served as the webmaster, controlling the WebStone client processes. The test lab network configuration used for this work is shown below:

Mindcraft's Test Lab Configuration
WebStone Client Computer Systems
Vendor: HD Computer Company
Processor: 200MHz Pentium Pro on an Intel Venus motherboard
Number of Processors: 1
Memory: 64 MB EDO RAM
Disk Subsystem: One 2 GB IDE Disk
Disk Controllers: Built-in EIDE
Network Controllers: 3Com 3C905 PCI 10/100Base-TX Interface
Number of Clients: Four (two per net)
Operating System and Compiler
Operating System: Microsoft Windows NT Server 4.0 with the tcpip.sys file from a December 1996 update installed
Compiler: Microsoft Visual C++ Version 4.2

The tests described in this report were performed on isolated LANs that were quiescent except for the test traffic.


WebStone metric definitions:

Clients: Number of processes or threads simultaneously requesting Web services from the server.
Connections per second: Average rate of creation and destruction of client/server connections.
Errors per second: Error rate for this run.
Latency: Average client wait for data to be returned.
Throughput: Average net data transfer rate, in megabits per second.

Appendix 1: Changes to WebStone 2.0.1 Source

The following output from diff illustrates our changes:

Changes to bin/runbench: Comment out rcp and rsh commands directed at the NT hosts, and change the generated webclient configuration file to specify the server name. This uses a facility that's built into the software, but isn't used by the standard version of the WebStone run scripts.

< [ -n "$DEBUG" ] && set +x

> [ -n "$DEBUG" ] && set -x
< for i in $CLIENTS
< do
<       $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
< done

> #NT: for i in $CLIENTS
> #NT: do
> #NT:  $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
> #NT: done
<     TIMESTAMP=`date +"%y%m%d_11/06/96M"`

>     TIMESTAMP=`date +"%y%m%d_%H%M"`
<     for client in $CLIENTS
<     do
<       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
<     done

> #NT:     for client in $CLIENTS
> #NT:     do
> #NT:       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
> #NT:     done
>        CLIENTNET=`expr $i : "(.*)\..*"`
>        SERVERNAME=$[Macro error: j6- dagger]]
.$[Macro error: j6- dagger]]

<        >> $LOGDIR/config

>               $SERVERNAME"  >> $LOGDIR/config
<     for i in $CLIENTS localhost
<     do
<       $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
<       $RCP $LOGDIR/config $i:$TMPDIR/config
<       $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
<     done

> #NT:    for i in $CLIENTS localhost
> #NT:    do
> #NT:      $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
> #NT:      $RCP $LOGDIR/config $i:$TMPDIR/config
> #NT:      $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
> #NT:    done
<     $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
<     for i in $CLIENTS
<     do
<       $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
<     done

> #NT:    $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
> #NT:    for i in $CLIENTS
> #NT:    do
> #NT:      $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
> #NT:    done
<     set -x
<     do
<       $RCP $SERVER:$i $LOGDIR
<     done
<     set +x

> #NT:    do
> #NT:      $RCP $SERVER:$i $LOGDIR
> #NT:    done
<     CMD="$WEBSTONEROOT/bin/webmaster -v -u  $TMPDIR/filelist"
<     CMD=$CMD" -f $TMPDIR/config -t $TIMEPERRUN"
<     [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"

>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD="$WEBSTONEROOT/bin/webmaster -v -W -u $LOGDIR/filelist"
>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD=$CMD" -f $LOGDIR/config -t $TIMEPERRUN"
>     #GREG: [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"
Changes to sysdep.h: Emit an NT-style command line and fix a C portability problem.

< #error  NT gettimeofday() doesn't support USE_TIMEZONE (yet)

> #error  NT gettimeofday() does not support USE_TIMEZONE (yet)
> #else
> #define PROGPATH "D:\\webstone2.0\\webclient.exe" /* "/usr/local/bin/webclient" */
> #endif /* SOLARIS_CLIENT */

Changes to webmaster.c: Emit an NT-style command line, and fix a bug that shows up on Solaris.
> #endif /* SOLARIS_CLIENT */
> #else
>         strcat(commandline, " -u d:\\WebStone2.0\\filelist");
> #endif /* SOLARIS_CLIENT */
>       size_t count;
>       char **sptr, **dptr;
>       struct in_addr *iptr;
<       dest->h_addr_list = src->h_addr_list;
< }
>       /*
>        * ADDED: by Greg Burrell of Mindcraft Inc. 10/22/96
>        * PROBLEM: we can't just do the assignment:
>        *
>        *              dest->h_addr_list = src->h_addr_list
>        *
>        *     because those are just pointers and the memory pointed to
>        *     may get overwritten during the next gethostbyname() call.  
>        *     In fact, that happens on Solaris 2.5
>        *
>        * FIX: Make a copy of the h_addr_list of a hostent structure.
>        *     h_addr_list is really an array of pointers.  Each pointer 
>        *     points to a structure of type in_addr.  So, we allocate space 
>        *     for the structures and then allocate space for the array of 
>        *     pointers.  Then we fill in the structures and set up the array 
>        *     of pointers.
>        */
>       for(count = 0, sptr = src->h_addr_list; *sptr != NULL; sptr++, count++);
>       if ((dest->h_addr_list = malloc((count + 1) * sizeof(char *))) == NULL)
>               return 0;
>       if ((iptr = malloc(count * sizeof(struct in_addr))) == NULL)
>               return 0;
>       for (sptr = src->h_addr_list, dptr = dest->h_addr_list;
>                       *sptr != NULL; sptr++, dptr++, iptr++) {
>               memcpy(iptr, *sptr, sizeof(struct in_addr));
>               *dptr = (char *) iptr;
>       }
>       *dptr = NULL;
>       return 1;
> }

Appendix 2: Operating System Configuration

System Identification:

From the winmsd program:

Microsoft® Windows NT® Version 4.0 (Build 1381)

tcpip.sys from Microsoft's December 1996 update for Windows NT was installed.

Run Time Parameters

From Registry Editor:

    ListenBacklog: 1024
    TcpTimedWaitDelay: 1
    MaxReceives: 500

It was probably not necessary to change the TcpTimedWaitDelay parameter from its default value. We have done some trial WebStone runs with this parameter left at its default, and the results are within the range of run-to-run variability we observed with the parameter set.
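For reference, the TcpTimedWaitDelay setting lives under the NT TCP/IP service parameters and could be applied with a .reg file such as the one below. This is a sketch: ListenBacklog and MaxReceives are service-specific parameters whose registry locations depend on the software that reads them, so only the TCP/IP key is shown:

```
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpTimedWaitDelay"=dword:00000001
```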

Active Services:

Netscape Enterprise Server https-
Netscape Enterprise Server https-
Event Log
License Logging
RPC Service

All other services were disabled.

Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324