Mindcraft Certified Performance Comparison Report

Sun Ultra Enterprise 2, Model 2170
SunSoft Solaris 2.5.1
Netscape Enterprise Server 2.0


Executive Summary
Mindcraft's Certification
Performance Analysis
Test Procedures
SUT Configuration
Test Lab
Appendix 1: WebStone Changes
Appendix 2: Operating System Configuration

Executive Summary

This Certified Performance Report is for the Sun Ultra Enterprise 2, Model 2170 running the Solaris 2.5.1 operating system and the Netscape Enterprise Server 2.0.

The WebStone 2.0.1 benchmark was used to test the system. The maximum number of connections per second attained for the system is shown in Figure 1 and the peak throughput in Mbits per second is shown in Figure 2. A table summarizing the peak performance follows Figure 2.

Peak Performance - Connections
Peak Performance - Throughput
Peak Performance Data
Sun Ultra Enterprise 2, Model 2170
Solaris 2.5.1
Netscape Enterprise Server 2.0
WebStone 2.0.1
Configuration          Connections/s              Throughput, Mbits/s
1 processor, 128 MB    HTML:  181 @  40 clients   HTML:  27.35 @  40 clients
                       NSAPI:  64 @  40 clients   NSAPI:  9.63 @  70 clients
                       CGI:    18 @  20 clients   CGI:    2.65 @  50 clients
2 processors, 256 MB   HTML:  296 @ 100 clients   HTML:  44.61 @ 100 clients
                       NSAPI: 122 @  60 clients   NSAPI: 18.63 @  90 clients
                       CGI:    30 @ 300 clients   CGI:    4.82 @  70 clients


This Performance Report was commissioned by Compaq Computer Corporation to allow the reader to compare the performance of its ProLiant 5000 with that of Sun's Ultra Enterprise 2 Model 2170. Two 100Base-TX network interfaces were used on the server to provide enough throughput to show the server's capabilities. In addition, the WebStone 2.0.1 run rules were extended to include runs with up to 600 client processes. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. WebStone code changes are given in Appendix 1.

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report between October 25 and November 6, 1996, in our laboratory in Palo Alto, California. Mindcraft used the WebStone 2.0.1 test suite to measure performance.

Mindcraft certifies that the results reported herein fairly represent the performance of Sun's Ultra Enterprise 2 Model 2170 computer running Netscape's Enterprise Server 2.0 under SunSoft's Solaris 2.5.1 operating system as measured by the WebStone 2.0.1 test suite. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Performance Analysis

This analysis is based on the complete WebStone benchmark results for the Ultra Enterprise 2.

The WebStone 2.0.1 HTML benchmark stresses a system's networking ability in addition to other aspects of server performance. The best way to see whether there is unused capacity on a server computer running WebStone is to look at the CPU utilization. For the HTML tests, CPU utilization at peak performance was 100% with one CPU and 55% to 60% with two CPUs. Thus the two-CPU system was not fully utilized for Web serving; however, it could not be driven to higher CPU utilization because of operating system overhead.

An Ultra Enterprise 2 under Solaris 2.5.1 is not able to drive its network interfaces close to capacity with the WebStone benchmark, as shown in Figures 3 and 4 below.

Throughput Per Client

Throughput Per Client - 2 Processors

For the HTML benchmark, the maximum throughput we observed for a two-processor configuration of an Ultra Enterprise 2 was 44.61 Mbits/second, which is significantly less than what can be expected from two 100 Mbits/second half-duplex networks. With Netscape's Enterprise Server, network bandwidth and the associated operating system overhead appeared to be the factors limiting HTML performance for the configurations tested, not the Web server itself. If the Enterprise Server were the limiting factor, CPU utilization would have been significantly higher.

The WebStone 2.0.1 CGI and NSAPI tests are CPU intensive because they compute random characters to put on the Web pages they return. As expected, the CPU utilization was 100% for the CGI and NSAPI tests when the peak performance was reached. Because the CPUs are busy computing Web pages, the CGI and NSAPI tests show significantly lower connection rates and throughput than the HTML test. From looking at the peak performance data, it is clear that the NSAPI interface is three and a half to four times faster than the CGI interface.
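The NSAPI-to-CGI ratio can be checked directly against the peak connection rates in the Peak Performance Data table; a quick awk sketch (the numbers are this report's own results):

```shell
# NSAPI vs. CGI peak connections/s from the Peak Performance Data table.
awk 'BEGIN {
    printf "1 CPU:  NSAPI/CGI = %.1f\n", 64 / 18;    # one-processor peaks
    printf "2 CPUs: NSAPI/CGI = %.1f\n", 122 / 30;   # two-processor peaks
}'
# prints:
# 1 CPU:  NSAPI/CGI = 3.6
# 2 CPUs: NSAPI/CGI = 4.1
```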

The WebStone load that a server computer can support depends on four primary factors:

  • The bandwidth of the networks available
  • The ability of the operating system to utilize the available network bandwidth
  • The ability of the operating system to maximize the CPU time available to the Web server
  • The rate at which the Web server can service requests

There is a strong correlation between the number of connections/second a server can provide and the throughput (in Mbits/second) it exhibits. This can be seen by comparing the connections/second per client in Figures 5 and 6 below with the throughput in Figures 3 and 4.
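The correlation follows from simple arithmetic: throughput is the connection rate multiplied by the average number of bytes moved per connection. As a sketch (assuming the conventional 1 Mbit = 10^6 bits), the two-processor HTML peak implies an average transfer of roughly 18 KB per connection, consistent with a fileset whose byte volume is dominated by files larger than its 7 KB average file size:

```shell
# Implied average transfer per connection at the 2-processor HTML peak:
# 44.61 Mbits/s at 296 connections/s (values from the peak data above).
awk 'BEGIN {
    bytes_per_sec = 44.61 * 1e6 / 8            # Mbits/s -> bytes/s
    per_conn_kb   = bytes_per_sec / 296 / 1024 # KB moved per connection
    printf "%.1f KB per connection\n", per_conn_kb
}'
# prints: 18.4 KB per connection
```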

Connections/second Per Client

Connections/second Per Client - 2 Processors

Test Procedures and Test Suite Configuration

Test Procedures

Mindcraft followed the standard WebStone 2.0.1 run rules with the following extensions:

  • In addition to the standard run from 10 to 100 clients, we also ran tests for 200 to 600 clients in order to show the capabilities of these systems.
  • We modified the runbench and webmaster.c files to support clients running on Windows NT systems. Code changes are shown in Appendix 1.

The following basic set of procedures was used for performing these tests:

  • The server was started automatically at boot time.
  • For the HTML runs, the server log files were deleted after each run from 10 to 100 clients and after each run from 200 to 600 clients.
  • All test runs were done with the server console idle (no user logged in).

Test Suite Configuration

Testing was controlled from a Ross Technology SPARCplug system running Solaris 2.5.1 and a webmaster binary compiled on that system using gcc. The webclient program was compiled on a Windows NT 4.0 Server system using Microsoft Visual C++ version 4.2. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. WebStone code changes are given in Appendix 1.

Test Data

The data files used for the static HTML testing were the default "Silicon Surf" fileset, distributed as filelist.standard with the WebStone 2.0.1 benchmark.

This static HTML fileset was designed to represent a real-world server load. It was designed based on analysis of the access logs of Silicon Graphics, Inc.'s external Web site, http://www.sgi.com. Netscape's analysis of logs from other commercial sites indicated that the Silicon Surf access patterns were fairly typical of the Web at the time the fileset was designed.

The Silicon Surf model targets the following characteristics:

  • 93% of accessed files are smaller than 30 KB.
  • Average accessed file is roughly 7 KB.

Configuration of the System Tested

Web Server Software Vendor: Netscape Communications Corp.
HTTP Software: Enterprise Server 2.0a
Number of threads: Default
Server Cache: Default
Log Mode: Common
Tuning: The server ran with error logging and access logging enabled. DNS reverse name lookups were disabled to keep DNS server performance from affecting the tests of Web server performance. We ran a separate Web server on each network interface.
Computer System Vendor: Sun Microsystems Computer Corporation
Model: Ultra Enterprise 2 Model 2170
Processor: 167 MHz Ultra SPARC
Number of Processors: 2 (one- and two-processor configurations tested)
Memory: 256 MB RAM (128 MB used for 1-processor tests)
Disk Subsystem: 1 - Sun 2.1 GB drive
Disk Controller: 1 - embedded Fast-Wide SCSI-2 Controller
Network Controllers: 2 Sun 10/100Base-TX FastEthernet
Tuning: See Appendix 2.
Operating System: SunSoft Solaris 2.5.1. Other system configuration parameters used are listed in Appendix 2. Mindcraft tried to obtain Sun's Solaris Internet Server Supplement (SISS), which was announced while our testing was under way, but Sun declined to provide the SISS to Mindcraft in time to be used for these tests.
Network Type and Speed: 100Base-TX Ethernet
Number of Nets: 2
Additional Hardware: 1 Compaq Netelligent 100Base-TX hub and 1 Linksys 100Base-TX hub

Test Lab Configuration

In order to cause the Netscape Enterprise Server to use all available CPU cycles on the server computer system, we used four WebStone client systems. A fifth system served as the Webmaster, controlling the WebStone driver. The configuration of the test lab is shown below:

Mindcraft's Test Lab Configuration

WebStone Client
Computer Systems
Vendor: HD Computer Company
Processor: 200MHz Pentium Pro on an Intel Venus motherboard
Number of Processors: 1
Memory: 64 MB EDO RAM
Disk Subsystem: One 2 GB IDE Disk
Disk Controllers: Built-in EIDE
Network Controllers: 3Com 3C905 PCI 10/100Base-TX Interface
Number of Clients: Four (two per net)
Operating System
and Compiler
Operating System: Microsoft Windows NT 4.0 Server with the tcpip.sys file from prospective Service Pack 2 installed
Compiler: Microsoft Visual C++ Version 4.2

The tests described in this report were performed on isolated LANs that were quiescent except for the test traffic.


Clients
Number of processes or threads simultaneously requesting Web services from the server.
Connections per second
Average rate of creation and destruction of client/server connections.
Errors per second
Error rate for this run.
Latency
Average client wait for data to be returned.
Throughput
Average net data transfer rate, in megabits per second.

Appendix 1: Changes to WebStone 2.0.1 Source

The following output from diff illustrates our changes:

Changes to bin/runbench: Comment out rcp and rsh commands directed at the NT hosts, and change the generated webclient configuration file to specify the server name. This uses a facility that's built into the software, but isn't used by the standard version of the WebStone run scripts.

< [ -n "$DEBUG" ] && set +x

> [ -n "$DEBUG" ] && set -x
< for i in $CLIENTS
< do
<       $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
< done

> #NT: for i in $CLIENTS
> #NT: do
> #NT:  $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
> #NT: done
<     TIMESTAMP=`date +"%y%m%d_11/06/96M"`

>     TIMESTAMP=`date +"%y%m%d_%H%M"`
<     for client in $CLIENTS
<     do
<       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
<     done

> #NT:     for client in $CLIENTS
> #NT:     do
> #NT:       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
> #NT:     done
>        CLIENTNET=`expr $i : "\(.*\)\..*"`
>        SERVERNAME=…

<        >> $LOGDIR/config

>               $SERVERNAME"  >> $LOGDIR/config
<     for i in $CLIENTS localhost
<     do
<       $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
<       $RCP $LOGDIR/config $i:$TMPDIR/config
<       $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
<     done

> #NT:    for i in $CLIENTS localhost
> #NT:    do
> #NT:      $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
> #NT:      $RCP $LOGDIR/config $i:$TMPDIR/config
> #NT:      $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
> #NT:    done
<     $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
<     for i in $CLIENTS
<     do
<       $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
<     done

> #NT:    $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
> #NT:    for i in $CLIENTS
> #NT:    do
> #NT:      $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
> #NT:    done
<     set -x
<     do
<       $RCP $SERVER:$i $LOGDIR
<     done
<     set +x

> #NT:    do
> #NT:      $RCP $SERVER:$i $LOGDIR
> #NT:    done
<     CMD="$WEBSTONEROOT/bin/webmaster -v -u  $TMPDIR/filelist"
<     CMD=$CMD" -f $TMPDIR/config -t $TIMEPERRUN"
<     [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"

>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD="$WEBSTONEROOT/bin/webmaster -v -W -u $LOGDIR/filelist"
>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD=$CMD" -f $LOGDIR/config -t $TIMEPERRUN"
>     #GREG: [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"

Changes to sysdep.h: Emit an NT-style command line and fix a C portability problem.

< #error  NT gettimeofday() doesn't support USE_TIMEZONE (yet)

> #error  NT gettimeofday() does not support USE_TIMEZONE (yet)
> #else
> #define PROGPATH                      "D:\\webstone2.0\\webclient.exe" /* "/usr/local/bin/webclient" */
> #endif /* SOLARIS_CLIENT */

Changes to webmaster.c: Emit an NT-style command line, and fix a bug that shows up on Solaris.

> #endif /* SOLARIS_CLIENT */
> #else
>         strcat(commandline, " -u d:\\WebStone2.0\\filelist");
> #endif /* SOLARIS_CLIENT */
>       size_t count;
>       char **sptr, **dptr;
>       struct in_addr *iptr;
<       dest->h_addr_list = src->h_addr_list;
< }
>       /*
>        * ADDED: by Greg Burrell of Mindcraft Inc. 10/22/96
>        * PROBLEM: we can't just do the assignment:
>        *
>        *              dest->h_addr_list = src->h_addr_list
>        *
>        *     because those are just pointers and the memory pointed to
>        *     may get overwritten during the next gethostbyname() call.  
>        *     In fact, that happens on Solaris 2.5
>        *
>        * FIX: Make a copy of the h_addr_list of a hostent structure.
>        *     h_addr_list is really an array of pointers.  Each pointer 
>        *     points to a structure of type in_addr.  So, we allocate space 
>        *     for the structures and then allocate space for the array of 
>        *     pointers.  Then we fill in the structures and set up the array 
>        *     of pointers.
>        */
>       for(count = 0, sptr = src->h_addr_list; *sptr != NULL; sptr++, count++);
>       if ((dest->h_addr_list = malloc((count + 1) * sizeof(char *))) == NULL)
>               return 0;
>       if ((iptr = malloc(count * sizeof(struct in_addr))) == NULL)
>               return 0;
>       for (sptr = src->h_addr_list, dptr = dest->h_addr_list;
>                       *sptr != NULL; sptr++, dptr++, iptr++) {
>               *iptr = *(struct in_addr *)*sptr;
>               *dptr = (char *)iptr;
>       }
>       *dptr = NULL;
>       return 1;
> }

Appendix 2: Operating System Configuration

System Identification:

The output from uname -a is:

SunOS ultra 5.5.1 Generic sun4u sparc SUNW,Ultra-2

Run Time Parameters

The TCP/IP listen backlog was increased to 1024 via the command:

/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max 1024

The Netscape Enterprise Server 2.0 Web servers were told to use this new value in their start-up configuration files (magnus.conf):

ListenQ 1024
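Because ndd settings do not survive a reboot on Solaris, a setting like this is typically reapplied from a boot-time rc script. A minimal sketch follows; the script name and rc directory are assumptions for illustration, not part of the tested configuration:

```shell
#!/bin/sh
# Hypothetical /etc/rc2.d/S99tcptune: reapply the TCP listen-backlog
# tuning used for these tests after each reboot. The ndd command is
# the same one documented above.
/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max 1024
```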

Active Services:

The following daemons and processes were active at the time the tests were run (as listed by ps -ae):


Copyright 1997-98. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324