Mindcraft Certified Performance Comparison Report

Compaq ProLiant 2500
Microsoft Windows NT Server 4.0
Netscape Enterprise Server 2.0

Contents

Executive Summary
Introduction
Mindcraft's Certification
Performance Analysis
Test Procedures
SUT Configuration
Test Lab
Glossary
Appendix 1: WebStone Changes
Appendix 2: O/S Configuration

Executive Summary

This Certified Performance Report is for the Compaq ProLiant 2500 running the Windows NT 4.0 Server operating system and the Netscape Enterprise Server 2.0.

The WebStone 2.0.1 benchmark was used to test the system. The maximum number of connections per second attained for the system is shown in Figure 1 and the peak throughput in Mbits per second is shown in Figure 2. A table summarizing the peak performance follows Figure 2.

Figure 1: Peak Performance - Connections

Figure 2: Peak Performance - Throughput

Peak Performance Data
Compaq ProLiant 2500
Windows NT Server 4.0
Netscape Enterprise Server 2.0
WebStone 2.0.1
Configuration: 1 processor, 128 MB

           Connections/s            Throughput, Mbits/s
  HTML     602 @ 500 clients        91.18 @ 500 clients
  NSAPI     95 @ 90 clients         14.96 @ 80 clients
  CGI       24 @ 200 clients         3.72 @ 70 clients

Introduction

This Performance Report was commissioned by Compaq Computer Corporation to demonstrate the performance of its ProLiant 2500 server and to allow the reader to compare it with that of servers from other vendors. Two 100Base-TX network interfaces were used on the server to provide enough network throughput to show the server's capabilities. In addition, the WebStone 2.0.1 run rules were extended to include runs with up to 600 client processes. Minor changes were made to the WebStone scripts to let us run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. The WebStone code changes are given in Appendix 1.

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report on November 13 and 14, 1996, in our laboratory in Palo Alto, California. Mindcraft used the WebStone 2.0.1 test suite to measure performance.

Mindcraft certifies that the results reported herein fairly represent the performance of Compaq's ProLiant 2500 computer running Netscape's Enterprise Server 2.0 under Microsoft's Windows NT Server 4.0 operating system as measured by the WebStone 2.0.1 test suite. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Performance Analysis

This analysis is based on the complete WebStone benchmark results for the ProLiant 2500.

The WebStone 2.0.1 HTML benchmark stresses a system's networking ability in addition to other aspects of server performance. The best way to see whether there is unused capacity on a server computer running WebStone is to look at its CPU utilization. CPU utilization was 100% when peak performance was reached, so the tests fully utilized the system.

For the HTML benchmark, the maximum throughput observed in our testing of a ProLiant 2500 was 91 Mbits/second. This means that there was considerable network bandwidth available on the two 100 Mbit/s networks. These results thus represent the capabilities of the server system; network capacity was not a bottleneck for our testing.
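At that rate the server was using roughly 46% of the nominal 200 Mbits/second aggregate capacity of the two networks (91.18 / 200 ≈ 0.46).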

The WebStone 2.0.1 CGI and NSAPI tests are CPU intensive because they compute random characters to put on the Web pages they return. As expected, the CPU utilization was 100% for the CGI and NSAPI tests when the peak performance was reached. Because the CPUs are busy computing Web pages, the CGI and NSAPI tests show significantly lower connection rates and throughput than the HTML test. From looking at the peak performance data, it is clear that the NSAPI interface is three and a half to four times faster than the CGI interface.
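At the peak data points in the table above, the ratio works out to about four to one: 95 / 24 ≈ 4.0 for connections per second and 14.96 / 3.72 ≈ 4.0 for throughput.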

Figure 3: Throughput Per Client

The WebStone load that a server computer can support depends on four primary factors:

  • The bandwidth of the networks available;
  • The ability of the operating system to utilize the available network bandwidth;
  • The ability of the operating system to maximize the CPU time available to the Web server; and
  • The rate at which the Web server can service requests.

There is a strong correlation between the number of connections/second a server can provide and the throughput (in Mbits/second) it exhibits. This can be seen by comparing the connections/second per client in Figure 4, below, with the throughput in Figure 3.
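For example, dividing the peak HTML throughput by the peak HTML connection rate (91.18 Mbits/second / 602 connections/second) gives an average of roughly 19 KB transferred per connection; since the fileset, and therefore the average transfer size, is fixed, connections/second and throughput rise and fall together.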

Figure 4: Connections/second Per Client


Test Procedures and Test Suite Configuration

Test Procedures

Mindcraft followed the standard WebStone 2.0.1 run rules with the following extensions:

  • In addition to the standard run from 10 to 100 clients, we also ran tests for 200 to 600 clients in order to show the capabilities of these systems.
  • We modified the runbench and webmaster.c files to support clients running on Windows NT systems. Code changes are shown in Appendix 1.

The following basic set of procedures was used for performing these tests:

  • The HTTP server was started automatically at boot time.
  • For the HTML runs, the server log files were deleted after each run from 10 to 100 clients and after each run from 200 to 600 clients.
  • All test runs were done with the server console idle (no user logged in).

Test Suite Configuration

Testing was controlled from a Ross Technology SPARCplug system running Solaris 2.5.1 and a webmaster binary compiled on that system using gcc. The webclient program was compiled on a Windows NT Server 4.0 system using Microsoft Visual C++ version 4.2. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. WebStone code changes are given in Appendix 1.

Test Data

The data files used for the static HTML testing were the default "Silicon Surf" fileset, distributed as filelist.standard with the WebStone 2.0.1 benchmark.

This static HTML fileset was designed to represent a real-world server load. It is based on an analysis of the access logs of Silicon Graphics, Inc.'s external Web site, http://www.sgi.com. Netscape's analysis of logs from other commercial sites indicated that the Silicon Surf access patterns were fairly typical for the Web at the time the fileset was designed.

The Silicon Surf model targets the following characteristics:

  • 93% of accessed files are smaller than 30 KB.
  • Average accessed file is roughly 7 KB.

Configuration of the System Tested

Web Server Software Vendor: Netscape Communications Corp.
HTTP Software: Enterprise Server 2.0a
Number of threads: Default
Server Cache: Default
Log Mode: Common
Tuning: The server ran with error logging and access logging enabled. DNS reverse name lookups were disabled to keep DNS server performance from affecting the tests of Web server performance. We ran a single Web server instance serving both network interfaces.
Computer System Vendor: Compaq Computer Corporation
Model: ProLiant 2500
Processor: 200 MHz Intel Pentium Pro
Number of Processors: 1
Memory: 128 MB EDO RAM
Disk Subsystem: 5 - Compaq 2.1 GB drives, in RAID 0 configuration
Disk Controller: 1 - Compaq Smart-2 Array Controller
Network Controllers: 1 - Compaq NetFlex III PCI 10/100Base-TX
1 - integrated Compaq Netelligent PCI 10/100Base-TX
Tuning: System configuration parameters used are listed in Appendix 2.
Operating System: Microsoft Windows NT Server 4.0 with the tcpip.sys file from a December 1996 update installed.
Network Type and Speed: 100Base-TX Ethernet
Number of Nets: 2
Additional Hardware: 2 - Linksys 100BaseTX Hubs

Test Lab Configuration

In order to cause the Netscape Enterprise Server to use all available CPU cycles on the server computer system, we used four WebStone client systems. A fifth system served as the Webmaster, controlling the WebStone driver. The test lab network configuration used for this work is shown below:

Mindcraft's Test Lab Configuration

WebStone Client Computer Systems
Vendor: HD Computer Company
Model: Victoria
Processor: 200 MHz Pentium Pro on an Intel Venus motherboard
Number of Processors: 1
Memory: 64 MB EDO RAM
Disk Subsystem: One 2 GB IDE Disk
Disk Controllers: Built-in EIDE
Network Controllers: 3Com 3C905 PCI 10/100Base-TX Interface
Number of Clients: Four (two per net)

Operating System and Compiler
Operating System: Microsoft Windows NT Server 4.0 with the tcpip.sys file from a December 1996 update installed
Compiler: Microsoft Visual C++ Version 4.2

The tests described in this report were performed on isolated LANs that were quiescent except for the test traffic.


Glossary

Clients
Number of processes or threads simultaneously requesting Web services from the server.
Connections per second
Average rate of creation and destruction of client/server connections.
Errors per second
Error rate for this run.
Latency
Average client wait for data to be returned.
Throughput
Average net data transfer rate, in megabits per second.

Appendix 1: Changes to WebStone 2.0.1 Source

The following output from diff illustrates our changes:

Changes to bin/runbench: Comment out rcp and rsh commands directed at the NT hosts, and change the generated webclient configuration file to specify the server name. This uses a facility that's built into the software, but isn't used by the standard version of the WebStone run scripts.

17c17
< [ -n "$DEBUG" ] && set +x
---
> [ -n "$DEBUG" ] && set -x
87,90c87,90
< for i in $CLIENTS
< do
<       $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
< done
---
> #NT: for i in $CLIENTS
> #NT: do
> #NT:  $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
> #NT: done
101c101
<     TIMESTAMP=`date +"%y%m%d_11/06/96M"`
---
>     TIMESTAMP=`date +"%y%m%d_%H%M"`
107,110c107,110
<     for client in $CLIENTS
<     do
<       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
<     done
---
> #NT:     for client in $CLIENTS
> #NT:     do
> #NT:       $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
> #NT:     done
119a120,121
>        CLIENTNET=`expr $i : "\(.*\)\..*"`
>        SERVERNAME=${CLIENTNET}.${SERVERHOST}
122,123c124,125
<       echo "$i $CLIENTACCOUNT $CLIENTPASSWORD `expr $CLIENTSPERHOST + 1`" 
<        >> $LOGDIR/config
---
>       echo "$i $CLIENTACCOUNT $CLIENTPASSWORD `expr $CLIENTSPERHOST + 1` 
>               $SERVERNAME"  >> $LOGDIR/config
126c128
<       echo "$i $CLIENTACCOUNT $CLIENTPASSWORD $CLIENTSPERHOST" 
---
>       echo "$i $CLIENTACCOUNT $CLIENTPASSWORD $CLIENTSPERHOST $SERVERNAME" 
135,140c137,142
<     for i in $CLIENTS localhost
<     do
<       $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
<       $RCP $LOGDIR/config $i:$TMPDIR/config
<       $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
<     done
---
> #NT:    for i in $CLIENTS localhost
> #NT:    do
> #NT:      $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
> #NT:      $RCP $LOGDIR/config $i:$TMPDIR/config
> #NT:      $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
> #NT:    done
145,149c147,151
<     $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
<     for i in $CLIENTS
<     do
<       $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
<     done
---
> #NT:    $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
> #NT:    for i in $CLIENTS
> #NT:    do
> #NT:      $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
> #NT:    done
151,156c153,156
<     set -x
<     for i in $OSTUNINGFILES $WEBSERVERTUNINGFILES
<     do
<       $RCP $SERVER:$i $LOGDIR
<     done
<     set +x
---
> #NT:    for i in $OSTUNINGFILES $WEBSERVERTUNINGFILES
> #NT:    do
> #NT:      $RCP $SERVER:$i $LOGDIR
> #NT:    done
162,164c162,166
<     CMD="$WEBSTONEROOT/bin/webmaster -v -u  $TMPDIR/filelist"
<     CMD=$CMD" -f $TMPDIR/config -t $TIMEPERRUN"
<     [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"
---
>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD="$WEBSTONEROOT/bin/webmaster -v -W -u $LOGDIR/filelist"
>     #GREG: bug fix: changed $TMPDIR below to $LOGDIR
>     CMD=$CMD" -f $LOGDIR/config -t $TIMEPERRUN"
>     #GREG: [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"
    
Changes to sysdep.h: Emit an NT-style command line and fix a C portability problem.

48c48
< #error  NT gettimeofday() doesn't support USE_TIMEZONE (yet)
---
> #error  NT gettimeofday() does not support USE_TIMEZONE (yet)
87a88
> #ifdef SOLARIS_CLIENT
88a90,92
> #else
> #define PROGPATH "D:\\webstone2.0\\webclient.exe" /* "/usr/local/bin/webclient" */
> #endif /* SOLARIS_CLIENT */

Changes to webmaster.c: Emit an NT-style command line, and fix a bug that shows up on Solaris.

539a540
> #ifdef SOLARIS_CLIENT
550a552
> #endif /* SOLARIS_CLIENT */
586a589
> #ifdef SOLARIS_CLIENT
588a592,594
> #else
>         strcat(commandline, " -u d:\\WebStone2.0\\filelist");
> #endif /* SOLARIS_CLIENT */
1395a1402
> 
1398a1406,1408
>       size_t count;
>       char **sptr, **dptr;
>       struct in_addr *iptr;
1406,1407d1415
<       dest->h_addr_list = src->h_addr_list;
< }
1408a1417,1446
>       /*
>        * ADDED: by Greg Burrell of Mindcraft Inc. 10/22/96
>        * PROBLEM: we can't just do the assignment:
>        *
>        *              dest->h_addr_list = src->h_addr_list
>        *
>        *     because those are just pointers and the memory pointed to
>        *     may get overwritten during the next gethostbyname() call.  
>        *     In fact, that happens on Solaris 2.5
>        *
>        * FIX: Make a copy of the h_addr_list of a hostent structure.
>        *     h_addr_list is really an array of pointers.  Each pointer 
>        *     points to a structure of type in_addr.  So, we allocate space 
>        *     for the structures and then allocate space for the array of 
>        *     pointers.  Then we fill in the structures and set up the array 
>        *     of pointers.
>        */
>       for(count = 0, sptr = src->h_addr_list; *sptr != NULL; sptr++, count++);
>       if ((dest->h_addr_list = malloc((count + 1) * sizeof(char *))) == NULL)
>               return 0;
>       if ((iptr = malloc(count * sizeof(struct in_addr))) == NULL)
>               return 0;
>       for (sptr = src->h_addr_list, dptr = dest->h_addr_list; 
>                       *sptr != NULL; sptr++, dptr++, iptr++) {
>               memcpy(iptr, *sptr, sizeof(struct in_addr));
>               *dptr = (char *) iptr;
>       }
>       *dptr = NULL;
>       return 1;
> }
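
For reference, the address-list copy above is shown below as a minimal, self-contained C sketch (the function name is ours; in WebStone the logic lives inside webmaster.c's hostent-copying code, compiled on the Solaris webmaster system):

    #include <stdlib.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <netdb.h>

    /*
     * Deep-copy the h_addr_list of a hostent.  gethostbyname() returns a
     * pointer to static storage, so the address list must be copied before
     * the next lookup overwrites it (this overwriting was observed on
     * Solaris 2.5).  Returns 1 on success, 0 on failure.
     */
    static int copy_addr_list(struct hostent *dest, struct hostent *src)
    {
        size_t count;
        char **sptr, **dptr;
        struct in_addr *iptr;

        /* Count the addresses in the NULL-terminated list. */
        for (count = 0, sptr = src->h_addr_list; *sptr != NULL; sptr++)
            count++;

        /* One block for the pointer array, one for the address structures. */
        dest->h_addr_list = malloc((count + 1) * sizeof(char *));
        iptr = malloc(count * sizeof(struct in_addr));
        if (dest->h_addr_list == NULL || iptr == NULL)
            return 0;

        /* Fill in the structures and set up the array of pointers. */
        for (sptr = src->h_addr_list, dptr = dest->h_addr_list;
                *sptr != NULL; sptr++, dptr++, iptr++) {
            memcpy(iptr, *sptr, sizeof(struct in_addr));
            *dptr = (char *) iptr;
        }
        *dptr = NULL;   /* keep the copy NULL-terminated as well */
        return 1;
    }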

Appendix 2: Operating System Configuration

System Identification:

From the winmsd program:

Microsoft® Windows NT® Version 4.0 (Build 1381)

The tcpip.sys file from Microsoft's December 1996 update for Windows NT was installed.

Run Time Parameters

From Registry Editor:

HKEY_LOCAL_MACHINE
    SYSTEM
        CurrentControlSet
            Services
                InetInfo
                    Parameters
                        ListenBacklog: 1024
HKEY_LOCAL_MACHINE
    SYSTEM
        CurrentControlSet
            Services
                Tcpip
                    Parameters
                        TcpTimedWaitDelay: 1
HKEY_LOCAL_MACHINE
    SYSTEM
        CurrentControlSet
            Services
                cpqnf30
                    Parameters
                        MaxReceives: 500

It was probably not necessary to change the TcpTimedWaitDelay parameter from its default value. We have done some trial WebStone runs with this parameter left at its default, and the results are within the range of run-to-run variability we observed with the parameter set.
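
For illustration only (not part of the tested configuration, which was set with the Registry Editor), the same three DWORD values could be set programmatically through the Win32 registry API, as in the minimal C sketch below. It assumes an ANSI (non-Unicode) build, must be run with Administrator rights, links against advapi32.lib, and the system generally needs to be rebooted before TCP/IP parameter changes take effect.

    #include <windows.h>
    #include <stdio.h>

    /* Set a REG_DWORD value under HKEY_LOCAL_MACHINE, creating the key if needed. */
    static int set_dword(const char *subkey, const char *name, DWORD value)
    {
        HKEY key;
        DWORD disposition;
        LONG rc;

        rc = RegCreateKeyEx(HKEY_LOCAL_MACHINE, subkey, 0, NULL,
                            REG_OPTION_NON_VOLATILE, KEY_SET_VALUE, NULL,
                            &key, &disposition);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegCreateKeyEx(%s) failed: %ld\n", subkey, rc);
            return -1;
        }
        rc = RegSetValueEx(key, name, 0, REG_DWORD,
                           (const BYTE *) &value, sizeof(value));
        RegCloseKey(key);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegSetValueEx(%s) failed: %ld\n", name, rc);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* The values used for the tests reported above. */
        int err = 0;
        err |= set_dword("SYSTEM\\CurrentControlSet\\Services\\InetInfo\\Parameters",
                         "ListenBacklog", 1024);
        err |= set_dword("SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters",
                         "TcpTimedWaitDelay", 1);
        err |= set_dword("SYSTEM\\CurrentControlSet\\Services\\cpqnf30\\Parameters",
                         "MaxReceives", 500);
        return err ? 1 : 0;
    }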

Active Services:

Netscape Enterprise Server https-192.168.1.9
Netscape Enterprise Server https-192.168.2.9
Alerter
Event Log
License Logging
RPC Service
Server
Spooler

All other services were disabled.


Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324