Sun Ultra Enterprise 2, Model 2170
Executive Summary

This Certified Performance Report is for the Sun Ultra Enterprise 2, Model 2170 running the Solaris 2.5.1 operating system and the Netscape Enterprise Server 2.0. The WebStone 2.0.1 benchmark was used to test the system. The maximum number of connections per second attained for the system is shown in Figure 1 and the peak throughput in Mbits per second is shown in Figure 2. A table summarizing the peak performance follows Figure 2.

[Figure 1: Maximum connections per second]

[Figure 2: Peak throughput in Mbits per second]
Introduction

This Performance Report was commissioned by Compaq Computer Corporation to allow the reader to compare the performance of its ProLiant 5000 with that of Sun's Ultra Enterprise 2 Model 2170. Two 100Base-TX network interfaces were used on the server to provide enough throughput to show the server's capabilities. In addition, the WebStone 2.0.1 run rules were extended to include runs with up to 600 client processes. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. The WebStone code changes are given in Appendix 1.

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report between October 25 and November 6, 1996, in our laboratory in Palo Alto, California. Mindcraft used the WebStone 2.0.1 test suite to measure performance. Mindcraft certifies that the results reported herein fairly represent the performance of Sun's Ultra Enterprise 2 Model 2170 computer running Netscape's Enterprise Server 2.0 under SunSoft's Solaris 2.5.1 operating system as measured by the WebStone 2.0.1 test suite. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Performance Analysis

This analysis is based on the complete WebStone benchmark results for the Ultra Enterprise 2. The WebStone 2.0.1 HTML benchmark stresses a system's networking ability in addition to other aspects of server performance. The best way to see whether there is unused capacity on a server computer running WebStone is to look at the CPU utilization. For the HTML tests, CPU utilization at peak performance was 100% with one CPU and 55% to 60% with two CPUs. So the two-CPU system was not fully utilized for Web serving. However, it could not be driven to utilize the CPUs more fully because of operating system overhead. An Ultra Enterprise 2 under Solaris 2.5.1 is not able to drive its network interfaces close to capacity with the WebStone benchmark, as shown in Figures 3 and 4 below.
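To put "close to capacity" in perspective, the back-of-the-envelope sketch below estimates the best-case TCP payload rate of two 100Base-TX links. The per-frame overhead figures are standard Ethernet/IP/TCP constants, not measurements from this test, and the calculation ignores reverse-direction ACK traffic, which on half-duplex links contends with data and lowers the ceiling further.

#include <stdio.h>

/*
 * Back-of-the-envelope ceiling for HTTP payload carried by two
 * 100Base-TX links.  Assumes full-size TCP segments: 1460 payload
 * bytes ride in 1538 bytes on the wire (preamble 8 + Ethernet header
 * 14 + IP 20 + TCP 20 + data 1460 + FCS 4 + inter-frame gap 12).
 * Reverse-direction ACK traffic is ignored.
 */
int main(void)
{
    const double line_rate_mbps = 100.0;   /* per 100Base-TX link       */
    const int    links          = 2;
    const double wire_bytes     = 1538.0;  /* bytes on the wire / frame */
    const double payload_bytes  = 1460.0;  /* TCP payload / frame       */

    double efficiency = payload_bytes / wire_bytes;          /* ~0.949 */
    double ceiling    = links * line_rate_mbps * efficiency;

    printf("payload efficiency per link: %.1f%%\n", efficiency * 100.0);
    printf("ideal payload rate for %d links: %.1f Mbits/second\n",
           links, ceiling);
    return 0;
}

Even under these ideal assumptions the two links could carry close to 190 Mbits/second of payload; the measured HTML peak reported below is well under a quarter of that.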
For the HTML benchmark, the maximum throughput observed in our testing of a two-processor configuration of an Ultra Enterprise 2 was over 44.61 Mbits/second, which is significantly less than what can be expected from two 100 Mbits/second half-duplex networks. With Netscape's Enterprise Server, network bandwidth and the associated operating system overhead appeared to be the limiting factor for HTML performance in the configurations tested, not the Web server software itself. If the Enterprise Server were the limiting factor, CPU utilization would have been significantly higher.

The WebStone 2.0.1 CGI and NSAPI tests are CPU intensive because they compute random characters to put on the Web pages they return. As expected, CPU utilization was 100% for the CGI and NSAPI tests when peak performance was reached. Because the CPUs are busy computing Web pages, the CGI and NSAPI tests show significantly lower connection rates and throughput than the HTML test. The peak performance data make it clear that the NSAPI interface is three and a half to four times faster than the CGI interface.

The WebStone load that a server computer can support depends on four primary factors:
There is a strong correlation between the number of connections/second a server can provide and the throughput (in Mbits/second) it exhibits. This can be seen by comparing the connections/second per client in Figures 5 and 6 below with the throughput in Figures 3 and 4.
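That correlation is what simple arithmetic predicts: for a fixed fileset, throughput is roughly the connection rate multiplied by the average number of bytes transferred per connection. The sketch below illustrates the relationship; the 13 KB average transfer size is a hypothetical figure chosen for illustration, not the measured Silicon Surf average.

#include <stdio.h>

/*
 * Why connections/second and Mbits/second track each other for a
 * fixed fileset: throughput is roughly the connection rate times the
 * average number of bytes moved per connection.  The 13 KB average
 * transfer size below is a hypothetical figure for illustration only.
 */
int main(void)
{
    const double avg_bytes_per_conn = 13.0 * 1024.0;
    double conns_per_sec;

    for (conns_per_sec = 100.0; conns_per_sec <= 500.0; conns_per_sec += 100.0) {
        double mbits_per_sec = conns_per_sec * avg_bytes_per_conn * 8.0 / 1e6;
        printf("%5.0f conns/sec -> %5.1f Mbits/sec\n",
               conns_per_sec, mbits_per_sec);
    }
    return 0;
}

Because the two measures scale together for a given workload, either one can be used to compare the relative capacity of the configurations tested.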
Test Procedures and Test Suite Configuration

Test Procedures

Mindcraft followed the standard WebStone 2.0.1 run rules with the following extensions:
The following basic set of procedures was used for performing these tests:
Test Suite Configuration

Testing was controlled from a Ross Technology SPARCplug system running Solaris 2.5.1 and a webmaster binary compiled on that system using gcc. The webclient program was compiled on a Windows NT 4.0 Server system using Microsoft Visual C++ version 4.2. Minor changes were made to the WebStone scripts to allow us to run client systems against two different network interfaces and to accommodate the different command syntax of the webclient program on the Windows NT client systems. The WebStone code changes are given in Appendix 1.

Test Data

The data files used for the static HTML testing were the default "Silicon Surf" fileset, distributed as filelist.standard with the WebStone 2.0.1 benchmark. This static HTML fileset was designed to represent a real-world server load, based on an analysis of the access logs of Silicon Graphics, Inc.'s external Web site, http://www.sgi.com. Netscape's analysis of logs from other commercial sites indicated that the Silicon Surf access patterns were fairly typical for the Web at the time the fileset was designed. The Silicon Surf model targets the following characteristics:
Configuration of the System Tested
Test Lab Configuration

In order to cause the Netscape Enterprise Server to use all available CPU cycles on the server computer system, we used four WebStone client systems. A fifth system served as the Webmaster, controlling the WebStone driver. The configuration of the test lab is shown below:
The tests described in this report were performed on isolated LANs that were quiescent except for the test traffic.

Glossary
Appendix 1: Changes to WebStone 2.0.1 Source

The following output from diff illustrates our changes.

Changes to bin/runbench: Comment out rcp and rsh commands directed at the NT hosts, and change the generated webclient configuration file to specify the server name. This uses a facility that is built into the software but is not used by the standard version of the WebStone run scripts.

17c17
< [ -n "$DEBUG" ] && set +x
---
> [ -n "$DEBUG" ] && set -x
87,90c87,90
< for i in $CLIENTS
< do
< $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
< done
---
> #NT: for i in $CLIENTS
> #NT: do
> #NT: $RCP $WEBSTONEROOT/bin/webclient $i:$TMPDIR #/usr/local/bin
> #NT: done
101c101
< TIMESTAMP=`date +"%y%m%d_11/06/96M"`
---
> TIMESTAMP=`date +"%y%m%d_%H%M"`
107,110c107,110
< for client in $CLIENTS
< do
< $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
< done
---
> #NT: for client in $CLIENTS
> #NT: do
> #NT: $RSH $client "rm /tmp/webstone-debug*" > /dev/null 2>&1
> #NT: done
119a120,121
> CLIENTNET=`expr $i : "(.*)\..*"`
> SERVERNAME=${...}.${...}
122,123c124,125
< echo "$i $CLIENTACCOUNT $CLIENTPASSWORD `expr $CLIENTSPERHOST + 1`"
< >> $LOGDIR/config
---
> echo "$i $CLIENTACCOUNT $CLIENTPASSWORD `expr $CLIENTSPERHOST + 1`
> $SERVERNAME" >> $LOGDIR/config
126c128
< echo "$i $CLIENTACCOUNT $CLIENTPASSWORD $CLIENTSPERHOST"
---
> echo "$i $CLIENTACCOUNT $CLIENTPASSWORD $CLIENTSPERHOST $SERVERNAME"
135,140c137,142
< for i in $CLIENTS localhost
< do
< $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
< $RCP $LOGDIR/config $i:$TMPDIR/config
< $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
< done
---
> #NT: for i in $CLIENTS localhost
> #NT: do
> #NT: $RSH $i "rm -f $TMPDIR/config $TMPDIR/`basename $FILELIST`"
> #NT: $RCP $LOGDIR/config $i:$TMPDIR/config
> #NT: $RCP $LOGDIR/`basename $FILELIST` $i:$TMPDIR/filelist
> #NT: done
145,149c147,151
< $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
< for i in $CLIENTS
< do
< $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
< done
---
> #NT: $RSH $SERVER "$SERVERINFO" > $LOGDIR/hardware.$SERVER 2>&1
> #NT: for i in $CLIENTS
> #NT: do
> #NT: $RSH $i "$CLIENTINFO" > $LOGDIR/hardware.$i 2>&1
> #NT: done
151,156c153,156
< set -x
< for i in $OSTUNINGFILES $WEBSERVERTUNINGFILES
< do
< $RCP $SERVER:$i $LOGDIR
< done
< set +x
---
> #NT: for i in $OSTUNINGFILES $WEBSERVERTUNINGFILES
> #NT: do
> #NT: $RCP $SERVER:$i $LOGDIR
> #NT: done
162,164c162,166
< CMD="$WEBSTONEROOT/bin/webmaster -v -u $TMPDIR/filelist"
< CMD=$CMD" -f $TMPDIR/config -t $TIMEPERRUN"
< [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"
---
> #GREG: bug fix: changed $TMPDIR below to $LOGDIR
> CMD="$WEBSTONEROOT/bin/webmaster -v -W -u $LOGDIR/filelist"
> #GREG: bug fix: changed $TMPDIR below to $LOGDIR
> CMD=$CMD" -f $LOGDIR/config -t $TIMEPERRUN"
> #GREG: [ -n "$SERVER" ] && CMD=$CMD" -w $SERVER"

Changes to sysdep.h: Emit an NT-style command line and fix a C portability problem.

48c48
< #error NT gettimeofday() doesn't support USE_TIMEZONE (yet)
---
> #error NT gettimeofday() does not support USE_TIMEZONE (yet)
87a88
> #ifdef SOLARIS_CLIENT
88a90,92
> #else
> #define PROGPATH "D:\webstone2.0\webclient.exe" /* "/usr/local/bin/webclient" */
> #endif /* SOLARIS_CLIENT */

Changes to webmaster.c: Emit an NT-style command line, and fix a bug that shows up on Solaris.
539a540
> #ifdef SOLARIS_CLIENT
550a552
> #endif /* SOLARIS_CLIENT */
586a589
> #ifdef SOLARIS_CLIENT
588a592,594
> #else
> strcat(commandline, " -u d:\WebStone2.0\filelist");
> #endif /* SOLARIS_CLIENT */
1395a1402
>
1398a1406,1408
> size_t count;
> char **sptr, **dptr;
> struct in_addr *iptr;
1406,1407d1415
< dest->h_addr_list = src->h_addr_list;
< }
1408a1417,1446
> /*
>  * ADDED: by Greg Burrell of Mindcraft Inc. 10/22/96
>  * PROBLEM: we can't just do the assignment:
>  *
>  *     dest->h_addr_list = src->h_addr_list
>  *
>  * because those are just pointers and the memory pointed to
>  * may get overwritten during the next gethostbyname() call.
>  * In fact, that happens on Solaris 2.5
>  *
>  * FIX: Make a copy of the h_addr_list of a hostent structure.
>  * h_addr_list is really an array of pointers. Each pointer
>  * points to a structure of type in_addr. So, we allocate space
>  * for the structures and then allocate space for the array of
>  * pointers. Then we fill in the structures and set up the array
>  * of pointers.
>  */
> for(count = 0, sptr = src->h_addr_list; *sptr != NULL; sptr++, count++);
> if ((dest->h_addr_list = malloc(count + 1)) == NULL)
>     return 0;
> if ((iptr = malloc(count * sizeof(struct in_addr))) == NULL)
>     return 0;
> for (sptr = src->h_addr_list, dptr = dest->h_addr_list;
>      *sptr != NULL; sptr++, dptr++, iptr++) { ... }
> *dptr = NULL;
> return 1;
> }

Appendix 2: Operating System Configuration

System Identification

The output from uname -a is:

SunOS ultra 5.5.1 Generic sun4u sparc SUNW,Ultra-2

Run Time Parameters

The TCP/IP listen backlog was increased to 1024 via the command:

/usr/sbin/ndd -set /dev/tcp tcp_conn_req_max 1024

The Netscape Enterprise 2.0 web servers were told to use this new value in their start-up configuration files (magnus.conf):

ListenQ 1024

Active Services

These daemons and processes were active at the time the tests were run (ps -ae was used):

sched
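As background for the webmaster.c fix in Appendix 1: gethostbyname() returns a pointer to storage owned by the resolver library, and that storage may be overwritten by the next lookup (the report notes this happens on Solaris 2.5), so a caller that wants to keep the result must deep-copy h_addr_list. The following self-contained C sketch illustrates the idea; it is not the WebStone code, and the function and variable names are our own.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/*
 * Deep-copy the address list of a hostent.  gethostbyname() hands back
 * a pointer into resolver-owned storage, so saving src->h_addr_list
 * directly is unsafe once another lookup is made -- the bug the
 * Appendix 1 patch works around.  On Solaris, link with -lnsl -lsocket.
 */
static char **copy_addr_list(const struct hostent *src)
{
    size_t count = 0;
    char **sptr;

    for (sptr = src->h_addr_list; *sptr != NULL; sptr++)
        count++;

    /* one extra slot for the NULL terminator */
    char **list = malloc((count + 1) * sizeof(char *));
    if (list == NULL)
        return NULL;

    for (size_t i = 0; i < count; i++) {
        list[i] = malloc(src->h_length);
        if (list[i] == NULL)
            return NULL;               /* error handling kept minimal */
        memcpy(list[i], src->h_addr_list[i], src->h_length);
    }
    list[count] = NULL;
    return list;
}

int main(void)
{
    struct hostent *hp = gethostbyname("localhost");
    if (hp == NULL) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }

    char **addrs = copy_addr_list(hp);
    if (addrs == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    /* The copy stays valid even if gethostbyname() is called again. */
    struct in_addr a;
    memcpy(&a, addrs[0], sizeof(a));
    printf("first address: %s\n", inet_ntoa(a));
    return 0;
}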
Copyright © 1997-98 Mindcraft, Inc. All rights reserved. Mindcraft is a registered trademark of Mindcraft, Inc.

For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324