Mindcraft Certified Performance Comparison Report

Ultra Enterprise 3000 Web Server
Accelerated by
Novell BorderManager FastCache

  • Using a Compaq ProLiant 6000
        CPU: Intel 200 MHz Pentium Pro

  • Using an Intel MB440LX DP
        CPU: Intel 266 MHz Pentium II

Contents

Executive Summary
Mindcraft's Certification
Performance Analysis
Test Procedures
SUT Configuration
Test Lab
Glossary

Executive Summary

This Certified Performance Report shows that Novell's BorderManager FastCache can increase the perceived Web-serving capacity of a Sun Ultra Enterprise 3000 system more than sevenfold.  "Perceived Web-serving capacity" is the maximum number of HTTP GET requests per second (rps) that a specified server environment can satisfy. The table below summarizes the peak performance of each server environment tested.

Novell BorderManager FastCache
Performance Summary

Server Environment                                            Peak Performance

Sun Ultra Enterprise 3000, stand-alone                         550 requests/second

Sun Ultra Enterprise 3000 accelerated by a Compaq
   ProLiant 6000 with BorderManager FastCache                 3288 requests/second

Ultra Enterprise 3000 accelerated by an Intel
   MB440LX DP with BorderManager FastCache                    4055 requests/second

We used the Ziff-Davis WebBench™ 1.1 benchmark for the performance measurements in this report. Our testing started with a baseline test of a Sun Ultra Enterprise 3000 in a stand-alone configuration using static HTML data. Its peak perceived Web-serving capacity was 550 rps. We then tested a Web server environment with a Compaq ProLiant 6000 running BorderManager FastCache between the Ultra Enterprise 3000 and the load-generating clients. This configuration gave a perceived Web-serving capacity of 3288 rps for static HTML data. The third environment we tested put an Intel MB440LX DP server running BorderManager FastCache between the Ultra Enterprise 3000 and the load-generating clients. The performance of this configuration peaked at 4055 rps for static HTML data. Figure 1 shows the connection rate performance curves for these Web server environments. Similarly, Figure 2 shows the corresponding network throughput for these environments. In the figures in this report, the term "threads" means the number of WebBench requestors simultaneously making requests of the server environment. The number of threads is not indicative of the number of real users that can access a Web server environment; it is shown here to help those who want to reproduce these tests.

Figure 1: BorderManager FastCache Static Connection Acceleration


Figure 2: BorderManager FastCache Static Throughput Acceleration


We also tested the Ultra Enterprise 3000 baseline Web server environment and the ProLiant 6000 accelerated environment with a mix of 80% static HTML data and 20% dynamic data. The dynamic data was generated by a CGI program running on the Ultra Enterprise 3000. The baseline performance peaked at 86 rps with eight threads. The accelerated environment reached 1067 rps at 16 threads. Figures 3 and 4 show the connection rate and throughput, respectively, for these two environments.

Figure 3: BorderManager FastCache Mixed Connection Acceleration


Figure 4: BorderManager FastCache Mixed Throughput Acceleration


Server Environments Tested

Mindcraft tested three Web server environments:

  1. Baseline: A stand-alone Sun Ultra Enterprise 3000. The detailed configuration is shown in Table 1 and how it fits into the test environment is shown in Figure 5. Note that two pairs of Fast Ethernet hubs were linked together to create two networks.
  2. ProLiant 6000 Accelerated: A combination of a Sun Ultra Enterprise 3000 with a Compaq ProLiant 6000 running BorderManager FastCache. The detailed configuration of the ProLiant 6000 is shown in Table 2 and how it fits into the test environment is shown in Figure 6.
  3. Intel Accelerated: A combination of a Sun Ultra Enterprise 3000 with an Intel MB440LX DP running BorderManager FastCache. The detailed configuration of the Intel MB440LX DP is shown in Table 3 and how it fits into the test environment is shown in Figure 7.

Table 1: Sun Ultra Enterprise 3000 Configuration

Sun Ultra Enterprise 3000

System
   CPU: 2 x 167 MHz UltraSPARC™
   Cache: L1: 32 KB; L2: 512 KB
   RAM: 320 MB
   Disk: 2 Seagate ST15230w 4.2 GB drives (FastWide SCSI-2)
Operating System: Solaris 2.5.1 with Solaris Internet Server Supplement 1.0
   Tuning:
      tcp_close_wait_interval=60000
      tcp_rexmit_interval_min=1
      tcp_smallest_anon_port=2048
      tcp_conn_req_max=4096
      tcp_conn_hash_size=262144
      rlim_fd_max=2048
      rlim_fd_cur=1024
      sg_max_size=1024
Network: 2 100Base-TX Sun adapters; half-duplex
Web Server Software: Netscape Enterprise Server 3.0; logging was on
   Magnus.conf tuning:
      DNS=off
      RqThrottle=1024
      DaemonStats=off
      ACLFile removed
   Obj.conf tuning:
      Init fn=cache-init
      MaxNumberOfCachedFiles=1024
      MaxNumberOfOpenCachedFiles=1024
      MaxTotalCachedFileSize=262144
      MaxCachedFileSize=10485760
      KeepAliveTimeout=180

 

Figure 5: Baseline Web Server Environment


Table 2: Compaq ProLiant 6000 Configuration

Compaq ProLiant 6000

System
   CPU: 200 MHz Pentium Pro
   Cache: L1: 16 KB (8 KB I + 8 KB D); L2: 512 KB
   RAM: 128 MB EDO
   Disk: RAID 5 with 5 x 4.3 GB FastWide SCSI disks; SMART-2 Array controller (50% Read / 50% Write acceleration)
Operating System: intraNetWare 4.11
Network: 5 x 100Base-TX Intel EtherExpress PRO/100 Server Adapters in polled mode (four adapters went to client systems and one to the Sun Web server); half-duplex
Web Server Accelerator: BorderManager FastCache; logging was off

Figure 6: Compaq Accelerated Web Server Environment


Table 3: Intel MB440LX DP Configuration

Intel MB440LX DP

System
   CPU: 266 MHz Pentium II
   Cache: L1: 32 KB (16 KB I + 16 KB D); L2: 512 KB
   RAM: 512 MB SDRAM
   Disk: 12 GB UltraWide SCSI
Operating System: intraNetWare 4.11
Network: 5 x 100Base-TX Intel EtherExpress PRO/100 Server Adapters in polled mode (four adapters went to client systems and one to the Sun Web server); half-duplex
Web Server Accelerator: BorderManager FastCache; logging was off

 

Figure 7: Intel Accelerated Web Server Environment


Test Lab

Figures 5, 6, and 7 show the test lab configurations used. Each client system was configured as shown in Table 4. When threads were added during the testing, they were added from each subnet in order to balance the load. Each client system always ran four threads of the WebBench client program for all tests.

Table 4: Client Systems

Client Systems

System
   CPU: 200 MHz Pentium Pro
   Cache: L1: 16 KB (8 KB I + 8 KB D); L2: 256 KB
   RAM: 32 MB EDO
   Disk: 2 GB SCSI
Operating System: Windows NT Workstation 4.0 with Service Pack 3 and the post-SP3 TCP/IP hot fix installed
Network: 100Base-TX Intel EtherExpress PRO/100 Adapter; half-duplex

The networks were dedicated to this testing only. The Fast Ethernet hubs we used were made by Bay Networks.

Test Procedures

Mindcraft followed the standard WebBench 1.1 test procedures. For all of the static tests, we modified the ZD_STATIC_KEEPALIVE_V11.TST test suite file to use four threads per client, to change the ramp-down time to 10 seconds, to change the test time to 120 seconds, and to test using 16 to 144 threads in increments of 16 (we tested to 152 threads only for the baseline test, to determine where the Web server's performance fell off). All of the static tests used the ZD_STATIC_V11.WL workload file unchanged. The static data consists of 10 classes of 10 files, where each file in a class is the same size. The file sizes start at 256 bytes and double for each class, up to 128 KB for the last class. The 100 files of static data total 2.5 MB. The weighted average file size for the ZD_STATIC_V11.WL workload is 8,348 bytes.
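
As a cross-check on the workload description above, the class sizes and the 2.5 MB total can be reproduced from the stated parameters alone. The short C sketch below assumes only what is stated here (10 classes of 10 equal-sized files, starting at 256 bytes and doubling per class); the 8,348-byte weighted average additionally depends on the per-class access percentages defined in ZD_STATIC_V11.WL, which are not reproduced in this report.

   /* check_static_size.c: reproduces the static data-set size described above.
      Assumes 10 classes of 10 files each; class k holds files of 256 * 2^k bytes. */
   #include <stdio.h>

   int main(void)
   {
       long total = 0;
       int k;
       for (k = 0; k < 10; k++) {
           long size = 256L << k;          /* 256 B, 512 B, ..., 128 KB */
           total += 10 * size;             /* 10 files per class */
           printf("class %2d: %7ld bytes per file\n", k + 1, size);
       }
       printf("total: %ld bytes (about 2.5 MB)\n", total);  /* 2,618,880 bytes */
       return 0;
   }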

For all of the mixed tests, we modified the ZD_UNIX_SIMPLE_CGI20_KEEPALIVE.TST test suite to use four threads per client, to change the ramp-down time to 10 seconds, to change the test time to 120 seconds, and to test using 16 to 144 threads in increments of 16. All mixed tests were done with keep-alive turned on. All of the mixed tests used the ZD_UNIX_SIMPLE_CGI20.WL workload file unchanged. The mixed test data consists of 10 classes of 10 files, where each file in a class is the same size, and one class of dynamic data. The file sizes for the static data are the same as for the ZD_STATIC_V11.WL workload; however, the access percentages for each class of files are lower to allow for the dynamic data. The weighted average file size for the static data in the ZD_UNIX_SIMPLE_CGI20.WL workload is 6,735 bytes. The dynamic data is generated by a CGI program written in C that runs on the Web server system.
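
This report does not include the source of the CGI program, and the sketch below is not the actual WebBench CGI executable; it is only a minimal illustration, under that assumption, of the general form such a program takes: a small C program that the Web server launches for each dynamic request and whose standard output becomes the HTTP response.

   /* simple_cgi.c: hypothetical minimal CGI program in C, illustrating how the
      dynamic portion of the mixed workload is produced. Not the benchmark's CGI. */
   #include <stdio.h>
   #include <stdlib.h>

   int main(void)
   {
       const char *query = getenv("QUERY_STRING");   /* set by the Web server */

       /* A CGI reply is headers, a blank line, then the response body. */
       printf("Content-type: text/html\r\n\r\n");
       printf("<html><body><p>Dynamically generated reply.</p>\n");
       printf("<p>Query string: %s</p></body></html>\n", query ? query : "(none)");
       return 0;
   }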

We did not test the Intel system with the mixed workload because of time limitations. Also, we tested the Compaq-based accelerator only up to 64 threads for the mixed tests because the Sun system reached its peak request rate at 32 threads and because of test time limitations. Had we tested with more threads, we expect that mixed-data performance would have improved beyond the reported 1067 rps, because the Sun's CPU was only 12% utilized and the Compaq's CPU was less than 40% utilized.

Performance Analysis

The purpose of these tests was to measure the peak performance improvement offered by Novell's BorderManager FastCache. Our performance measurements of the baseline configuration, a stand-alone Sun Ultra Enterprise 3000 running Netscape's Enterprise Server 3.0, were done with configuration parameters set to obtain the maximum performance.

When analyzing the performance of a Web server, there are four primary factors to examine: the CPU utilization of the server system, the disk subsystem performance, the networking subsystem performance, and the Web server software performance. The Sun system's overall CPU utilization reached 97% and 81% at the peak response rate for static and mixed data, respectively. For the static data tests, the CPU was the main factor limiting performance of the Sun system since it was fully utilized and the other performance factors were not at their limits. For the mixed data test, the CPU was not fully utilized because of the disk overhead needed to launch the CGI program that generated the dynamic data; even so, it was the largest factor limiting overall performance. Because the static data was small enough to fit into RAM, the performance of the disk subsystem played almost no part in the measured performance. In addition, the network usage peaked at less than 20 Mbits/second per subnet, well below the practical peak of 60 to 70 Mbits/second for a half-duplex 100Base-TX network. So the network did not limit the Sun's performance.
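
The conclusion that the network did not limit the baseline system also follows from a back-of-the-envelope estimate using only numbers reported here: 550 requests/second at a weighted average file size of 8,348 bytes is roughly 37 Mbits/second of HTTP payload, or about 18 Mbits/second on each of the baseline's two client subnets, ignoring HTTP header and TCP/IP overhead. The small C sketch below carries out that arithmetic.

   /* payload_estimate.c: payload bandwidth at the Sun's static peak
      (payload only; HTTP header and TCP/IP overhead are ignored). */
   #include <stdio.h>

   int main(void)
   {
       double rps = 550.0;            /* baseline static peak from this report */
       double avg_bytes = 8348.0;     /* weighted average file size, ZD_STATIC_V11.WL */
       double mbits = rps * avg_bytes * 8.0 / 1e6;
       printf("total payload: %.1f Mbits/s; per subnet (2 subnets): %.1f Mbits/s\n",
              mbits, mbits / 2.0);    /* about 36.7 and 18.4 */
       return 0;
   }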

We can conclude that the primary performance-limiting factor was the Sun's CPUs, affected, of course, by the efficiency of its operating system and the Netscape Web server software. Given the performance limitation of the baseline Web server, it is a good candidate for performance improvement using the BorderManager FastCache.

The BorderManager FastCache is able to improve the perceived performance of the Sun system by the factors shown in Table 5. This improved performance is real for people accessing a Web site accelerated by Novell's BorderManager FastCache. The performance of the Web server system itself, however, does not improve. What happens is that BorderManager FastCache serves the requested files directly from its cache rather than getting them from the Web server. This means that even when there are requests for non-cacheable data, such as dynamic data generated by a CGI program, BorderManager FastCache can still give significant perceived performance improvements. In fact, the acceleration factors in Table 5 show that BorderManager FastCache is more than twice as effective for Web sites with a mix of static and dynamic data as it is for Web sites with only static data. This increased effectiveness comes from off-loading the processing of almost all static-data requests from the Web server.

Table 5: BorderManager FastCache Acceleration Factors

BorderManager FastCache Acceleration Factors

Measurement               Compaq     Intel
Requests/Second
    Static Data              6.0       7.4
    Mixed Data              12.4       N/A
Throughput (Mbits/s)
    Static Data              5.7       7.0
    Mixed Data              13.3       N/A
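
The requests/second rows of Table 5 follow directly from the peak request rates reported earlier: 3288 and 4055 rps for the accelerated environments versus 550 rps for the stand-alone Sun with static data, and 1067 versus 86 rps with mixed data. The throughput rows come from the measured Mbits/second data behind Figures 2 and 4, which are not tabulated in this report, so the C sketch below reproduces the request-rate factors only.

   /* accel_factors.c: derives the requests/second acceleration factors in
      Table 5 from the peak request rates reported in this document. */
   #include <stdio.h>

   int main(void)
   {
       double base_static = 550.0, base_mixed = 86.0;         /* Sun stand-alone  */
       double compaq_static = 3288.0, compaq_mixed = 1067.0;  /* ProLiant 6000    */
       double intel_static = 4055.0;                          /* Intel MB440LX DP */

       printf("Compaq, static data: %.1fx\n", compaq_static / base_static);  /* 6.0  */
       printf("Compaq, mixed data:  %.1fx\n", compaq_mixed / base_mixed);    /* 12.4 */
       printf("Intel,  static data: %.1fx\n", intel_static / base_static);   /* 7.4  */
       return 0;
   }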

 

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report between July 21 and August 2, 1997, at Novell's Superlab in Provo, Utah.

Mindcraft certifies that the results reported herein fairly represent the performance of systems tested as measured by Ziff-Davis's WebBench™ 1.1 test suite running the ZD_STATIC_V11.TST workload. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Glossary

CGI
Common Gateway Interface, a standard method for a Web server to pass request information to an external program and to return the program's output to the client.
Dynamic data
Data that is generated by a program running on a Web server system and passed to a client. One method of generating dynamic data is to use a CGI program.
Keep-alive
A standard way for a client to ask a Web server to keep a connection open so that the client can make multiple requests for data without having to establish a new connection for each one.
Perceived Web serving capacity
The maximum number of HTTP GET requests per second (rps) that a specified server environment can satisfy.
Specified server environment
The particular combination of hardware and software that is being measured. It may be a stand-alone Web server, clusters of computers, or other configurations. The specified server environment is treated as a black box.
Static data
Data that is kept in files and that does not change.
Thread
Short for "thread of control in a process." As used in this report, the thread count is the number of WebBench requestor threads simultaneously making requests of the server environment.

 


Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324