Mindcraft Certified Performance Comparison Report

Netscape Enterprise Server 2.0.1
and
Lotus Domino Web Server 1.5a

Contents

Executive Summary
Introduction
Mindcraft's Certification
Performance Analysis
Test Procedures
SUT Configuration
Test Lab
Glossary
Appendix 1: WebStone Changes
Appendix 2: O/S Configuration

Executive Summary

March 10, 1997

This report compares the performance of Netscape Enterprise Server 2.0.1 and Lotus Domino Web Server 1.5a serving HTML text. All tests were done on a single-processor Hewlett-Packard NetServer 5/133 LS2 running Microsoft Windows NT 4.0.

Three sets of tests were run: Netscape Enterprise Server serving HTML files, Lotus Domino serving the same files, and Lotus Domino serving from a Notes database containing the HTML files.

The WebStone 2.5 benchmark was used to test the system. All three configurations were tested over the range of 4 to 24 WebStone client processes. We restricted the testing to small loads because at higher loads the Domino server experienced very high error rates and lower performance (see data summary). In addition, the Netscape server was tested with loads from 30 to 240 WebStone client processes.

Figure 1 shows the peak connection rates measured in the range from 4 to 24 client processes. Figure 2 shows the corresponding data throughput rates. A table summarizing the peak performance follows Figure 2.

Figure 1: Peak Connection Rate

Figure 2: Peak Throughput

Peak Performance Data
Hewlett-Packard NetServer LS2, Windows NT Server 4.0

Server Configuration                                      Connections/s      Throughput (Mbits/s)
Lotus Domino Web Server 1.5a, HTML from Notes database    41 @ 4 clients     3.2 @ 4 clients
Lotus Domino Web Server 1.5a, HTML from files             93 @ 8 clients     5.7 @ 8 clients
Netscape Enterprise Server 2.0.1, HTML from files         518 @ 24 clients   32.1 @ 24 clients

Introduction

This Performance Report was commissioned by Netscape Communications Corporation to compare the performance of its Enterprise Server 2.0.1 to that of Lotus Domino Web Server 1.5a. The WebStone 2.5 run rules were followed, with two exceptions: the fileset used was the Silicon Surf fileset from WebStone 1.1, and the range of loads used was from 4 to 24 WebStone client processes.

The Silicon Surf fileset was chosen because the standard WebStone 2.5 fileset includes one data file that is five megabytes in size, and we were not confident that we could load this file into the Notes database properly. The fileset was translated from the format used by WebStone 1.1 to that used by WebStone 2.0 and WebStone 2.5. The narrower range of loads was used because even at loads of 20 and 24 WebStone client processes, tests of the Domino server gave error rates of 61 to 442 errors per second; other servers we have tested give error rates of 0.0 to 0.1 errors per second at the highest attainable load. The fileset used and the complete results appear in the data summary tables.

The Netscape Enterprise Server was also tested at loads ranging up to 240 WebStone client processes.

Mindcraft's Certification

Mindcraft, Inc. conducted the performance tests described in this report between March 3 and March 7, 1997, in the Netscape performance laboratory in Mountain View, California. Mindcraft used the WebStone 2.5 benchmark to measure performance.

Mindcraft certifies that the results reported herein fairly represent the performance of the Netscape Enterprise Server 2.0.1 and of the Lotus Domino Web Server 1.5a running on a Hewlett-Packard NetServer 5/133 LS2 under Microsoft's Windows NT Server 4.0 operating system as measured by the WebStone 2.5 benchmark. Our test results should be reproducible by others who use the same test lab configuration as well as the computer and software configurations and modifications documented in this report.

Performance Analysis

This analysis is based on the complete WebStone benchmark results for the tests described here.

The WebStone 2.5 benchmark stresses a system's networking ability in addition to other aspects of server performance. The best way to see if there is unused capacity on a server computer running WebStone is to look at the CPU utilization. It was 100% for all of the tests when the peak performance was attained. The maximum network bandwidth used was 33.5 Mbits/second, for the Netscape server with 240 WebStone client processes. This is well below the maximum expected available bandwidth of a 100Base-TX Ethernet, and below the aggregate performance of the seven 10Base-T Ethernet connections to the client systems. Therefore, the networks did not significantly limit the performance of the Web servers as measured in these tests.
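
As a quick check of that headroom claim, the arithmetic below compares the peak measured throughput against the slower of the two network stages. All figures are taken from this report's network configuration; this is illustrative arithmetic, not part of the benchmark:

    # Sanity check: was the network a bottleneck? (Figures from this report.)
    peak_throughput_mbits = 33.5      # Netscape server at 240 WebStone client processes
    server_link_mbits = 100.0         # 100Base-TX link between the switch and the server
    client_links_mbits = 7 * 10.0     # seven client systems, each on 10Base-T

    bottleneck = min(server_link_mbits, client_links_mbits)
    print(f"Peak load used {peak_throughput_mbits / bottleneck:.0%} "
          f"of the {bottleneck:.0f} Mbit/s bottleneck")
    # -> about 48% of 70 Mbit/s, leaving substantial network headroom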

Figures 3 and 4 show the network throughput and connection rates, respectively, of the Lotus and Netscape servers running on the Hewlett-Packard NetServer LS2. The performance curves are nearly flat for all three configurations over the range of 8 through 24 clients. Above this load level the performance of the Lotus Domino server degrades rapidly, with increased latency (Figure 5) and greatly elevated error rates. The performance of the Netscape Enterprise Server increases slightly with increasing demand, up to the maximum test load of 240 WebStone client processes.

Figure 3: Throughput, 4 to 24 Clients

Figure 4: Connection Rate, 4 to 24 Clients

Figure 5: Latency, 4 to 24 Clients

The WebStone load that a server computer can support depends on four primary factors:

  • The bandwidth of the networks available;
  • The ability of the operating system to utilize the available network bandwidth;
  • The ability of the operating system to maximize the CPU time available to the Web server; and
  • The rate at which the Web server can process requests.

For all the tests presented here, the network throughput was well below the maximum capacity of the network hardware. The server platform was the same for all test runs, and each server used all available CPU cycles when peak performance was measured. That leaves Web server efficiency as the determining factor in the observed performance.


Test Procedures and Benchmark Configuration

Test Procedures

Mindcraft followed the standard WebStone 2.5 run rules with the following changes:

  • Instead of the standard run from 10 to 100 clients, we ran all three configurations from 4 to 24 clients. In addition, we ran tests for 30 to 240 clients in order to more fully demonstrate the capacity of the Netscape server; the corresponding testbed settings are sketched after this list.
  • Access logging was turned off, so the results would not be skewed by the overhead of recording the very long URLs for the data in the Notes database.
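
These load points are driven by WebStone's testbed configuration file. The excerpt below is a rough sketch only: the variable names follow the stock WebStone 2.5 conf/testbed format as we understand it, and the host names are placeholders, not the machines actually used in this lab:

    # WebStone 2.5 testbed excerpt (sketch; host names are placeholders)
    ITERATIONS="3"       # repeated iterations per load point
    MINCLIENTS="4"       # lowest load tested
    MAXCLIENTS="24"      # highest load in the three-way comparison
    CLIENTINCR="4"       # step between load points
    TIMEPERRUN="10"      # minutes per measurement run
    SERVER="webserver"   # system under test (placeholder name)
    PORTNO="80"
    CLIENTS="client1 client2 client3 client4 client5 client6 client7"

A second testbed with MINCLIENTS set to 30 and MAXCLIENTS set to 240 would cover the extended Netscape runs.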

The following basic set of procedures was used for performing these tests:

  • Error logging was turned on.
  • All test runs were done with a user logged in to the server console.

Test Data

The data files used for the static HTML testing were the default "Silicon Surf" fileset, described in filelist.ss as distributed with the WebStone 1.1 benchmark. The fileset was translated from the format used by WebStone 1.1 to that used by WebStone 2.0 and WebStone 2.5. See the data summary for the translated fileset file.
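
For reference, WebStone 2.x fileset files list one target per line as a URI followed by a relative access weight. The entries below are made-up examples of that format, not the actual translated fileset (which appears in the data summary):

    # WebStone 2.x filelist format (hypothetical entries for illustration)
    /ss/home.html      350    # URI, then relative access weight
    /ss/logo.gif       125
    /ss/products.html   40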

This fileset was designed to represent a real-world server load, based on an analysis of the access logs of Silicon Graphics, Inc.'s external Web site, http://www.sgi.com. Netscape's analysis of logs from other commercial sites indicated that the Silicon Surf access patterns were fairly typical of the Web at the time the fileset was designed.

The Silicon Surf model targets the following characteristics:

  • 93% of accessed files are smaller than 30 KB.
  • Average accessed file is roughly 7 KB.
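
These characteristics line up with the measured peaks. As a rough cross-check (illustrative arithmetic only), dividing the Netscape server's peak throughput by its peak connection rate recovers the average transfer size:

    # Cross-check: average bytes per connection implied by the peak numbers above.
    peak_mbits = 32.1   # Mbits/s at 24 clients (summary table)
    peak_conns = 518    # connections/s at 24 clients

    avg_kb = peak_mbits * 1e6 / 8 / peak_conns / 1024
    print(f"Implied average transfer: {avg_kb:.1f} KB per connection")
    # -> about 7.6 KB, consistent with the roughly 7 KB average file size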

Configuration of the System Tested

Web Server Software
Vendor: Netscape Communications Corp.
HTTP Software: Enterprise Server 2.0.1
Number of threads: Default
Server Cache: Default
Log Mode: Error Log Only
Tuning: DNS off, security off, ThreadSyncMin=2, ThreadSyncMax=2 (sketched as server directives following this section)

Vendor: Lotus Development Corp.
HTTP Software: Domino Web Server 1.5a
Number of threads: Default
Server Cache: Default
Log Mode: Error Log Only
Tuning: DNS off, security off
Computer System
Vendor: Hewlett-Packard
Model: NetServer LS2
Processor: 133 MHz Intel Pentium
Number of Processors: 2
Memory: 192 MB RAM
Disk Subsystem: 2 - Seagate ST32550WC 2GB drives
Disk Controller: 2 - Adaptec AIC-7870 SCSI controllers
Network Controllers: 1 - Digital EtherWorks 10/100Base-TX PCI adapter
Tuning: Tests were run with one processor active.
Operating System
Microsoft Windows NT 4.0 Server with Service Pack 2 and the tcpip.sys file from the hot fix published by Microsoft on February 11, 1997. All registry parameters were at their default settings. Unneeded services were turned off.
The Lotus Domino Web Server ran under Lotus Notes 4.5a.
Network
Type and Speed: 10Base-T Ethernet between each client system and the switch, 100Base-TX Ethernet between the switch and the server
Number of Nets: 1
Additional Hardware: 1 - Grand Junction FastSwitch 2800
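
As noted in the tuning entry above, the Netscape settings would ordinarily live in the server's magnus.conf file. The excerpt below is a sketch only: DNS and Security are standard magnus.conf directives, while the ThreadSyncMin/ThreadSyncMax values are simply transcribed from the tuning notes in the same one-directive-per-line style, not copied from the lab's actual file:

    # magnus.conf excerpt (sketch, not the actual lab file)
    DNS off             # no reverse DNS lookups on client connections
    Security off        # SSL disabled for these tests
    ThreadSyncMin 2     # thread tuning values from this report
    ThreadSyncMax 2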

Test Lab Configuration

In order to cause the Netscape Enterprise Server to use all available CPU cycles on the server computer system, we used seven WebStone client systems. One of these systems also served as the Webmaster, controlling the WebStone driver. The test lab network configuration used for this work is shown below:

Test Net Configuration

WebStone Client Computer Systems
Vendor: Silicon Graphics
Model: Indy
Processor: 133 MHz R4600
Number of Processors: 1
Memory: 32 MB
Disk Subsystem: One 1 GB SCSI Disk
Disk Controllers: Built-in SCSI
Network Controllers: Built-In 10Base-T Interface
Number of Clients: Seven
Operating System and Compiler
Operating System: IRIX 5.3 with patches 455, 517, 617, 670, 841, 844, and 926
Compiler: IRIX Development Option 5.3 (SC4-IDO-5.3)

The tests described in this report were performed on an isolated LAN that was quiescent except for the test traffic.


Glossary

Clients
Number of processes or threads simultaneously requesting Web services from the server.
Connections per second
Average rate of creation and destruction of client/server connections.
Errors per second
Error rate for a benchmark run. Errors detected include:
-- socket() or connect() failure
-- error sending a request to the server
-- read() failure while receiving data from the server
-- malformed response or error code from server
Latency
Average client wait for data to be returned.
Throughput
Average data transfer rate, in megabits per second.
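
To make the relationships among these metrics concrete, the sketch below computes them from a run's raw totals. It is a minimal illustration of the definitions above, not WebStone's actual reporting code, and the example numbers are merely in the vicinity of the Netscape peak reported here:

    # Illustrative computation of WebStone-style summary metrics from raw totals.
    def summarize_run(connections, bytes_moved, errors, run_seconds, total_wait_seconds):
        return {
            "connections_per_second": connections / run_seconds,
            "throughput_mbits": bytes_moved * 8 / run_seconds / 1e6,
            "errors_per_second": errors / run_seconds,
            "latency_seconds": total_wait_seconds / connections,  # average wait per request
        }

    # A 10-minute run at roughly the Netscape peak (518 connections/s, 32.1 Mbits/s):
    print(summarize_run(connections=310_800, bytes_moved=2_407_500_000,
                        errors=0, run_seconds=600, total_wait_seconds=14_000))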

Copyright © 1997-98. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324