We will be glad to conduct this Open
Benchmark at any mutually agreeable test site. Bob Young, President of Red Hat Software,
said in a
Salon
article published yesterday
that if an organization such as Ziff-Davis were brought into the
benchmark "...then absolutely we would be thrilled to participate." Well, PC Week has offered their lab as
the site for the Open Benchmark. Mindcraft believes that the PC Week
lab
would be an ideal location, and we hope that Red Hat and the Linux, Samba, and Apache experts will agree to use it.
We've Got A Time and
Place
John
Taschek from PC Week, a Ziff-Davis publication, has offered
the Ziff-Davis Labs in Foster City, California, for the Open Benchmark. The earliest he can make the lab available, consistent with the existing commitments of some of the key Linux, Samba, and Apache players, is the week of June 14. So we have a date!
Some people have asked why we want to use the same Dell server we used for our second test. There are two reasons:
- We want a witnessed, scientific test showing whether the results we got were accurate. This is the kind of test that Jeremy Allison, a vocal Linux and Samba proponent, has asked for. Changing the server would be unscientific.
- We want to verify the scalability and enterprise readiness of Linux. We believe that the best enterprise-class server to test this on would be one from a major computer manufacturer that supports Linux. The Dell server meets these criteria (see the joint Red Hat and Dell press release).
We have also seen that some people are concerned that
we picked a multiprocessor configuration knowing that Linux would not
perform as well as Windows NT Server. Red Hat's press release announcing
its Linux
6.0 Server Operating System
should allay those concerns. For
those who would still like to have a comparison on a uniprocessor configuration, we have
included one in the Open Benchmark.
Mindcraft believes that the best way to maintain its reputation as a credible source of information is to hold an Open Benchmark. Therefore, we welcome the opportunity to perform a benchmark of Linux and Windows NT Server that is open to the best experts in the Linux community. Mindcraft will participate in this benchmark at its own expense.
Mindcraft has withheld publication of its second Linux and Windows NT Server benchmark results (the test for which Linus Torvalds and others provided tuning suggestions) pending the response to this Open Benchmark invitation.
We call on Linus Torvalds to invite anyone he
chooses to tune Linux, Samba, and Apache. We also invite Red Hat to send anyone
they choose to participate in the benchmarking as a Linux Expert. In addition,
we invite Microsoft to tune Windows NT Server. The Linux Experts, Microsoft,
PC Week,
and Mindcraft will witness all tests.
Purposes
- To see if Mindcraft's second benchmark results are biased and not representative of Linux's true performance.
- To do a fair comparison of Linux and Windows NT Server 4.0 with Linux tuned by Linux Experts and with all testing witnessed by them.
Test Environment
The following test environment will duplicate, as closely as possible in the PC Week lab, the environment Mindcraft used for its second test of Linux and Windows NT:
- Mindcraft will arrange to have available the same system that was used for the second Linux/Windows NT Server test.
- The tests will use 4 x 100Base-TX networks for all configurations.
- Samba and NT file sharing tests will be done using the NetBench Enterprise mix. The NetBench tests will use either 144 clients or 72 clients, depending upon client availability in the PC Week lab. The same client setup will be used for both Linux/Samba and Windows NT.
- Apache and IIS Web server tests will be done using the WebBench zd_static_v20.tst suite, modified only to account for the number of clients used and to adjust the number of threads per client so that the total matches the number Mindcraft used in its first and second tests (two threads on each of the 144 clients used, for 288 in total). Thus, for 144 clients the benchmark will use two threads per WebBench client, and for 72 clients it will use four threads per WebBench client (see the sketch after this list). The same client setup used for NetBench testing will be used for both Linux and Windows NT WebBench testing.
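To make the thread arithmetic concrete, here is a minimal Python sketch (our illustration only, not part of the WebBench kit) that computes the per-client thread setting needed to keep the total number of request threads at the 288 used in Mindcraft's first and second tests:

    # Illustrative sketch: hold the total WebBench request threads at 288
    # (144 clients x 2 threads each), whatever client count the lab supplies.
    TOTAL_THREADS = 144 * 2  # total from Mindcraft's first and second tests

    def threads_per_client(num_clients):
        """Return the per-client thread setting for a given client count."""
        if TOTAL_THREADS % num_clients != 0:
            raise ValueError("client count must divide 288 evenly")
        return TOTAL_THREADS // num_clients

    print(threads_per_client(144))  # -> 2 threads per client
    print(threads_per_client(72))   # -> 4 threads per client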
Test Procedure
The testing will be divided
into three phases:
Phase 1: Reproduce the results of Mindcraft's second test.
Phase 2: Linux experts use anything available at the time of Mindcraft's second test. This will show how much better Mindcraft could have done at the time.
Phase 3: Get the best performance possible using today's software.
General Procedures
- Modifications to the Open Benchmark Procedures
  - These Open Benchmark procedures may be modified if all parties agree.
  - If all parties cannot agree to modify these procedures, PC Week will make the final decision regarding any changes.
- Witnessing the Tests
  - A PC Week representative will witness all tests to be sure they are conducted fairly.
  - One or more representatives from the Linux experts will witness all tests.
  - A Mindcraft representative will witness all tests.
  - If Microsoft attends, they may have a representative witness all tests.
- Obtaining Software to Test
  - PC Week will provide the version of Windows NT Server and Red Hat Linux 5.2 to be used.
  - Windows NT Service Packs will be provided by PC Week or downloaded from Microsoft's public Web site.
  - Updates to Red Hat Linux 5.2, Apache, and Samba will be provided by PC Week or downloaded from publicly accessible Web sites.
  - Red Hat Software will provide Red Hat 6.0, if the Linux experts choose to use it.
- Test Lab Validation
- Client Systems
- Technical Support
  - If Microsoft attends the Open Benchmark, they may provide Mindcraft with technical support and may perform tuning, configuration, or patching along with Mindcraft, consistent with the constraints of each phase. If Microsoft does not attend, Mindcraft may contact Microsoft for technical support.
  - The Linux experts are free to seek any technical support they want from any source.
- Test Sequence and Efficiency
  - If it is acceptable to the Linux experts and if Mindcraft can obtain an additional disk to hold a second operating system, both parties will conduct each phase of the Open Benchmark in the sequence specified below. The Linux experts can put each OS disk through any tests they want and can select the disk they want to use. Otherwise, in order to make the testing run as efficiently as possible, the Linux experts will conduct all three phases of their testing before Mindcraft conducts its two phases. This will eliminate the need to reformat and reload the operating system disk, saving a great deal of time and reducing the possibility of errors.
  - Given the number of tests and the availability of the PC Week lab, each party will be limited to one day in which to tune, patch, and debug their software. If, at the end of Phase 3, there is extra time, the parties can use the time to rerun any tests they want with additional tuning, patching, and debugging. The extra time will be divided evenly between the parties.
- General NetBench Test Procedures
  - NetBench tests will be performed using the Enterprise mix in its standard timed testing mode. The only change permitted to the Enterprise mix will be to account for the number of client systems used.
  - Before starting any NetBench test, the data disk(s) will be reformatted and the NetBench software will be loaded onto the freshly formatted data disk(s).
  - Either party may conduct partial NetBench tests to understand how the benchmark works and to gather information to help them tune their software. PC Week can limit how many partial NetBench tests any party may run and the length of any partial test.
- General WebBench Test Procedures
  - WebBench tests will be performed using the zd_static_v20.tst test suite and its standard workload files. The only change permitted to the zd_static_v20.tst test suite will be to account for the number of client systems and threads used.
  - Either party may conduct partial WebBench tests to understand how the benchmark works and to gather information to help them tune their software. PC Week can limit how many partial WebBench tests any party may run and the length of any partial test.
Tunes, Patches and Debugging
The tunes allowed include:
- Changing or adding operating system, Web server, or file server constants via a GUI tool, by editing source code, or at run time via facilities like the /proc filesystem (see the sketch after this list).
- Changing operating system, Web server, and file server configuration files.
- Recompiling and/or restarting the operating system, Web server, and file server.
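As an illustration of the kind of run-time tuning the first item above permits, here is a minimal Python sketch that raises a kernel constant through the /proc filesystem. The parameter (/proc/sys/fs/file-max, the kernel's open-file limit) and the value shown are illustrative assumptions, not settings prescribed by or used in the Open Benchmark, and the script must be run as root on a Linux system:

    # Illustrative sketch only: adjust a kernel constant at run time via /proc.
    # The parameter and value are examples of a permitted tune, not benchmark
    # settings. Requires root privileges on a Linux system.
    from pathlib import Path

    PARAM = Path("/proc/sys/fs/file-max")  # kernel open-file limit exposed via /proc
    NEW_VALUE = "65536"                    # example value only

    old_value = PARAM.read_text().strip()
    PARAM.write_text(NEW_VALUE + "\n")     # takes effect immediately, no reboot needed
    print("file-max: %s -> %s" % (old_value, PARAM.read_text().strip()))

Equivalent persistent changes could also be made through configuration files, which the second item in the list allows.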
Patches are defined as any changes
to the operating system, Web server, or file server that alter the programming
logic. Tunes are not considered patches. Patches can be in source code form or
in binary form, such as Service Packs.
Debugging, tracing system calls, and other methods to help the participants understand how their products are behaving are allowed. Participants may apply what they learn to tuning and patching their products, consistent with the phase of the Open Benchmark being conducted at the time.
Phase 1: Reproduce
Mindcraft's Results
This phase addresses the concerns that Linus Torvalds, Alan Cox, Jeremy Allison, and others have expressed about not being able to be in the lab when Mindcraft conducted its second test. We'll do it again with them present. Alan Cox has also expressed concern that the Linux experts would be limited to software that was available at the time Mindcraft conducted its second test; they will be able to use the very latest software available during Phase 3.
The
Linux experts will conduct this phase of the Open Benchmark using the following
procedures:
- Mindcraft will disclose to the Linux experts both the Linux and Windows NT Server test results from its second test. These results are not for publication until the results of the Open Benchmark are published, consistent with the Publishing the Outcome section.
- Mindcraft will disclose in detail the hardware configuration, Linux tunes, configuration files, patches, and software versions it used for the second test. Mindcraft used Linux 2.2.6, Apache 1.3.6, Samba 2.0.3, and a TCP patch provided by David Miller.
- The Linux experts will format all disks and verify that the hardware is configured as Mindcraft disclosed.
- The Linux experts will install, configure, and tune all software as Mindcraft disclosed.
- The Linux experts will run the following tests:
Phase 1 Linux Tests
Test | CPUs | RAM | Benchmark
1 | 1 | 256 MB | NetBench
2 | 1 | 256 MB | WebBench
3 | 4 | 1 GB | NetBench
4 | 4 | 1 GB | WebBench
Mindcraft will conduct this phase of the
Open Benchmark using the following procedures:
- Mindcraft will disclose the hardware configuration, Windows NT Server tunes, configuration files, patches, and software versions it used for the second test. Mindcraft used Windows NT Server 4.0 with Service Pack 4 applied, IIS 4, and the Windows NT Option Pack.
- Mindcraft will format all disks and verify that the hardware is configured as Mindcraft disclosed.
- Mindcraft will install, configure, and tune all software as Mindcraft disclosed.
- Mindcraft will run the following tests:
Phase 1 Windows NT Server Tests
Test | CPUs | RAM | Benchmark
5 | 1 | 256 MB | NetBench
6 | 1 | 256 MB | WebBench
7 | 4 | 1 GB | NetBench
8 | 4 | 1 GB | WebBench
Phase
2: Linux Experts Use and Tune The Best Software Available at the Time Mindcraft Did
Its Second Test
The purpose of Phase 2 is to see if the
Linux experts could have achieved higher Linux performance if they had been present
when Mindcraft conducted its second test. It directly addresses concerns on lab
accessibility raised by Linus Torvalds, Jeremy Allison, and Alan Cox.
In this phase the Linux experts can use any versions of the operating system, Apache, or Samba, including patches to any of them, that were available on generally accessible Web or ftp sites, that were available for sale in stores, or that were available to Mindcraft at the time it started its second test, April 20, 1999. The Linux experts may make any tunes, patches, and configuration changes they want; David Miller's TCP patch may be used.
The Linux experts will conduct this phase of
the Open Benchmark using the following procedures:
- The Linux experts will install, configure, and tune all software as they choose.
- The Linux experts will run the following tests:
Phase 2 Linux Tests
Test | CPUs | RAM | Benchmark
9 | 1 | 256 MB | NetBench
10 | 1 | 256 MB | WebBench
11 | 4 | 1 GB | NetBench
12 | 4 | 1 GB | WebBench
There are no Mindcraft tests to conduct in this
phase.
Phase 3: Get the Best Performance Possible Using
Today's Software
In this phase the Linux experts and Mindcraft can
use any versions of the operating system, Web server, or file server, including
patches to any of them, that are generally available on the Web, at ftp
sites, in stores, or from the product vendor at the time the Open
Benchmark is conducted. The Linux experts and Mindcraft may make any tunes,
patches, and configuration changes they want, consistent with the general
availability constraint.
The
Linux experts will conduct this phase of the Open Benchmark using the following
procedures:
- The Linux experts will install, configure, and tune all software as they choose.
- The Linux experts will run the following tests:
Phase 3 Linux Tests
Test | CPUs | RAM | Benchmark
13 | 1 | 256 MB | NetBench
14 | 1 | 256 MB | WebBench
15 | 4 | 1 GB | NetBench
16 | 4 | 1 GB | WebBench
Mindcraft will conduct this phase of the
Open Benchmark using the following procedures:
- Mindcraft will install, configure, and tune all software as it chooses.
- Mindcraft will run the following tests:
Phase 3 Windows NT Server Tests
Test | CPUs | RAM | Benchmark
17 | 1 | 256 MB | NetBench
18 | 1 | 256 MB | WebBench
19 | 4 | 1 GB | NetBench
20 | 4 | 1 GB | WebBench
Publishing the Outcome
Mindcraft, any of the participating Linux Experts, Microsoft, and PC Week will receive the raw test results and will have unrestricted use of them.
Mindcraft and any of the Linux Experts that want to participate will issue a joint press release describing the test results. The press release will include the test results, will be factual in tone, and will be positive about the opportunity to have an Open Benchmark with the Linux experts involved. There will be quotations from Linus Torvalds or his designee, Red Hat (if they want to participate), and Bruce Weiner (Mindcraft).
No test results or press releases will be published until PC Week has had the opportunity to publish a story about the Open Benchmark. If PC Week chooses not to publish a story within two weeks of the conclusion of the Open Benchmark, all participating parties are free to publish the results consistent with this "Publishing the Outcome" section.
Mindcraft will issue a report on its Web site similar in structure to its first report.
Red Hat and any of the Linux experts may generate their own reports regarding the test.
- Corrected the test lab name to Ziff-Davis Labs in Foster City, California.
- Changed the test date to the week of June 14, 1999 because of lab availability.
Change List - May 7, 1999
We made the following changes to the original invitation of May 4, 1999. They reflect PC Week's offer to use their lab for the Open Benchmark and clarify some items on which we have received feedback. The changes are summarized by section.
Rationale and Invitation
- Bob Young's offer to test at Ziff-Davis has been added.
- PC Week's offer has been added.
- Mindcraft's acceptance of the PC Week lab has been added.
- PC Week's access to the benchmark results has been added.
Test Environment
- The client system specification in item #3 was moved to the Test Procedures section. In addition, both Windows 9x and Windows NT clients will be used.
- The number of clients to be used is more clearly specified in item #4 and structured to be consistent with the number used in Mindcraft's second test and with the number of clients available in the PC Week lab.
- The number of threads per WebBench client is more exactly specified in item #5. It now is specified to duplicate the total number of threads making requests on the server and to duplicate the number Mindcraft used in its first and second tests.
Test Procedures
- There are extensive structural modifications to clarify the purpose of each test, to clarify unintended restrictions on Samba and Apache, and to address concerns expressed by Jeremy Allison and others. The changes are too many to enumerate here. In particular, the client OS to be used has been expanded to include Windows NT, and the tables of tests have been expanded and reorganized to reflect all changes.
Publishing the Outcome
- PC Week was added to the list of recipients of the raw test results.
- A publication embargo has been added to allow PC Week an exclusive.