Mindcraft Certified Performance

Open Benchmark
Frequently Asked Questions


Why did Mindcraft want to hold the Open Benchmark?

The reasons were: 

  1. To confirm that Mindcraft's previous testing was unbiased and representative of Linux's performance.
  2. To address concerns raised by the Linux community over our first and second Windows NT/Linux benchmark.
  3. To compare the performance of the latest Linux software tuned by Linux Experts to Windows NT Server 4.0 in an open environment on the same computer.

How should the Linux community take all of Mindcraft's tests?

We hope the Linux, Apache, and Samba communities will take our tests as a wake-up call. What you are waking to is the sound of the enterprise calling. If you want your products to have wide acceptance in the enterprise, you need to do more than give away the source code.

Performance is only one aspect of an enterprise purchasing decision. In many cases it may not be the most important one. However, poor performance can get your product thrown out of the decision process early. 

Performance is one make-or-break part of competing for enterprise sales. Excel at it rather than ignoring it.

Why did the Open Benchmark use NetBench and WebBench?

The choice of the benchmarking tools goes back to the Linux/Windows NT Server benchmark published in Smart Reseller on January 28, 1999. They used NetBench and WebBench. Microsoft hired Mindcraft to run the same benchmarks on an enterprise-class server. The Open Benchmark is the third incarnation of that testing.

Why did you use a four-processor server?

As stated in the answer to the previous question, we were originally hired by Microsoft to show the performance of Linux and Windows NT server on an enterprise-class system. While many in the Linux community may not have experience using computers with four 400 MHz Xeon processors with 2 MB of cache and 1 GB of RAM, such servers are common in large enterprises.

One of the major new enhancements in the Linux 2.2.x kernels was support for SMP computers. Here's what Bob Young, President of Red Hat Software, said on April 26, 1999 in the press release announcing Red Hat Linux 6.0:

"The breadth of application support and high-end features like SMP support illustrate the maturity of the Linux operating system as an enterprise server platform," said Bob Young, CEO of Red Hat. "When combined with the traditional strengths of Linux, which include unmatched stability, reliability and the freedom to change the operating system, Red Hat Linux 6.0 delivers a powerful, well-supported server operating system capable of handling the most intense and important business applications in today's companies."

It seems fair to check out the claims he made.

Why didn't you use Zeus or khttpd instead of Apache?

The Red Hat participants did try Zeus in Phase 3 and found that it was performance-limited by Linux. They didn't use khttpd.  

Why didn't Red Hat use a Linux 2.3 kernel in Phase 3?

They told us that it was too unstable for them to be sure of getting it working in the short time we had to run the Open Benchmark.

Why are the Windows NT file-server test results using Windows NT clients faster than those shown in an earlier PC Week report?

The table below compares performance in the May 10, 1999 PC Week NOS Shootout with what we measured in Phase 3 of the Open Benchmark. Performance with Windows NT clients almost doubled in the Open Benchmark. The increase came from partitioning the RAID and using RAID 0 in Phase 3, whereas the PC Week Shootout used RAID 5.


Client OS      PC Week NOS Comparison    Open Benchmark Phase 3
Windows 95     337 Mbits/second          338 Mbits/second
Windows NT     150 Mbits/second          294 Mbits/second
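As a quick sanity check on the "almost doubled" claim above, the ratio of the two Windows NT client figures can be computed directly (a trivial sketch, using only the numbers from the table):

```python
# Windows NT client throughput (Mbits/second), from the table above.
pcweek_nt = 150        # PC Week NOS Shootout, RAID 5
open_bench_nt = 294    # Open Benchmark Phase 3, RAID 0

ratio = open_bench_nt / pcweek_nt
print(f"Windows NT client throughput improved {ratio:.2f}x")  # 1.96x
```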

We can't let this slip by: the system used for the PC Week shootout had 500 MHz Pentium III CPUs and 2 GB of RAM, while the Open Benchmark system used slower 400 MHz Xeon CPUs and 1 GB of RAM.

Why is the performance in the Open Benchmark so much higher than Mindcraft's first test?

There are several reasons:

  • We made some Linux, Apache, and Samba configuration mistakes.
  • There were Linux, Apache, and Samba bugs that were fixed in the later releases used for the Open Benchmark.
  • There was a significant hardware difference for the two benchmarks. We used a server in the first test with a 1 MB L2 cache whereas the server in the Open Benchmark had a 2 MB L2 cache. The CPU clock rate was the same in both tests. The other major difference was the RAID partitioning for the file-server test in Phase 3.

Why did you choose this particular server and RAID controller? Was it because you knew Linux wouldn't perform well with this hardware?

No. We chose enterprise-level hardware from a vendor who supports Linux on the configuration tested. The tests in Phase 3 show that the RAID controller did not affect the overall performance for Linux.

Why don't you test FreeBSD or OS/2 too?

These operating systems were not part of our original test so they did not belong in the Open Benchmark. If you'd like us to test them for you, please contact Mindcraft.

Why don't you repeat this test on a more typical Linux system, say a 233 MHz Pentium with 32MB of RAM?

We were interested in testing an enterprise-class system using current technology. We're not sure that it is possible to order a new system like the one described in the question. Besides, it makes no sense to test server capabilities on a system that is resource-constrained.

How come your reports don't mention the cost or stability of each OS?

There is more to the cost of an operating system than the license fees. If you look at the business models of Linux distributors, you will find that they plan to make money by selling customers support, services, documentation, etc.

If you want to see one way price/performance can be computed for Linux, take a look at this report.
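One common way to compute price/performance is simply total system cost divided by peak measured throughput. The sketch below illustrates the idea; the dollar figure is hypothetical and is not taken from any Mindcraft report:

```python
def price_performance(total_cost_usd, peak_mbits_per_sec):
    """Dollars per Mbit/s of peak throughput -- lower is better."""
    return total_cost_usd / peak_mbits_per_sec

# Hypothetical example: a $20,000 server configuration sustaining
# 294 Mbits/second in the file-server test.
print(f"${price_performance(20000, 294):.2f} per Mbit/s")
```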

Neither WebBench nor NetBench test the stability of a system. We know of no stability test suite. Besides, even if there were one, such testing would take more time than was available for the Open Benchmark in ZD Labs.

Why didn't you compile Linux with pgcc or use <insert-name-of-favorite-patch-here>?

We checked into using egcs for the Second Benchmark. Jeremy Allison told us that it would make no difference for Samba and Doug Ledford told us it would not matter for the kernel. So why should we go through the hassles of using a new compiler when there was likely to be little benefit?

Why did you publish the net rage you received from some of the "raving loonies" in the Linux community? Don't you know we're all not like that?

We know that there are many responsible, well-mannered, smart people in the Linux community. Unfortunately, you have been overrun by the noise coming from the other side of your community [the term "raving loonies" was provided in an email we received from one of you].

We published the net rage to stop it. We figured that if it hit the light of the net, it would at least slow down, if not stop. We were right.

We are going to remove the net rage page from our Web site as we publish the results of the Open Benchmark. We'll chalk it up to the maturing of the Linux community.


Copyright 1997-99. Mindcraft, Inc. All rights reserved.
Mindcraft is a registered trademark of Mindcraft, Inc.
Product and corporate names mentioned herein are trademarks and/or registered trademarks of their respective owners.
For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324