What's Wrong
Unfortunately, Mr. Martinez made two egregious mistakes:
he got important facts wrong and he failed to check the information his sources provided. Beyond
those mistakes, Mr. Martinez's biased article used innuendo and misquotes to defame
my personal reputation and Mindcraft's. The following sections show the details
that support my assertions.
Wrong Facts
Tuning Windows NT Server as we document takes less than 10 minutes, not "a great deal of time"
as Mr. Martinez states.
The article incorrectly implies that
the tunes used for Windows NT Server are not generally
available. They can be found at Microsoft's Web site, and they are
now also available for anyone to use at our Web
site.
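For a sense of scale, tunes like these amount to a handful of registry edits. The fragment below is a hypothetical .reg-style sketch of the kind of entries involved; the values are illustrative assumptions, not the actual tunes we used (those are on our Web site):

    REGEDIT4

    ; Illustrative only -- not Mindcraft's actual tuning values.
    ; Favor file-server throughput when sizing server memory use.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
    "Size"=dword:00000003

    ; Larger TCP receive window and a bigger ephemeral port range.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
    "TcpWindowSize"=dword:0000ffff
    "MaxUserPort"=dword:0000fffe

Applying a file like this with regedit and rebooting takes minutes, which is why the under-10-minute figure is realistic.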
Mr. Martinez printed only part of the answer I
gave him about a response
to a newsgroup posting on tuning Linux for the server we were testing.
What he did not publish was that I told him the response said we
would see much better performance with
FreeBSD than with Linux. It's obvious why he left that
part out.
The ABCnews.com article quotes Linus Torvalds as
saying, "We helped them out, gave them a few more things to tune, but
they wouldn't let us in the lab, and they wouldn't answer our follow-up
questions." He's correct that we didn't let him into the lab. We
couldn't because the work was being done in a lab at Microsoft. There's
more on this topic under the "Attacks on
Reputation" heading below.
- Linus was wrong about our not answering his follow-up questions. He
and his experts gave us a lot of tunes they wanted us to make. We got
version 1.0 of the MegaRAID driver during our tests and used it. We sent
out our Apache and Samba configuration files for review and received
approval of them before we tested. (We actually got better performance
in Apache when we made some changes to the approved configuration file
on our own). Whenever we got poor performance, we sent out a description of
how the system was set up and the performance we were measuring. We
received excellent support from the group of experts Linus put us in
contact with ("the experts"). Red Hat also provided excellent support
via email and on the phone. The experts and Red Hat told us what to
check out, offered tuning changes, and provided patches to try. We had
several rounds of messages between us in which we answered the questions
they posed.
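To be concrete about what such a review covers: Samba tuning of this sort is done through ordinary smb.conf settings. The fragment below is a hypothetical sketch of the kind of parameters at issue, with assumed values; it is not our actual published configuration:

    [global]
    ; Illustrative smb.conf tuning sketch -- assumed values, not
    ; the configuration file Mindcraft actually published.
    socket options = TCP_NODELAY SO_SNDBUF=8192 SO_RCVBUF=8192
    read raw = yes
    write raw = yes
    oplocks = yes
    max xmit = 65535
    deadtime = 15

Reviewing a file like this is exactly the kind of feedback the experts gave us on the configurations we circulated.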
Comparing the performance of a resource-constrained
desktop PC with an enterprise-class server is like saying a go-kart beat a
grand prix race car on a go-kart race course.
The article quotes Linus as saying, "In many of the other, independent tests we've
seen, Linux just beat the hell out of NT." The article goes on to claim,
"Testing by PC Week last month
seems to back him up." It's not clear which PC Week
story he's referring to. Is it the February 1, 1999
article about Linux being enterprise-ready? The January 25, 1999 Smart
Reseller article? The March 15, 1999 PC Week article with no
benchmarks in it? The only Linux test PC Week lists in its reviews directory is the
February 1,
1999 article. But that article does not include any Web server tests and was not published in March.
Mr. Martinez must have his sources wrong. The only recent article that I could find
at the ZDnet.com Web site that tested both Linux and Windows NT Web and file servers was
the January 25, 1999 Smart Reseller article. It tested performance on a resource-constrained 266 MHz
desktop PC. One cannot reasonably extrapolate the performance of a resource-constrained desktop PC
to an unconstrained, enterprise-class server with four 400 MHz Xeon processors.
If Mr. Martinez or Linus is referring to the February 1,
1999 PC Week article, it
contains no comparison with Windows NT Server. It only compares the
Linux 2.0 kernel with the Linux 2.2 kernel.
In short, Mr. Martinez refers to a non-existent PC Week article.
In addition, the following differences between the testbeds and
servers help account for the variance in measured performance:
- Mindcraft used a server with 400 MHz Xeon
processors while PC Week used one with 450 MHz Xeon processors. Jeremy
did not disclose what speed processor he was using.
- Mindcraft used a server with a MegaRAID
controller with a beta driver (which was the latest version available
at the time of the test) while the PC Week
server used an
eXtremeRAID controller with a fully released driver. The MegaRAID
driver was single threaded while the eXtremeRAID driver was
multi-threaded.
- Mindcraft used Windows 9x clients while Jeremy and PC Week
used Windows NT clients. According to Jeremy, he gets faster
performance with Windows NT clients than with Windows 9x clients.
Given these differences in the testbeds and
servers, is it any wonder we got lower performance than they did? If you
scale up our numbers to account for their speed advantage, we get
essentially the same results.
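To make that scaling argument concrete, here is the back-of-the-envelope arithmetic, sketched in Python. The throughput figure is a placeholder, and assuming performance scales linearly with clock speed is a simplification:

    # Rough scaling sketch: placeholder numbers, and linear scaling
    # with CPU clock speed is an assumption, not a measurement.
    our_clock_mhz = 400.0     # Mindcraft server: 400 MHz Xeon processors
    their_clock_mhz = 450.0   # PC Week server: 450 MHz Xeon processors

    scale = their_clock_mhz / our_clock_mhz    # 1.125, a 12.5% advantage
    our_score = 100.0                          # placeholder throughput score

    print("Scaled score: %.1f" % (our_score * scale))   # about 112.5

That adjustment covers only the clock-speed difference; the RAID driver and client differences listed above push in the same direction.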
The only reason to use Windows NT clients is to
give Linux and Samba an advantage, if you believe Jeremy. In
the real world, there are many more Windows 9x clients connected to file servers than
Windows NT clients. So benchmarks that use Windows NT clients are unrealistic and
should be viewed as benchmark-special configurations.
Jeremy did provide me with tuning parameters for
Linux and Samba for the NetBench tests. Did he give me the same ones he
uses and that he applied for the PC Week tests? I hope so.
After all, the tunes he used for PC Week should be portable to
a server as similar to theirs as ours was. But I don't know for sure
whether the tunes were the same as the ones PC Week
used because they didn't
publish theirs. Mindcraft published the tunings we made for our tests
because we have nothing to hide.
The fact that Jeremy did not publish the details
of the testbed he used and the tunes he applied to Linux and Samba is a
violation of the NetBench license. If he had published the tunes he
used, we would have tried them. What's the big secret?
Attacks on Reputation
The obvious explanations for Mr. Martinez's need to
defame my reputation and Mindcraft's are that he must somehow justify his
biased and unfounded position, that he is trying to attack Microsoft by attacking Mindcraft, or
that he is trying to gain favor with Linux proponents. I had expected more
from a reputable organization like ABCnews.com. His attacks, as well as his biased
and inaccurate reporting, call into question the fairness and accuracy of all
reports at ABCnews.com.
The rhetorical question Mr. Martinez asks, "Was this a valid test, skeptics wonder, or an
attempt to spread fear, uncertainty and doubt (FUD, in tech parlance) about Linux?"
implies that our tests were biased and that Mindcraft reported a lie because Microsoft
paid for the test. This is the most damaging insult and attack in the whole article. Mr.
Martinez cannot back up these implications with facts because they are unfounded. He did no
research with Mindcraft's clients to find out about us.
No Mindcraft client has ever asked us to deliver a report that lied or misrepresented a
test. On the contrary, all of our clients ask us to get the best performance for their
product and for their competitor's products. If a
client ever asked us to rig a test, to lie about test results, or to
misrepresent test results, we would decline to do the work.
Next time Mr. Martinez writes a story about Mindcraft he should consider our background.
Mindcraft has been in business for over 14 years doing various kinds of testing. For
example, from May 1, 1991 through September 30, 1998, Mindcraft was accredited as a POSIX
Testing Laboratory by the National Voluntary Laboratory Accreditation Program (NVLAP), part of the
National Institute of Standards and Technology (NIST). During that time, Mindcraft
did more POSIX FIPS certifications than all other POSIX labs combined.
All of those tests were paid for by the client seeking
certification. NIST saw no conflict of interest in our being paid by the company
seeking certification and NIST reviewed and validated each test result we
submitted. We apply the same honesty to our performance testing that we
do for our conformance testing. To do otherwise would be foolish and
would put us out of business quickly.
Some may ask why we decided not to renew our
NVLAP accreditation. The reason is simple: NIST stopped its POSIX FIPS
certification program on December 31, 1997. That program was picked up
by the IEEE, which announced on November 7, 1997 that it
recognized Mindcraft as an Accredited POSIX Testing Laboratory. We are still
IEEE accredited and are still certifying systems for POSIX FIPS
conformance.
Mr. Martinez slams Mindcraft
when he writes, "Torvalds notes that previous comparisons run by
Mindcraft for Microsoft showed similar results against other operating systems, such as
Sun's Solaris and Netware." So what? Are they wrong? Are they biased? No.
Novell, for example, had no complaints when we did a benchmark for them.
Mindcraft works much like a CPA hired by a
company to audit its books. We give an independent, impartial assessment
based on our testing. Like a CPA, we're paid by our client.
NVLAP-approved test labs that measure everything from asbestos to the
accuracy of scales are paid by their clients. This is a common practice.
If Linus, Jeremy, or ABCnews.com would like to hire Mindcraft to test
Linux, Samba, or Apache against Windows NT, Solaris, or any other operating system,
we'd be glad to do the work. But we can't guarantee that
Linux will be faster than a competitive OS. We can guarantee that we will
do a fair and impartial test. We've got no axes to grind.
- Mr. Martinez incorrectly claims that I questioned
the testing methods PC Week used. I know that
PC Week uses appropriate test methods. I made that clear to Mr.
Martinez when he tried to put the words he wrote into my mouth during
our phone interview. What I told him was that it was a shame that PC
Week
does not
publish the test information that the NetBench license requires
Mindcraft and others to publish. If they had, we would have used their
Samba configuration.
- It is a gross and biased statement for Mr.
Martinez to write "...e-mail from Weiner and other Mindcraft testers
originates at a numerical IP address that belongs not to Mindcraft, but
to Microsoft." Not all email from me to the Linux experts originated
at a Microsoft IP address; only the email sent while I was conducting the
tests did. There were at least 11 messages between me and the Linux
experts, sent before the retest started, that originated from Mindcraft.
- Mr. Martinez tries to imply that
something is wrong with Mindcraft's tests because they were done in a
Microsoft lab. You should know that Mindcraft verified the clients
were set up as documented in our report and that Mindcraft, not
Microsoft, loaded the server software and tuned it as documented in our
report. In essence, we took over the lab we were using and verified it
was set up fairly.
- Mr. Martinez states in his article that Mindcraft did not
return calls seeking comments on our company's relationship with
Microsoft. He's wrong. He left one voice mail message that did not state
the purpose of his call. I returned the call as soon as I picked up his
message and left a voice mail message telling how to reach me via my
cell phone. He is the one who never returned my call. Why make it look
like Mindcraft had something to hide, unless he is the one who is biased?
Considering the defamatory misrepresentations and bias in
Mr. Martinez's article, we believe that ABCnews.com should take the following actions
in fairness to Mindcraft and its readers:
- Remove the article from its Web site and put an apology in
its place. If ABCnews.com does not do that, it should at least provide a link to this
rebuttal at the top of the article so that its readers can get both
sides of the story.
- Provide fair coverage from an unbiased reporter
of Mindcraft's Open Benchmark
of Windows NT
Server and Linux. For this benchmark, we have invited Linus Torvalds, Jeremy Allison,
Red Hat, and other Linux experts to tune Linux, Apache, and Samba and
to witness all tests. We have also invited Microsoft to tune Windows
NT and to witness the tests. Mindcraft will participate
in this benchmark at its own expense.
References
The NetBench document entitled Understanding
and Using NetBench 5.01
states on page 24, "You
can only compare results if you used the same testbed each time you ran
that test suite [emphasis added]."
Understanding and Using NetBench 5.01
clearly gives another reason why the performance measurements
Mindcraft reported are so different from the ones Jeremy and PC Week
found. Look at what's stated on
page 236, "Client-side caching occurs when the client is able to place
some or all of the test workspace into its local RAM, which it then uses
as a file cache. When the client caches these test files, the client can
satisfy locally requests that normally require a network access. Because a
client's RAM can handle a request many times faster than it takes that
same request to traverse the LAN, the client's throughput scores show a
definite rise over scores when no client-side caching occurs. In
fact, the client's throughput numbers with client-side caching can
increase to levels that are two to three times faster than is possible
given the physical speed of the particular network [emphasis added]."
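A quick calculation shows how dramatic that effect is. Here is a sketch in Python using assumed numbers for a 100 Mbit/s Fast Ethernet testbed:

    # Why cached scores can exceed wire speed (assumed numbers).
    link_megabits = 100.0                  # 100 Mbit/s Fast Ethernet
    wire_ceiling_mb = link_megabits / 8.0  # about 12.5 MB/s on the LAN

    cache_factor = 3.0                     # upper end of the quoted 2-3x
    apparent_mb = wire_ceiling_mb * cache_factor

    print("Wire-speed ceiling:  %.1f MB/s" % wire_ceiling_mb)
    print("With client caching: %.1f MB/s" % apparent_mb)

Scores above the roughly 12.5 MB/s wire ceiling can only come from client RAM, not from the server under test.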