Xah Lee's Unixism
SPAMhukolauTRAP at SPAMworldnetTRAP.att.net
Thu Sep 9 19:21:15 CEST 2004
jmfbahciv at aol.com wrote:
> In article <20040908192913.67c07e7d.steveo at eircom.net>,
> Steve O'Hara-Smith <steveo at eircom.net> wrote:
>>On Wed, 08 Sep 04 11:48:36 GMT
>>jmfbahciv at aol.com wrote:
>>>In article <p9qdnTnxTYDJR6PcRVn-pw at speakeasy.net>,
>>> rpw3 at rpw3.org (Rob Warnock) wrote:
>>>>*Only* a month?!? Here's the uptime for one of my FreeBSD boxes
>>>>[an old, slow '486]:
>>>> % uptime
>>>> 2:44AM up 630 days, 21:14, 1 user, load averages: 0.06, 0.02,
>>>>That's over *20* months!!
>>>I bet we can measure the youngster's age by the uptimes he boasts.
>> The Yahoo! server farm ran to very long uptimes last time I had
>>any details. The reason being that they commission a machine, add it to
>>the farm and leave it running until it is replaced two or three years
>>later.
> Sure. But regular users of such computing services never get an
> uptime report. Hell, they have no idea how many systems their
> own webbit has used, let alone all the code that was executed
> to paint that pretty picture on their TTY screen.
> I bet, if we start asking, we might even get some bizarre
> definitions of uptime.
Well, there are lies, damn lies and statistics, don't
you know? :)
I have absolutely no idea of the size of Yahoo's "server
farm," but let's assume that it's roughly 100 servers
to make the arithmetic easier. Let's further assume
that the MTBF (Mean Time Between Failures) is roughly
2000 hours (about 83 days, or a bit under 3 months).
Given these numbers (which are not real, I remind you,
just made up), it is likely that on any given day
one of those servers suffers some kind of failure.
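The back-of-the-envelope arithmetic above can be sketched out (using the post's made-up numbers, not real Yahoo! figures):

```python
# Hypothetical fleet, as assumed in the post: 100 servers,
# each with an MTBF of roughly 2000 hours.
servers = 100
mtbf_hours = 2000.0
hours_per_day = 24

# Expected failures across the whole fleet per day.
# 100 servers * 24 hours / 2000 hours-between-failures = 1.2
failures_per_day = servers * hours_per_day / mtbf_hours

print(failures_per_day)  # 1.2 -- better than one failure somewhere every day
```

So with those (invented) numbers you expect slightly more than one server failure per day fleet-wide, which is why "some server is always broken" and "the service is up" are both true at once.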
However, one can argue, quite legitimately, that
the service which Yahoo! provides is still "up and
running." 1% of the users may not be able to access
their mail for a few hours, for example, but the Yahoo!
service as a whole is still up.
> I do know that the definition of CPU runtime is disappearing.
Not everywhere, Steve. There are still shops
which do measure CPU time for transactions
and base their sizing computations on that.
The better ones actually start from the requirements
and derive the CPU budget, disk I/O budget, LAN budget, etc.
for each transaction based on that!
(Examples: "Hmmm... an in-memory dbms access takes about 150 usec,
my dbms schema requires 12 reads for this query. That's
1.8 msec. My CPU budget is 750 usec. Maybe I should
redesign something here?" ... or ... "Hmm... my CPU
budget is 3 ms. for this transaction, and I'm constrained
to use a particular XML parser. Time to measure. Whoops,
parsing takes around 6 ms for the average message on
my box. Maybe we shouldn't be using this particular
parser just because it's cheap? Or maybe we throw
more hardware at the problem and bid twice the number
of servers if we can't find a better XML parser.")
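The first worked example above reduces to simple arithmetic; a minimal sketch of that budget check (the 150 usec per access, 12 reads, and 750 usec budget are the post's illustrative numbers, and `cpu_cost_us` is a name invented here):

```python
def cpu_cost_us(accesses, us_per_access):
    """Total CPU cost in microseconds for a query,
    assuming a fixed per-access cost."""
    return accesses * us_per_access

# "an in-memory dbms access takes about 150 usec,
#  my dbms schema requires 12 reads for this query"
cost = cpu_cost_us(12, 150)   # 1800 usec = 1.8 msec

budget_us = 750               # "My CPU budget is 750 usec"
over_budget = cost > budget_us

print(cost, over_budget)      # 1800 True -- time to redesign something
```

The same check applies to the second example (a 6 ms parse against a 3 ms budget): measure, compare against the per-transaction budget, and either redesign or bid more hardware.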
"It is impossible to make anything foolproof
because fools are so ingenious"
- A. Bloch