Dealing with marketing types...

Andrew Dalke dalke at dalkescientific.com
Sun Jun 12 07:54:10 CEST 2005


Paul Rubin replied to me:
> If you're running a web site with 100k users (about 1/3 of the size of
> Slashdot) that begins to be the range where I'd say LAMP starts
> running out of gas.

Let me elaborate a bit.  The 100K figure I gave is the
entire population of people who would use bioinformatics or
chemical informatics.  It's the extreme upper bound of the
capacity I ever expect.  It's much more likely I'll only
need to handle a few thousand users.


> I believe
> LiveJournal (which has something more like a million users) uses
> methods like that, as does ezboard.  There was a thread about it here
> a year or so ago.

I know little about it, though I read at
http://goathack.livejournal.org/docs.html
] LiveJournal source is lots of Perl mixed up with lots of MySQL

I found more details at
http://jeremy.zawodny.com/blog/archives/001866.html

It's a bunch of things - Perl, C, MySQL-InnoDB, MyISAM, Akamai,
memcached.  The linked slides say "lots of MySQL usage."
60 servers.

I don't see that example as validating your statement that
LAMP doesn't scale for mega-numbers of hits any better than
whatever you might call "printing press" systems.

> As a simple example, that article's advice of putting all fine grained
> session state into the database (so that every single browser hit sets
> off SQL queries) is crazy.

To be fair, it does say "database plus cache", though the author
suggests the place for the cache is at the HTTP level and not
at the DB level.  I would have considered something like memcached,
perhaps backed by an asynchronous write to a database if you want the
user state saved even after the cache is cleared/reset.
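A rough sketch of that alternative in Python (a plain dict stands
in for memcached, sqlite3 for the backing database, and all the
names here are invented for illustration; a real deployment would
flush from a background thread):

```python
import queue
import sqlite3


class SessionStore:
    """Session state lives in a cache; writes are queued and later
    flushed to a database, so state survives a cache reset."""

    def __init__(self, db_path):
        self._cache = {}               # stand-in for memcached
        self._pending = queue.Queue()  # write-behind queue
        self._db = sqlite3.connect(db_path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS session "
            "(key TEXT PRIMARY KEY, value TEXT)")

    def set(self, key, value):
        self._cache[key] = value       # fast path: cache only
        self._pending.put((key, value))

    def get(self, key):
        if key in self._cache:
            return self._cache[key]
        row = self._db.execute(
            "SELECT value FROM session WHERE key = ?", (key,)).fetchone()
        if row is not None:
            self._cache[key] = row[0]  # repopulate after a cache reset
            return row[0]
        return None

    def flush(self):
        """Drain queued writes to the DB (run this periodically)."""
        while not self._pending.empty():
            key, value = self._pending.get()
            self._db.execute(
                "INSERT OR REPLACE INTO session (key, value) "
                "VALUES (?, ?)", (key, value))
        self._db.commit()


store = SessionStore(":memory:")
store.set("user42:last_query", "benzene")
store.flush()
store._cache.clear()              # simulate the cache being reset
print(store.get("user42:last_query"))  # recovered from the DB
```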

How permanent does the history need to be, though?  Your
approach wipes history when the user clears the cookie, and it
might not be obvious that doing so should clear the history.

In any case, the implementation cost for this is likely
higher than what you did.  I mention it to suggest an
alternative.


> As for "big", hmm, I'd say as production web sites go, 100k users is
> medium sized, Slashdot is "largish", Ebay is "big", Google is huge.

I'd say that few sites have >100k users, much less
daily users with personalized information.  As a totally made-up
number, only a few dozen sites (maybe a couple hundred?) would
need to worry about those issues.

If that's indeed the case then I'll also argue that each of
them is going to have app-specific choke points which are best
hand-optimized rather than framework-optimized.  Is there enough
real-world experience to design an EnterpriseWeb-o-Rama (your
"printing press") which can handle those examples you gave
any better than starting off with a LAMP system and hand-caching
the parts that need it?

				Andrew
				dalke at dalkescientific.com
