mrkafk at gmail.com
Thu Mar 4 15:47:22 CET 2010
Duncan Booth wrote:
> If you look at some of the uses of bigtable you may begin to understand
> the tradeoffs that are made with sql. When you use bigtable you have
> records with fields, and you have indices, but there are limitations on
> the kinds of queries you can perform: in particular you cannot do joins,
> but more subtly there is no guarantee that the index is up to date (so
> you might miss recent updates or even get data back from a query when
> the data no longer matches the query).
Hmm, I do understand that bigtable is used outside of traditional
'enterprisey' contexts, but suppose you did want to do the equivalent of
a join; is it at all practical, or even possible?
I guess when you're forced to use denormalized data, you have to
simultaneously update equivalent columns across many tables yourself,
right? Or is there some machinery to assist in that?
> By sacrificing some of SQL's power, Google get big benefits: namely
> updating data is a much more localised operation. Instead of an update
> having to lock the indices while they are updated, updates to different
> records can happen simultaneously possibly on servers on the opposite
> sides of the world. You can have many, many servers all using the same
> data although they may not have identical or completely consistent views
> of that data.
And you still have a global view of the table spread across, say, two
servers, one located in Australia and the other in the US?
> Bigtable impacts on how you store the data: for example you need to
> avoid reducing data to normal form (no joins!), it's much better and
> cheaper just to store all the data you need directly in each record.
> Also aggregate values need to be at least partly pre-computed and stored
> in the database.
So you basically end up with a few big tables, or really just one big table?
Suppose that on top of a 'tweets' table you have a 'dweebs' table, and
tweets and dweebs sometimes interact. How would you find such interacting
pairs? Would you say "give me some tweets" to the tweets table, extract all
the dweeb_id keys from those tweets, and then retrieve the matching dweebs
from the dweebs table?
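That two-step lookup is essentially a join done in the application. A toy
sketch of it (dicts stand in for the two tables; `interacting_pairs` and
the sample data are invented for illustration, not any real bigtable API):

```python
# Sketch: no server-side join, so query the tweets "table", collect the
# foreign keys by hand, then batch-fetch the matching dweebs.

tweets = {
    "t1": {"text": "hello", "dweeb_id": "d1"},
    "t2": {"text": "again", "dweeb_id": "d2"},
}
dweebs = {
    "d1": {"name": "first dweeb"},
    "d2": {"name": "second dweeb"},
    "d3": {"name": "lonely dweeb"},   # never referenced, never fetched
}

def interacting_pairs(tweets, dweebs):
    """Return (tweet, dweeb) pairs -- an application-level join."""
    wanted = {t["dweeb_id"] for t in tweets.values()}
    # One batched round-trip in a real datastore; a dict lookup here.
    fetched = {k: dweebs[k] for k in wanted if k in dweebs}
    return [(t, fetched[t["dweeb_id"]])
            for t in tweets.values() if t["dweeb_id"] in fetched]

pairs = interacting_pairs(tweets, dweebs)
```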
> Boiling this down to a concrete example, imagine you wanted to implement
> a system like twitter. Think carefully about how you'd handle a
> sufficiently high rate of new tweets reliably with a sql database. Now
> think how you'd do the same thing with bigtable: most tweets don't
> interact, so it becomes much easier to see how the load is spread across
> the servers: each user has the data relevant to them stored near the
> server they are using and index changes propagate gradually to the rest
> of the system.
I guess that, in a purely imaginary example, you could also combine two
databases? Say, a bigtable db contains the tweets, but with a column
holding a classical customer_id key that is also a key in a traditional
RDBMS referencing a particular customer?
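Something like this hybrid, perhaps (purely imaginary, as stated above: a
dict plays the bigtable side, stdlib sqlite3 plays the RDBMS, and
`customer_for_tweet` is an invented helper, not a real API):

```python
# Sketch: tweets live in a bigtable-style store, customers in a relational
# database, and a shared customer_id key bridges the two.

import sqlite3

# "Bigtable" side: denormalized tweet rows keyed by tweet id.
tweet_store = {
    "t1": {"text": "hello world", "customer_id": 42},
}

# RDBMS side: the authoritative customer record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customers VALUES (42, 'Alice')")

def customer_for_tweet(tweet_id):
    """Cross-store lookup: key from the tweet row, record from SQL."""
    cid = tweet_store[tweet_id]["customer_id"]
    row = db.execute("SELECT name FROM customers WHERE id = ?",
                     (cid,)).fetchone()
    return row[0] if row else None
```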