
On Oct 8, 2010, at 9:25 AM, exarkun@twistedmatrix.com wrote:
> On 5 Oct, 08:09 pm, stephen.c.waterbury@nasa.gov wrote:
>> First, the "PB Copyable: Passing Complex Types" doc is *great* and the examples are excellent -- my compliments to all who contributed!
>>
>> My question is about the pb.Cacheable section (http://twistedmatrix.com/documents/current/core/howto/pb-copyable.html#auto9) -- specifically the first sentence: 'Sometimes the object you want to send to the remote process is big and slow. "big" means it takes a lot of data (storage, network bandwidth, processing) to represent its state. "slow" means that state doesn't change very frequently.'
>>
>> I would think that the product of its size and its rate of change is the applicable metric -- i.e., the bigger the object is *or* the faster it changes (not the slower), the more applicable Cacheable is, no?
>
> That seems plausible. I wonder if the rate comment is motivated by something else, like the chance of the remote cache being out of date when the remote side wants to use some of its data. That chance would increase with the rate of change, but I don't know if it really matters. I haven't ever actually used a Cacheable myself, as far as I can recall.
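To put rough numbers on that size-times-rate intuition, here is a back-of-envelope traffic comparison; every figure below is an assumption chosen purely for illustration, not something taken from the thread or the howto:

    # Illustrative traffic estimate; all figures are assumptions.
    state_size = 1000000   # bytes to serialize the object's full state
    delta_size = 100       # bytes to describe one incremental change
    changes = 50           # number of changes over the period of interest

    # Re-sending a Copyable after every change: the full state
    # goes over the wire each time.
    copyable_bytes = state_size * changes

    # Cacheable: one full transfer up front, then only the deltas.
    cacheable_bytes = state_size + delta_size * changes

    print(copyable_bytes)   # 50000000
    print(cacheable_bytes)  # 1005000

When delta_size approaches state_size the two totals converge, which is the degenerate case the next message describes.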
I think I probably wrote that paragraph, and it was not very well put. Big objects that are "fast", i.e., that change constantly, are perfectly suitable for Cacheables. The point I believe I was trying to make there was that if a significant proportion of the object's data changes on each update, Cacheable doesn't make much of a difference over just re-Copyable-ing the whole object, since the delta updates will be roughly the same size as the whole object.
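To make the mechanics concrete, here is a minimal sketch in the spirit of the duck-pond example from the pb-copyable howto linked above; the class and method names are illustrative rather than quoted from the doc. The pb.Cacheable hands its full state to each new remote reference once, and after that every mutation pushes only a small delta to the registered observers:

    from twisted.spread import pb

    class MasterDuckPond(pb.Cacheable):
        """Sender side: publishes the full state once, then deltas."""
        def __init__(self, ducks):
            self.observers = []
            self.ducks = ducks

        def addDuck(self, duck):
            self.ducks.append(duck)
            # The delta: one duck, not the whole pond.
            for o in self.observers:
                o.callRemote('addDuck', duck)

        def getStateToCacheAndObserveFor(self, perspective, observer):
            # Called once per remote reference: return the full state
            # and remember the observer so deltas can be pushed later.
            self.observers.append(observer)
            return self.ducks

        def stoppedObserving(self, perspective, observer):
            self.observers.remove(observer)

    class SlaveDuckPond(pb.RemoteCache):
        """Receiver side: holds the cached state and applies deltas."""
        def setCopyableState(self, state):
            self.cacheducks = state

        def observe_addDuck(self, newDuck):
            # Matches the callRemote('addDuck', ...) above.
            self.cacheducks.append(newDuck)

    pb.setUnjellyableForClass(MasterDuckPond, SlaveDuckPond)

If most of the pond changed on every update, each observer call would have to carry nearly as much data as the initial setCopyableState transfer did, which is exactly the case described above where Cacheable stops buying you anything over re-sending a Copyable.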