It's possible to make each element of the set being edited cacheable. If you think that would introduce too much overhead, you can instead add a method to the "address book" object that updates one of the entries, using an id to identify the entry within the address book. So the address book itself is the dict. That's what I did in a similar case. I'm no expert though.
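Roughly what I mean, in plain Python (the class and method names here are made up, and I've left out the pb machinery; in real code AddressBook would be the one pb.Cacheable and would notify its observers from update_entry):

```python
class AddressBook:
    """Holds entries keyed by id; edits go through one update method.

    Only this object needs to be cacheable, not every entry --
    that's the whole point of the pattern.
    """

    def __init__(self):
        self.entries = {}  # id -> dict of entry fields

    def add_entry(self, entry_id, **fields):
        self.entries[entry_id] = dict(fields)

    def update_entry(self, entry_id, **changes):
        # Single choke point for edits; a pb.Cacheable would call
        # callRemote("update", entry_id, changes) on each observer here.
        self.entries[entry_id].update(changes)


book = AddressBook()
book.add_entry(1, name="Alice", phone="555-0100")
book.update_entry(1, phone="555-0199")
```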
Yeah. I've just realised that after saving an object in the ZODB it gets an ID attribute called "_p_oid". I can compare copyables that come back against objects in the database this way. I think this will be much easier to implement than having around 20 classes that are all cacheables, although I can see some problems arising here because "_p_oid" only has a useful value after the object has been persisted.
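That caveat matters: `_p_oid` is `None` until the object has been committed, so the comparison has to guard against unsaved objects. A sketch of the check (using plain classes with a faked `_p_oid`, since the real value is assigned by the ZODB connection at commit time):

```python
def same_db_object(a, b):
    """True only if both objects have been persisted and share an oid."""
    oid_a = getattr(a, "_p_oid", None)
    oid_b = getattr(b, "_p_oid", None)
    if oid_a is None or oid_b is None:
        return False  # unsaved objects can't be matched by oid
    return oid_a == oid_b


class Fake:
    pass


a = Fake()
a._p_oid = b"\x00" * 7 + b"\x01"  # ZODB oids are 8-byte strings
b = Fake()
b._p_oid = b"\x00" * 7 + b"\x01"
c = Fake()  # never committed: no _p_oid
```

Two freshly created, not-yet-committed objects would both have `_p_oid` of `None`, so without the guard they'd wrongly compare equal.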
Perhaps someone knows of an open source app which has tackled this problem (i.e. keeping an object hierarchy in sync across multiple clients) that I could look at and get general strategies from? It seems all my solutions have a "hack" feel to them, and I can't wrap my head around how to do this using a mix of cacheables/copyables/referenceables. For now, I just have one top-level cache that gets its observe_* methods called (e.g. observe_addFoo(foo)), and then I look to see if there is a Foo with the same _p_oid in my list of Foos and update it (by just calling setCopyableState) if so, or append it otherwise. Something just doesn't feel right about this approach.
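For reference, the merge-or-append logic described above looks something like this (the observe_addFoo name follows the pb convention, but the surrounding class is hypothetical, and I'm using plain dicts as a stand-in for RemoteCopy instances and setCopyableState):

```python
class FooCacheObserver:
    """Top-level client-side cache: one list of Foos, merged by _p_oid."""

    def __init__(self):
        self.foos = []

    def observe_addFoo(self, state):
        # state is the dict a Copyable ships over; it carries the
        # server-side _p_oid so existing entries can be matched.
        oid = state.get("_p_oid")
        for foo in self.foos:
            if oid is not None and foo.get("_p_oid") == oid:
                foo.update(state)  # stand-in for setCopyableState(state)
                return
        self.foos.append(dict(state))


cache = FooCacheObserver()
cache.observe_addFoo({"_p_oid": b"\x01", "name": "first"})
cache.observe_addFoo({"_p_oid": b"\x01", "name": "renamed"})  # merged
cache.observe_addFoo({"_p_oid": b"\x02", "name": "second"})   # appended
```

The linear scan is O(n) per notification; keeping a dict keyed by oid (much like the address-book suggestion above) would avoid that and make the "update or insert" decision a single lookup.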