"RuntimeError: dictionary changed size during iteration" ; Good atomic copy operations?
no-spam at no-spam-no-spam.com
Sat Mar 11 12:19:49 CET 2006
In very rare cases a program crashes (hard to reproduce):
* several threads work on an object tree with dicts etc. in it. Items
are added, deleted, and iterated over (.keys() ...). The threads are "good"
in the sense that this core data structure is changed only by atomic
operations, so the data structure always stays consistent as far as
the application is concerned. Only the change operations on the dicts
and lists themselves seem to cause problems at the Python level ..
* one thread periodically pickle-dumps the tree to a file:
>>> cPickle.dump(obj, f)
"RuntimeError: dictionary changed size during iteration" is raised by
.dump (or a similar "... list changed ..." error).
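For reference, the underlying condition is easy to show even without
threads; with threads the same thing just happens nondeterministically
whenever a writer changes a dict's size while the pickler is iterating
over it. A minimal illustration (not the original code):

d = dict.fromkeys(range(3))
for k in d:            # the iterator notices the size change ...
    d[len(d)] = None   # ... and raises "RuntimeError: dictionary changed size during iteration"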
What can I do about this to get a stable pickle dump without risking an
execution error or, even worse, errors in the pickled file?
Is a copy.deepcopy (-> "cPickle.dump(copy.deepcopy(obj), f)") an
atomic operation that is guaranteed not to fail?
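One possible direction, independent of what deepcopy guarantees: have the
writer threads and the dumping thread share a lock, so the dump always sees
a quiescent tree. A sketch only; the lock name and wrapper function below
are made up for illustration, they are not part of any library:

import cPickle, threading

tree_lock = threading.Lock()   # hypothetical: every mutating thread must hold this

def dump_snapshot(obj, path):
    # Hold the same lock the writers use for their atomic operations,
    # so no dict or list in the tree changes size while cPickle iterates it.
    tree_lock.acquire()
    try:
        f = open(path, "wb")
        try:
            cPickle.dump(obj, f, 2)   # binary protocol 2
        finally:
            f.close()
    finally:
        tree_lock.release()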
Or can I only retry several times in case of RuntimeError? (which appears
to me as odd gambling; retry how often?)
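If the writers cannot be touched, a bounded retry is at least cheap to try
(again only a sketch; the bound of 5 attempts is arbitrary). Dumping to a
string first means a failed attempt never leaves a half-written file:

import cPickle

def dump_with_retry(obj, path, attempts=5):
    # attempts=5 is an arbitrary illustration, not a recommendation
    last_error = None
    for i in range(attempts):
        try:
            data = cPickle.dumps(obj, 2)   # fails cleanly in memory, not on disk
        except RuntimeError, e:
            last_error = e
            continue
        f = open(path, "wb")
        try:
            f.write(data)
        finally:
            f.close()
        return
    raise last_error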
PS: Zope dumps thread-exposed data structures regularly. How does the ZODB
in Zope handle dict/list changes during its pickling operations?
Python 2.4.1 (#2, May 5 2005, 11:32:06)
[GCC 3.3.5 (Debian 1:3.3.5-12)] on linux2