David> An issue which has dogged the NumPy project is that there is (to
David> my knowledge) no way to pickle very large arrays without creating
David> strings which contain all of the data.  This can be a problem
David> given that NumPy arrays tend to be very large -- often several
David> megabytes, sometimes much bigger.  This slows things down,
David> sometimes a lot, depending on the platform.  It seems that it
David> should be possible to do something more efficient.

David,

Using __getstate__/__setstate__, could you create a compressed
representation using zlib or some other scheme?  I don't know how well
numeric data compresses in general, but that might help.

Also, I trust you use cPickle when it's available, yes?

Skip Montanaro | http://www.mojam.com/
skip@mojam.com  | http://www.musi-cal.com/~skip/
847-475-3758
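
A minimal sketch of the __getstate__/__setstate__ idea Skip suggests,
written against modern NumPy for illustration (the thread itself predates
it); the wrapper class and its layout are hypothetical, not from the
thread:

```python
import pickle
import zlib

import numpy as np


class CompressedArray:
    """Wrapper that pickles its array as zlib-compressed bytes."""

    def __init__(self, arr):
        self.arr = arr

    def __getstate__(self):
        # Store shape and dtype alongside the compressed raw buffer
        # so the array can be reconstructed on unpickling.
        return {
            "shape": self.arr.shape,
            "dtype": str(self.arr.dtype),
            "data": zlib.compress(self.arr.tobytes()),
        }

    def __setstate__(self, state):
        raw = zlib.decompress(state["data"])
        self.arr = np.frombuffer(raw, dtype=state["dtype"]).reshape(
            state["shape"]
        )


# Round-trip: the pickle carries compressed bytes, not the raw buffer.
a = CompressedArray(np.zeros((1000, 1000)))
b = pickle.loads(pickle.dumps(a))
```

How well this pays off depends on the data, as Skip notes: arrays with
long runs of repeated values compress dramatically, while noisy
floating-point data may barely shrink at all.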