On 02Apr2019 10:43, Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
> Paul Moore wrote:
>> now that Python has type inference, it should be possible for users to just type {} and have the interpreter work out which was intended from context.
>
> Or have {} return an ambiguous object that turns into a dict or set depending on what is done to it.
>
> We could call it a quict (quantum dict). [...]
I have concerns that this may lead to excessive memory use.

My personal interpretation of QM is the continuous many-worlds form, where _all_ values of the field are valid and real, and where the supposed "collapse" on observation/interaction is just a measurement effect: we in the classical domain measure a particular value of the field, and all of our future classical view flows consistently with that measurement, but there is _no_ value collapse of the system; the classical operation with the specific value is just a specific view of the uncollapsed QM state space. This is analogous to Python's "bool(some-float-value)": we get a True or False, but the underlying value is unchanged.

As such, the quict type should be managed by an underlying state with a thread-local view: when the current thread performs a dict-like or set-like operation on the quict, that thread should get a thread-local dict- or set-flavoured view of the quict, with the underlying quict still open. In this way multiple threads each get their own "collapsed" view of the underlying quict, directly supporting many-worlds programme execution.

Obviously, any operations which do not induce a dict/set "measurement" leave the quict uncollapsed. It follows that "print(quict)" on an uncollapsed quict should choose a dict or set view for that thread at random, and from then on the quict would have that flavour in that thread. A toy sketch of such a quict follows below.

For simple dict/set quicts this presents fairly capped memory use (two flavours, plus per-thread view state), but companion types such as the quoat have much more scope for heavy memory consumption.

Cheers,
Cameron Simpson <cs@cskk.id.au>
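For the curious, a minimal sketch of the above, assuming a hypothetical Quict class built on threading.local (the class and method names are illustrative only, not a real proposal):

    import random
    import threading

    class Quict:
        """An ambiguous {} whose flavour (dict or set) is decided per thread."""

        def __init__(self):
            # The underlying quict never collapses; each thread keeps its own
            # "collapsed" view here.
            self._views = threading.local()

        def _collapse(self, flavour):
            # The first measurement in a thread fixes the flavour for that
            # thread only; other threads remain free to observe the other one.
            view = getattr(self._views, 'view', None)
            if view is None:
                view = flavour()
                self._views.view = view
            elif not isinstance(view, flavour):
                raise TypeError(
                    "quict already observed as a %s in this thread"
                    % type(view).__name__)
            return view

        # Dict-like measurements collapse this thread's view to a dict.
        def __setitem__(self, key, value):
            self._collapse(dict)[key] = value

        def __getitem__(self, key):
            return self._collapse(dict)[key]

        # Set-like measurements collapse this thread's view to a set.
        def add(self, element):
            self._collapse(set).add(element)

        def __repr__(self):
            # Printing an unobserved quict chooses a flavour at random for
            # this thread; once observed, the thread keeps its flavour.
            view = getattr(self._views, 'view', None)
            if view is None:
                view = self._collapse(random.choice((dict, set)))
            return repr(view)

With this sketch, q = Quict(); q['x'] = 1 collapses the quict into a dict in that thread, while another thread is still free to observe it as a set with q.add(2).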