[Python-ideas] Application awareness of memory storage classes

R. David Murray rdmurray at bitdance.com
Mon May 23 15:33:13 EDT 2016


On Fri, 20 May 2016 16:06:03 -0400, Terry Reedy <tjreedy at udel.edu> wrote:
> On 5/20/2016 7:17 AM, Piotr Balcer wrote:
> 
> [rearranged to undo top posting]
> 
> > 2016-05-20 12:30 GMT+02:00 Steven D'Aprano
> >     Wouldn't that mean you're restarting the game in the same state it was
> >     in just before it crashed? Which means (assuming the crash is
> >     deterministic) the very next thing the game will do is crash.
> 
> > That's an excellent observation ;) This is actually what happened when we
> > first wrote that game - there was a NULL-dereference in the game logic and
> > it caused a 'persistent segmentation fault'. It's really important for
> > programmers to step up when it comes to creating software that uses
> > this type of memory directly. Everyone essentially becomes a file system
> > developer, which is not necessarily a good thing.
> >
> > Our hope is that high-level languages like Python will also help in this
> > regard by making sure that the programmer cannot shoot himself in the foot
> > as easily as is possible in C.
> 
> Unless one uses ctypes or buggy external C-coded modules, seg faulting 
> is already harder in Python than C.
> 
> One possibility to make using persistent memory easier might be a 
> context manager.  The __enter__ method would copy a persistent object to 
> a volatile object and return the volatile object.  The body of the with 
> statement would manipulate the volatile object. *If there are no 
> exceptions*, the __exit__ method would 'commit' the changes, presumed to 
> be consistent, by copying back to the persistent object.
> 
> with persistent(locator) as volatile:
>      <manipulate volatile>
> 
> Ignoring the possibility of a machine crash during the writeback, a 
> restart after an exception or crash during the block would start with 
> the persistent object as it was before the block, which absent bugs 
> would be a consistent state.
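
For concreteness, Terry's copy-based manager might look something like
this (load_persistent and store_persistent are hypothetical stand-ins
for whatever (de)serialization a real backend would provide):

    import copy

    # Hypothetical helpers, not a real API.
    def load_persistent(locator): ...
    def store_persistent(locator, obj): ...

    class persistent:
        """Copy a persistent object into ordinary RAM; write it back
        only on a clean exit from the with block."""
        def __init__(self, locator):
            self.locator = locator

        def __enter__(self):
            self.volatile = copy.deepcopy(load_persistent(self.locator))
            return self.volatile

        def __exit__(self, exc_type, exc, tb):
            if exc_type is None:
                # Only a clean exit commits: the volatile copy, presumed
                # consistent, is copied back to the persistent object.
                store_persistent(self.locator, self.volatile)
            return False    # on exception, no writeback occurs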

Addressing consistency is what libpmemobj does, and what I'm designing the
Python layer on top of it to do.  But instead of copying the persistent
data to volatile data and back again, what we have is a transaction scope
inside of which libpmemobj records all object changes.  If we never get
to the end of the commit phase, then the changelog is used to roll back
all of those changes, leaving the persistent objects in the state they
were in before the transaction started.  This applies across program
restarts as well, which handles the "machine crash during the writeback"
possibility (ie: that possibility is *not* ignored; dealing with it is
in fact a primary goal of the design).

So, we'll have:

    with persistent_store:
        <manipulate persistent objects>

and on exit from that code block either *all* of the changes are made
and recorded, or *none* of them are, regardless of what happens inside
the transaction, even if the machine crashes in the middle of the
commit.  Likewise, if a Python exception happens, the transaction commit
is aborted, and the state of persistent memory is rolled back.
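
A toy model of those semantics in pure Python (this only snapshots an
object's attributes in RAM; the real implementation logs changes in
persistent memory, but the visible all-or-nothing behavior is the same):

    import copy

    class transaction:
        """Toy all-or-nothing block.  libpmemobj logs changes in
        persistent memory; here we fake it with an attribute snapshot."""
        def __init__(self, obj):
            self.obj = obj

        def __enter__(self):
            self._snapshot = copy.deepcopy(self.obj.__dict__)
            return self.obj

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                # Any exception rolls back *every* change made in the block.
                self.obj.__dict__.clear()
                self.obj.__dict__.update(self._snapshot)
            return False    # the exception still propagates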

I'm not coming up with anything that persistent->volatile->persistent
copying would get you that these transactions don't get you, and
transactions are more efficient because you don't have to copy the object
data around or do the persistent/volatile conversions.

There is one difference: in your version the changes made to the volatile
copy before the Python exception would still exist... but I don't see how
that would be useful.

On the other hand, the fact that *all* in-block persistent object state
gets restored on block abort, regardless of where the exception occurred,
could be somewhat confusing to Python programmers.
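
To illustrate, using the toy transaction class above:

    class Account:
        def __init__(self):
            self.balance = 100
            self.history = []

    acct = Account()
    try:
        with transaction(acct) as a:
            a.balance -= 50
            a.history.append('debit')
            raise RuntimeError('crash mid-block')
    except RuntimeError:
        pass
    print(acct.balance, acct.history)   # 100 [] -- both changes rolled back

A Python programmer might expect the balance change, made before the
error was raised, to survive the way ordinary assignments in a failed
with block do.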

--David

