[Python-ideas] Application awareness of memory storage classes

R. David Murray rdmurray at bitdance.com
Mon May 16 20:35:09 EDT 2016

I'm currently working on a project for Intel involving Python and directly
addressable non-volatile memory.  See https://nvdimm.wiki.kernel.org/
for links to the Linux features underlying this work, and http://pmem.io/
for the Intel initiated open source project I'm working with whose intent
is to bring support for DAX NVRAM to the application programming layer
(nvml currently covers C, with C++ support in process, and there are
python bindings for the lower level (non-libpmemobj) libraries).

tldr: In the future (and the future is now) application programs will
want to be aware of what class of memory they are allocating:  normal DRAM,
persistent RAM, fast DRAM, (etc?).  High level languages like Python that
manage memory for the application programmer will need to expose a way
for programmers to declare which kind of memory backs an object, and
define rules for interactions between objects that reside in different
memory types.  I'm interested in starting a discussion (not necessarily
here, though I think that probably makes sense) about the challenges
this presents and how Python-the-language and CPython the implementation
might solve them.

Feel free to tell me this is the wrong forum, and if so we can set up a
new forum to continue the conversation if enough people are interested.

Long version:

While I mentioned fast DRAM as well as Persistent RAM above, really it is
Persistent RAM that provides the most interesting challenges.  This is
because you actually have to program differently when your objects are
backed by persistent memory.  Consider the python script ('pop' is short
for Persistent Object Pool):

    from sys import argv

    source_account = argv[1].lower()
    target_account = argv[2].lower()
    delta = float(argv[3])
    pop = pypmemobj('/path/to/pool')
    if not hasattr(pop.ns, 'accounts'):
        pop.ns.accounts = dict()
    for acct in (source_account, target_account):
        if acct not in pop.ns.accounts:
            pop.ns.accounts[acct] = 0.0
    pop.ns.accounts[source_account] -= delta
    pop.ns.accounts[target_account] += delta
    print("{:10s} {:10.2f} {:10s} {:10.2f}".format(
        source_account, pop.ns.accounts[source_account],
        target_account, pop.ns.accounts[target_account]))

(I haven't bothered to test this code, forgive stupid errors.)

This is a simple CLI app that lets you manage a set of bank accounts:

        > acct deposits checking 10.00
        deposits       -10.00 checking        10.00
        > acct checking savings 5.00
        checking         5.00 savings          5.00

Obviously you'd have a lot more code in a real app.  The point here
is that we've got *persistent* account objects, with values that are
preserved between runs of the program.

There's a big problem with this code.  What happens if the program crashes
or the machine crashes?  Specifically, what happens if the machine crashes
between the -= and the += lines?  You could end up with a debit from one
account with no corresponding credit to the other, leaving your accounts
in an inconsistent state with no way to recover.

The authors of the nvml library documented at the pmem.io site have
discovered via theory and practice that what you need for reliable
programming with persistent memory is almost exactly the same kind of
atomicity that is required when doing multi-threaded programming.

(I can sense the asyncio programmers groaning...didn't we just solve
that problem and now you are bringing it back?  As you'll see below
if you combine asyncio and persistent memory, providing the atomicity
isn't nearly as bad as managing threading locks; although it does take
more thought than writing DRAM-backed procedural code, it is relatively
straightforward to reason about.)

What we need for the above program fragment to work reliably in the
face of crashes is for all of the operations that Python programmers
think of as atomic (assignment, dictionary update, etc) to be atomic
at the persistent memory level, and to surround any code that does
multiple-object updates that are interdependent with a 'transaction'
guard.  Thus we'd rewrite the end of the above program like this:

    with pop:
        pop.ns.accounts[source_account] -= delta
        pop.ns.accounts[target_account] += delta
    print("{:10s} {:10.2f} {:10s} {:10.2f}".format(
        source_account, pop.ns.accounts[source_account],
        target_account, pop.ns.accounts[target_account]))

If this were in fact a threaded program, we'd be holding a write lock on
pop.ns.accounts in the same scope; but in that case (a) we'd also want
to include the print statement and (b) we'd need to pay attention to
what lock we were holding.  The pmem transaction doesn't have to worry
about locks, it is guarding *whatever* memory changes are made during the
transaction, no matter which object they are made to.  This is what I
mean by asyncio + pmem transactions being simpler than threading locks.

The transaction that is implied by 'with pop' is the heart of what
the nvml libpmemobj does:  it provides a transaction scope where any
registered changes to the persistent memory will be rolled back on abort,
or rolled back when the object pool is next opened if the transaction
did not complete before the crash.
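The rollback semantics can be sketched in pure Python with a context
manager.  To be clear, the Pool class below and its snapshot-based undo
log are illustrative stand-ins of my own invention, not the real
libpmemobj machinery (which records an undo log in persistent memory and
replays it on the next pool open):

```python
class Pool:
    """Toy model of a persistent object pool's transaction semantics."""

    def __init__(self):
        self._ns = {}       # committed state (stands in for persistent memory)
        self._undo = None   # snapshot taken while a transaction is active

    def __enter__(self):
        # Begin transaction: record the current state so we can roll back.
        self._undo = dict(self._ns)
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # Abort: restore the snapshot, as the real library would do by
            # replaying its undo log (including after a crash).
            self._ns = self._undo
        self._undo = None
        return False        # propagate any exception

    def __setitem__(self, key, value):
        self._ns[key] = value

    def __getitem__(self, key):
        return self._ns[key]
```

An exception raised inside the `with` block stands in for a crash: the
pool's state reverts to what it was when the transaction began.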

Of course, in Python as it stands today, the above program would not
do anything practical, since when the dict that becomes accounts is
allocated, it gets allocated in DRAM, not in persistent memory.  To use
persistent memory, I'm about to start working on an extension module that
basically re-implements most of the Python object classes so that they
can be stored in persistent memory instead of RAM.  Using my proposed API,
the above program would actually look like this:

    from sys import argv

    source_account = argv[1].lower()
    target_account = argv[2].lower()
    delta = float(argv[3])
    pop = pypmemobj('/path/to/pool')
    if not hasattr(pop.ns, 'accounts'):
        pop.ns.accounts = PersistentDict(pop)
    for acct in (source_account, target_account):
        if acct not in pop.ns.accounts:
            pop.ns.accounts[acct] = 0.0
    with pop:
        pop.ns.accounts[source_account] -= delta
        pop.ns.accounts[target_account] += delta
    print("{:10s} {:10.2f} {:10s} {:10.2f}".format(
        source_account, pop.ns.accounts[source_account],
        target_account, pop.ns.accounts[target_account]))

The difference here is creating a PersistentDict object, and telling it
what persistent memory to allocate itself in.
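To make the shape of such a class concrete, here is a minimal sketch of
what the PersistentDict shell might look like.  The pool's
register_change method is an invented name standing in for the real
undo-log registration, and the backing dict stands in for
pmem-allocated storage:

```python
from collections.abc import MutableMapping

class PersistentDict(MutableMapping):
    """Sketch: a mapping that knows which pool it was allocated in."""

    def __init__(self, pool):
        self._pool = pool   # the persistent object pool we live in
        self._data = {}     # stands in for storage allocated in that pool

    def __setitem__(self, key, value):
        # Every mutation is reported to the pool so the old state can be
        # logged for rollback if a transaction is active.
        self._pool.register_change(self)
        self._data[key] = value

    def __delitem__(self, key):
        self._pool.register_change(self)
        del self._data[key]

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```

The essential difference from a normal dict is the constructor argument:
the object cannot allocate itself without knowing which pool it belongs to.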

But what happens when we do:

    pop.ns.accounts[acct] = 0.0

We don't want to have to write

    pop.ns.accounts[acct] = PersistentFloat(pop, 0.0)

but if we don't, we'd be trying to store a pointer to a float object
that lives in normal RAM into our persistent list, and that pointer
would be invalid on the next program run.

Instead, for immutable objects we can make a copy in persistent RAM when
the assignment happens.  But we have to reject any attempt to assign a
mutable object in to a persistent object unless the mutable is itself
persistent...we can't simply make a copy of a mutable object and make
a persistent version, because of the following Python idiom:

    pop.ns.accounts = accounts = dict()

If we copied the dict into a PersistentDict automatically, pop.ns.accounts
and accounts would point to different objects, and the Python programmer's
expectations about his program would be violated.
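The assignment rule just described can be sketched as a small helper.
PersistentObject and copy_to_pool are hypothetical names standing in for
the extension module's real machinery; note that immutable *collections*
(tuple, frozenset) are deliberately excluded here, since they can contain
pointers to mutables and need their own persistent implementations:

```python
# Scalar immutables are safe to copy: no program can observe that the
# persistent copy is a different object in any way that matters.
IMMUTABLE_SCALARS = (int, float, complex, str, bytes, bool, type(None))

class PersistentObject:
    """Marker base class for objects already backed by a pool."""

def persist_value(pool, value):
    """Coerce a value for storage in a persistent container."""
    if isinstance(value, PersistentObject):
        return value            # already lives in persistent memory
    if isinstance(value, IMMUTABLE_SCALARS):
        return copy_to_pool(pool, value)
    # Mutables must be rejected: silently copying one would break the
    # 'pop.ns.accounts = accounts = dict()' idiom described above.
    raise TypeError(
        "cannot store non-persistent mutable %r in a persistent container"
        % type(value).__name__)

def copy_to_pool(pool, value):
    # Stand-in for the real allocate-in-persistent-memory-and-copy step.
    return value
```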

So that's a brief description of what I'm planning to try to implement as
an extension module (thoughts and feedback welcome).

It's pretty clear that having to re-implement every mutable Python C type,
as well as immutable collection types (so that we can handle pointers
to other persistent objects correctly) will be a pain.

What would it take for Python itself to support the notion of objects
being backed by different kinds of memory?

It seems to me that it would be possible, but that in CPython at
least the cost would be a performance penalty for the all-DRAM case.
I'm guessing this is not acceptable, but I'd like to present the notion
anyway, in the hopes that someone cleverer than me can see a better way :)

At the language level, support would mean two things, I think: there
would need to be a way to declare which backing store an object belongs
to, and there would need to be a notion of a memory hierarchy where, at
least in the case of persistent vs non-persistent RAM, an object higher in
the hierarchy could not point to an object lower in the hierarchy that
was mutable, and that immutables would be copied from lower to higher.
(Here I'm notionally defining "lower" as "closer to the CPU", since RAM
is 'just there' whereas persistent memory goes through a driver layer
before direct access can get exposed, and fast DRAM would be closer yet
to the CPU :).
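Under those notional definitions, the pointer rule might be sketched like
this; the tier numbers and names are invented purely for illustration:

```python
# Smaller number = "lower" = closer to the CPU, per the notional
# definition above: fast DRAM closest, then DRAM, then persistent memory.
TIERS = {'fast_dram': 0, 'dram': 1, 'pmem': 2}

def check_reference(container_tier, value_tier, value_is_mutable):
    """Decide how a container may reference a value across tiers.

    Returns 'store' (plain pointer is fine), 'copy' (copy the immutable
    up into the container's tier), or raises TypeError (forbidden).
    """
    if value_tier >= container_tier:
        # Same tier, or pointing from lower to higher: a pointer into
        # persistent memory from DRAM is always valid while the pool is open.
        return 'store'
    if value_is_mutable:
        # A persistent object holding a pointer to a DRAM mutable would
        # dangle on the next program run.
        raise TypeError(
            "higher-tier object may not reference a lower-tier mutable")
    return 'copy'
```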

In CPython I envision this being implemented by having every object
be associated with a 'storage manager', with the DRAM storage manager
obviously being the default.  I think that which storage manager an
object belongs to can be deduced from its address at runtime, although
whether that would be better than storing a pointer is an open question.
Object method implementations would then need to be expanded as follows:
(1) wrapping any operations that comprise an 'atomic' operation with a
'start transaction' and 'end transaction' call on the storage manager.
(2) any memory management function (malloc, etc) would be called
indirectly through the storage manager, (3) any object pointer to be
stored or retrieved would be passed through an appropriate call on the
storage manager to give it an opportunity to block it or transform it,
and (4) any memory range to be modified would be reported to the memory
manager so that it can be registered with the transaction.
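At the Python level, those four hook categories might be sketched as an
interface like the following.  In CPython this would really be a C-level
struct of function pointers consulted by the object implementations, and
every method name here is invented for illustration:

```python
class StorageManager:
    """Sketch of the per-object storage manager interface, points (1)-(4)."""

    def begin_transaction(self):            # (1) bracket an 'atomic' operation
        pass

    def end_transaction(self):
        pass

    def alloc(self, nbytes):                # (2) all malloc/free goes through here
        raise NotImplementedError

    def free(self, block):
        raise NotImplementedError

    def store_pointer(self, target):        # (3) chance to block or transform a
        raise NotImplementedError           #     pointer before it is stored

    def load_pointer(self, ptr):
        raise NotImplementedError

    def register_range(self, start, nbytes):  # (4) log a range for the undo log
        pass


class DRAMManager(StorageManager):
    """The default manager: conceptually, every hook is a no-op."""

    def alloc(self, nbytes):
        return bytearray(nbytes)

    def free(self, block):
        pass

    def store_pointer(self, target):
        return target                       # no transformation needed for DRAM

    def load_pointer(self, ptr):
        return ptr
```

The DRAM case illustrates the performance concern: even when every hook
does nothing, the indirection itself costs something on every operation.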

I *think* that's the minimum set of changes.  Clearly, however, making
such calls is going to be less efficient in the DRAM case than the
current code, even though the DRAM implementation would be a no-op.


Providing Python language level support for directly addressable
persistent RAM requires addressing the issues of how the application
indicates when an object is persistent, and managing the interactions
between objects that are persistent and those that are not.  In addition,
operations that a Python programmer expects to be atomic need to be
made atomic from the viewpoint of persistent memory by bracketing them
with implicit transactions, and a way to declare an explicit transaction
needs to be exposed to the application programmer.  These are all
interesting design challenges, and I may not have managed to identify
all the issues involved.

Directly addressable persistent memory is going to become much more
common, and probably relatively soon.  An extension module such as I
plan to work on can provide a way for Python to be used in this space,
but are there things that we are able and willing to do to support it
more directly in the language and, by implication, in the various Python
implementations?  Are there design considerations for this extension
module that would make it easier or harder for more integrated language
support to be added later?

I realize this could be viewed as early days to be bringing up this
subject, since I haven't even started the extension module yet.  On the
other hand, it seems to me that the community may have architectural
insights that could inform the development, and that starting the
conversation now without trying to nail anything down could be very
helpful for facilitating the long term support of variant memory types
in Python.
