Standard graph API?

Magnus Lie Hetland mlh at
Fri Sep 10 00:04:41 CEST 2004

In article <vCqZc.4817$w%6.3101 at>,
Andrew Dalke wrote:
>I've been off-line for a couple months so a bit late
>following up to this thread...

Well, I'm still looking for replies, so... ;)

>I too would like some standard collection of graph
>algorithms, but not necessarily a standard API.

Hm. How would the algorithms work without a standard API?

To clarify, by API I here mean a protocol, i.e. a definition of how a
graph is supposed to behave -- *not* a standard implementation.

>I work with a lot of molecular graphs which means I would prefer
>using names like "atoms" and "bonds" instead of "nodes" and "edges".

Sure -- but unless we have some standard protocol/API, it would be
hard to get the algorithms to work with your graphs...

I've been thinking a bit about the use of object adaptation here; I
think it would be quite perfect. One possibility would be to use the
functionality of PyProtocols, but that's hardly standard... It would,
however, allow you to do stuff like

  graph = AdjacencyMap(someStrangeMolecularGraph)
  # or graph = adapt(someStrange..., AdjacencyMap)

or the like...

We could then have a few different perspectives on graphs, such as
adjmap, incmap, adjarray and edgelist, for example.

Just loose thoughts.
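To make those perspectives concrete, here's a loose sketch of one small
graph in each of the four representations (the concrete shapes -- dict of
sets, dict of edge indices, nested lists -- are just my own guesses, not
anything fixed):

```python
# One small directed graph, viewed through four "perspectives".
# The shapes chosen here are illustrative, not a proposed standard.

edgelist = [('a', 'b'), ('a', 'c'), ('b', 'c')]      # list of (u, v) pairs

adjmap = {'a': {'b', 'c'}, 'b': {'c'}, 'c': set()}   # node -> neighbor set

incmap = {'a': {0, 1}, 'b': {2}, 'c': set()}         # node -> indices into edgelist

index = {'a': 0, 'b': 1, 'c': 2}                     # node -> row/column
adjarray = [[0, 1, 1],                               # dense adjacency matrix
            [0, 0, 1],
            [0, 0, 0]]

# All four views describe the same graph:
for i, (u, v) in enumerate(edgelist):
    assert v in adjmap[u]
    assert i in incmap[u]
    assert adjarray[index[u]][index[v]] == 1
```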

An adjacency map would work just like a dict of lists (or something
equivalent), and a dict of lists could be used as one -- except,
perhaps, for some "advanced" features, which would be optional.
(Modifying the graph is a bit awkward with this syntax, especially as
you have to know whether the adjacency collection is a list or a set,
for example.)
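As a sketch of what I mean: an algorithm written against only the
minimal protocol -- iterating over graph[node] to get neighbors -- works
unchanged on a plain dict of sets, no wrapper needed:

```python
# A plain dict (of sets, here) already satisfies the minimal
# "adjacency map" protocol: graph[node] yields the neighbors.

def depth_first_order(graph, start):
    """Yield nodes reachable from start, depth first."""
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            yield node
            stack.extend(graph[node])

g = {'a': {'b', 'c'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}
print(list(depth_first_order(g, 'a')))  # order within a level varies
```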

>David Eppstein wrote:
>> I would strongly prefer not to have weights or similar attributes as 
>> part of a graph API.  I would rather have the weights be a separate dict 
>> or function or whatever passed to the graph algorithm.  The main reason 
>> is that I might want the same algorithm to be applied to the same graph 
>> with a different set of weights.
>An alternative, which is both intriguing and sends alarm bells

Sounds like a fun combination ;)

>in my head is to have the algorithm
>collection instead generate code as needed, so that
>I can ask for, say, "depth first search using '.atoms'
>to get a list of neighboring nodes."

Wouldn't it be *much* better to use the established (but not standard)
mechanism of object adaptation, as championed in PEP 246 and as
implemented and extended upon in PyProtocols?

Note that we could certainly do with *only* the PEP 246 mechanism
(perhaps with some minor updates to the PEP) *without* PyProtocols, if
we wanted a more minimalist solution.
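For the curious, the core PEP 246 mechanism really is tiny. Here's a
minimal sketch of adapt() following the PEP's __conform__/__adapt__
hooks -- none of this is in the standard library, and the Molecule and
AdjacencyMap names are made up for illustration:

```python
# Minimal PEP 246-style adapt(): nothing here is standard library.

def adapt(obj, protocol):
    # 1. Already an instance of the protocol? Use it directly.
    if isinstance(obj, protocol):
        return obj
    # 2. Ask the object's type to conform to the protocol.
    conform = getattr(type(obj), '__conform__', None)
    if conform is not None:
        result = conform(obj, protocol)
        if result is not None:
            return result
    # 3. Ask the protocol to adapt the object.
    adapt_hook = getattr(protocol, '__adapt__', None)
    if adapt_hook is not None:
        result = adapt_hook(obj)
        if result is not None:
            return result
    raise TypeError("can't adapt %r to %r" % (obj, protocol))

class AdjacencyMap:
    """Hypothetical protocol: mapping of node -> iterable of neighbors."""

class Molecule:
    """Hypothetical molecular graph that knows how to conform."""
    def __init__(self, bonds):
        self.bonds = bonds
    def __conform__(self, protocol):
        if protocol is AdjacencyMap:
            return self.bonds   # expose the bonds as an adjacency map
        return None

mol = Molecule({'C': ['H', 'O'], 'H': [], 'O': []})
graph = adapt(mol, AdjacencyMap)
print(graph['C'])  # ['H', 'O']
```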

(If only adapt() could become a standard library function... <sigh> ;)

>The result could exec'ed to generate usable Python dynamically, or
>written to a file to be used as a normal Python module.

See above -- adapt() would be much better IMO, and would fill the same
need (as I see it).


>There would need to be some standards on how the graph is used, like
>"two nodes are the same iff 'node1 is node2'"

Yeah. Or one might want to allow equality to determine this, so that
the implementer could decide how to do it (nodes might be dynamically
created front-end objects after all).
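A quick sketch of why identity alone is too strict: two dynamically
created wrapper objects can represent the same node, in which case 'is'
fails but '==' (plus __hash__) does the right thing. The Atom class is,
of course, just made up:

```python
# Hypothetical front-end node objects: equality, not identity,
# decides whether two objects denote the same node.

class Atom:
    def __init__(self, index):
        self.index = index
    def __eq__(self, other):
        return isinstance(other, Atom) and self.index == other.index
    def __hash__(self):
        return hash(self.index)

a1, a2 = Atom(0), Atom(0)
assert a1 is not a2        # distinct front-end objects...
assert a1 == a2            # ...denoting the same node
assert len({a1, a2}) == 1  # usable as one key in seen-sets and dicts
```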

>or "the result of getting the list of edges is iterable."

Right -- that's typically the kind of API definition I'm after.

Basically, what I was proposing is a sort of "Python Graph API" in the
same vein as the "Python DB API" (PEP 249) -- that is, simply an
informational PEP about how graphs should behave to ensure
interoperability, possibly with some standard wrapper functions (or
maybe not).


It seems there are at least a few people who are interested in the
general concept, though. In case there is any merit in the idea, I've
set up a temporary wiki at

I'll post a separate announcement about it.

Magnus Lie Hetland       The time you enjoy wasting is not wasted time
                                                    -- Bertrand Russell
