New Python block cipher API, comments wanted

Paul Rubin phr-n2003b at NOSPAMnightsong.com
Wed Jan 29 06:48:28 CET 2003


mertz at gnosis.cx (David Mertz, Ph.D.) writes:
> The rotor module existed before the PEP process did.  If someone wanted
> to introduce rotor today as a new addition, I would strongly advocate
> that it not be added without a prior PEP.

OK, look at all the other library modules introduced more recently,
and tell me how many have PEPs.  The answer is not many--in fact,
looking at the "Finished PEPs" section of <www.python.org/peps>, it
appears to be none.  A few of the "Open PEPs" (268, 282, and maybe one
or two others) are purely about library modules.  The rest involve
changing existing Python internals in one way or another.

> |Anyway, so far at least, the actual crypto application developers who
> |have looked at this module have either said it looks fine or else have
> |suggested minor changes which are easy to incorporate.
> 
> Ahhh... minor changes!  Exactly why a PEP (or a revision of an existing
> one) is necessary.

Well, no, an informal process (or no process) is usually enough, as we
see from the many libraries released without having gone through the
PEP process.  Notice that PEP 272 looks intended mostly to guide
cipher module implementers, not application writers.  The same can be
said of the other informational PEPs 247 through 249.  Application
interfaces are generally described in ordinary documentation, not
PEPs.

In fact, I don't think any module implementing PEP 272 has ever been
written, and the interface it describes isn't that great, so I
wouldn't call the PEP process a big success where block ciphers are
concerned.  There was a bunch of subsequent discussion on
Python-crypto that led up to the current proposal, and that seems to
have done a better job.
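For readers who haven't looked at PEP 272, its calling convention is
roughly the sketch below.  The repeating-key XOR "cipher" is a toy
stand-in of my own, NOT a real or secure algorithm--it's only there so
the module-level new(), the mode constants, and the cipher-object
attributes the PEP describes have something concrete behind them:

```python
# Toy module sketching the PEP 272 block-cipher interface.
# The "cipher" is repeating-key XOR: insecure, purely illustrative.

MODE_ECB = 1  # PEP 272 defines these module-level mode constants
MODE_CBC = 2

block_size = 8   # bytes per block (module-level, per PEP 272)
key_size = 0     # 0 means "any key length accepted" (PEP 272 convention)


class _XORCipher:
    def __init__(self, key, mode, IV=None):
        if mode != MODE_ECB:
            raise ValueError("this toy only supports MODE_ECB")
        self.key = key
        self.mode = mode
        self.IV = IV
        self.block_size = block_size  # cipher objects expose these too
        self.key_size = key_size

    def _xor(self, data):
        k = self.key
        return bytes(b ^ k[i % len(k)] for i, b in enumerate(data))

    def encrypt(self, plaintext):
        if len(plaintext) % self.block_size:
            raise ValueError("input must be a multiple of block_size")
        return self._xor(plaintext)

    def decrypt(self, ciphertext):
        # XOR with the same keystream is its own inverse
        return self._xor(ciphertext)


def new(key, mode=MODE_ECB, IV=None):
    """Module-level factory, as PEP 272 specifies."""
    return _XORCipher(key, mode, IV)
```

An application would then write something like
`ct = new(b"secret").encrypt(b"attack!!")` and recover the plaintext
with `new(b"secret").decrypt(ct)`; the complaint in the thread is
about the details of exactly this shape of interface.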

> |But such a delay means that useful, real-world applications that need
> |the module will take longer to become widely deployable, etc.
> 
> Nope.  It means that real-world applications that want to utilize the
> module will need to include it in their distribution archive.  That's
> all.  A few extra bytes to distribute.

That's reasonable for modules written in Python, but I don't subscribe
to that idea for modules written in C.  I can't expect the users to
have C compilers, and I don't have the dev tools to compile binaries
of the module for every platform people might want to use it on.  I
don't even have tools to compile it for Windows.  And even if I had
all those compilers, shipping dozens of binaries is far more than "a
few extra bytes".  I don't really want to distribute binaries AT ALL,
since I wouldn't want to run binaries that I got from other people.
(I'm semi-ok with binaries from the central Python distribution, but
if it's some random module written by some random stranger, I want to
compile it myself).  Finally, even if I can provide a binary for the
user's platform and OS, the user may not be allowed to run it.  For
example, if someone wants to use the crypto module in a Python web
application (this is a realistic and probable scenario), they have a
hard enough time finding web hosts that let them run Python CGIs at
all.  Finding a web host that lets them install their own binaries is
even harder.

So, the only way to make apps which depend on a C module widely
deployable is to get the module accepted into the Python library, and
then wait a long time until most users have a new enough version of
Python to have gotten the module along with it.  I realize this is a
screwy situation, but it's what we have.  I hope it will be eased by
the introduction of native-code Python compilers that will make fast
enough code from pure Python libraries that we won't need as many C
modules.  But that's a long way in the future.
