> Greg Ewing:
> That's completely different from what I had in mind, which was:
> (1) Keep a reference to the base object in the buffer object, and
> (2) Use the buffer API to fetch a fresh pointer from the
> base object each time it's needed.
> Is there some reason that still wouldn't be safe enough?
I don't know if this can help, but I once created an object along these lines;
you can get it at: http://s.keim.free.fr/mem/ (see the memslice module).
From my experience, this solves all the problems caused by the buffer
interface.
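A Python-level sketch of the scheme Greg describes may help: hold only a reference to the base object and re-fetch the data on every access, never caching a pointer. The class below is purely illustrative; a real implementation would do this through the C-level buffer API.

```python
class FreshView:
    """Illustrative Python-level analogue of the proposed buffer design:
    hold the base object, re-fetch its data each time it's needed."""

    def __init__(self, base, start, stop):
        self._base = base              # (1) keep a reference to the base object
        self._start, self._stop = start, stop

    def tobytes(self):
        # (2) fetch a fresh view of the data on every access, so a
        # resize/realloc of the base can never leave us with a stale pointer
        return bytes(self._base[self._start:self._stop])

buf = bytearray(b"hello world")
v = FreshView(buf, 0, 5)
buf[0:5] = b"HELLO"        # mutate the base after the view was created
print(v.tobytes())          # -> b'HELLO'
```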
I was approached by a legal firm with the questions below about
Python's crypto capabilities, from the POV of a legal review of
exporting software that embeds Python. I don't have time to research
the answers myself (I'm no crypto expert). If you think you can
answer the questions, please send me a price quote and I'll forward it
to them. They'd like the answers ASAP.
--Guido van Rossum (home page: http://www.python.org/~guido/)
------- Forwarded Message
> Hello Guido,
> I understand Python is open source, but when open source code is
> integrated in a commercial product, the owner of the commercial product
> must include the open source code in their product analysis for U.S.
> export classification purposes. Although as open source, Python falls
> under an export control exception, this exception is lost once the code is
> offered in a commercial product.
> I would appreciate your help in obtaining some additional technical
> information in order to complete my export classification analysis.
> 1. We have been advised the following encryption content is in Python.
> We are looking for additional information regarding the encryption content:
> a. The Rotor module, which implements a very ancient
> encryption algorithm based on the German Enigma. Please tell us the
> symmetric key length of the encryption contained within this module.
> Please also advise the asymmetric key exchange algorithm length.
> b. The wrapper module for Open SSL. Again, please tell
> us the symmetric key length of the encryption content contained within
> this module. Please also advise the asymmetric key exchange algorithm
> length.
> c. The following questions apply to both the Rotor
> module and the wrapper module:
> i. can the encryption function be directly
> accessed, or modified, by the end user?
> ii. Do either of these encryption components
> contain an "Open Cryptographic Interface" (an interface that is not fixed
> and permits a third party to insert encryption functionality)?
> The following chart is an example of the type of information I need to
> submit to the U.S. government. Would you be able to provide similar
> information regarding the encryption component(s) included within Python?
> Algorithm       Source    Key-min  Key-max  Modes
> RC2             OpenSSL   40       128      CBC, ECB, CFB, OFB
> ARC4            OpenSSL   40       128      N/A (stream encryption)
> DES             OpenSSL   40       56       CBC, ECB, CFB, OFB
> DESX            OpenSSL   168      168      CBC
> 3DES-2Key       OpenSSL   112      112      CBC, ECB, CFB, OFB
> 3DES            OpenSSL   168      168      CBC, ECB, CFB, OFB
> Blowfish        OpenSSL   128               CBC, ECB, CFB, OFB
> Diffie-Hellman  OpenSSL   192*     16384*   Key-exchange, authentication
> DSA             OpenSSL                     Digital Signature
> MD5             OpenSSL                     Integrity
> SHA-1           OpenSSL                     Integrity
> * No explicit limit; these appear to be the practical range of values.
------- End of Forwarded Message
On Mon, Oct 20, 2003, Guido van Rossum wrote:
>> We are indeed sure (sadly) that list comprehensions leak their control
>> variables.
> But they shouldn't. It can be fixed by renaming them (e.g. numeric
> names with a leading dot).
?!?! When listcomps were introduced, you were strongly against any
changes that would make it difficult to switch back and forth between a
listcomp and its corresponding equivalent for loop. Are you changing
your position or are you suggesting that for loops should grow private
control variables too?
Aahz (aahz(a)pythoncraft.com) <*> http://www.pythoncraft.com/
"It is easier to optimize correct code than to correct optimized code."
I've lost context for the following thread. What is this about? I
can answer one technical question regardless, but I have no idea what
I'm promoting here. :-)
> Hello Alex,
> On Mon, Oct 27, 2003 at 04:09:03PM +0100, Alex Martelli wrote:
> > Cool! Why don't you try copy.copy on types you don't automatically
> > recognize and know how to deal with, BTW? That might make this
> > cool piece of code general enough that Guido might perhaps allow
> > generator-produced iterators to grow it as their __copy__ method...
> I will try. Note that only __deepcopy__ makes sense, as far as I can tell,
> because there is too much state that really needs to be copied and not shared
> in a generator (in particular, the sequence iterators from 'for' loops).
> I'm not sure about how deep-copying should be defined for built-in
> types. Should a built-in __deepcopy__ method try to import and call
> copy.deepcopy() on the sub-items? This doesn't seem to be right.
Almost -- you have to pass the memo argument that your __deepcopy__
received as the second argument to the recursive deepcopy() calls, to
avoid looping on cycles.
> See you soon,
--Guido van Rossum (home page: http://www.python.org/~guido/)
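A minimal sketch of that advice (the class and names here are illustrative, not from the thread): register the new object in the memo *before* recursing, and pass the memo to every nested deepcopy() call so cycles resolve to the copy already in progress.

```python
import copy

class Node:
    """Toy linked-list node used to show __deepcopy__ with a memo."""
    def __init__(self, value):
        self.value = value
        self.next = None

    def __deepcopy__(self, memo):
        new = Node(self.value)
        # Register the copy before recursing, so a cycle back to self
        # finds the in-progress copy instead of looping forever.
        memo[id(self)] = new
        new.next = copy.deepcopy(self.next, memo)
        return new

a, b = Node(1), Node(2)
a.next, b.next = b, a      # a two-node cycle
a2 = copy.deepcopy(a)
print(a2.next.next is a2)  # -> True: the cycle is reproduced in the copy
```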
Hello Python gurus!
I've been learning a lot about Python by following you folks here. Lots of headscratching on my part, but slowly the elegance and utility of Python are sinking in.
I've been going through the PEPs on the Python site. Since I don't live and breathe with the PEPs like you do, I'm having a bit of a problem seeing the forest for the trees. Specifically, the PEPs that are most active or current don't 'pop off the page' in the PEP index.
Is there a view of the PEP index available that is sorted by the date each PEP was last edited? I've looked at the listing of the PEPs in CVS, sorted by age. That's pretty close, but the CVS listing doesn't show the title or status of each PEP.
#- Are you comfortable with CVS? Would you like to check your changes in
#- directly? (Since this is sandbox, it doesn't require the usual rigorous
#- approval process for patches.)
Kevin Jacobs wrote:
#- I'd be happier with at least one round of review before committing to
#- CVS. The code is fairly complex and an extra set of eyes will help keep
#- things focused. I've also volunteered to be that extra set of eyes, and
#- plan a quick turn-around on any patches sent to me.
I'm not comfortable with CVS.
I think I'll use the extra pair of eyes of Kevin (thanks), and start
learning CVS while keeping the universe secure, :)
Thank you all.
#- > >>> myDecimal = Decimal(5)
#- > >>> myfloat = 3.0
#- > >>> mywhat = myDecimal + myfloat
#- > >>> isinstance(mywhat, float)
#- > True
#- Absolutely not. No way, no how, no time. -1000
#- Floats are inexactly represented in Python. My opinion is that conversion
#- between float and Decimal should always be explicit (and my
#- understanding is that Tim Peters agrees).
I'm not decided on any option. I just want (it would be nice) for the group to
settle either way. There's some controversy about this.
Anyway, I'll spell out the options in the pre-PEP, and we will all make a
decision.
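For reference, the decimal module as it eventually shipped took the explicit-conversion position being argued for above: mixing Decimal and float in arithmetic raises, and the caller must convert explicitly.

```python
from decimal import Decimal

d = Decimal(5)
f = 3.0

# Implicit mixing in arithmetic raises TypeError rather than silently
# producing a float (or a Decimal built from an inexact binary value):
try:
    d + f
    mixed_ok = True
except TypeError:
    mixed_ok = False

# Conversion must be explicit; going through str() avoids surprises from
# the binary representation of the float:
total = d + Decimal(str(f))
print(mixed_ok, total)  # -> False 8.0
```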
At 10:03 AM 10/29/03 +1100, Delaney, Timothy C (Timothy) wrote:
> > From: Phillip J. Eby [mailto:firstname.lastname@example.org]
> > At 09:56 AM 10/28/03 +0100, Alex Martelli wrote:
> > >AND, adaptation is not typecasting:
> > >e.g y=adapt("23", int) should NOT succeed.
> > And, why do you consider adaptation *not* to be typecasting?
> > I always
> > think of it as "give me X, rendered as a Y", which certainly
> > sounds like a
> > description of typecasting to me.
>Because (IMO anyway) adaption is *not* "give me X, rendered as Y".
>Adaption is "here is an X, can it be used as a Y?".
>They are two distinct concepts, although obviously there are crossover
>points.
Yes, just like 2+2==4 and 2*2==4.
>A string cannot be used as an int, although an int can be created from the
>string representation of an int.
I'd often like to "use a string as an integer", or use some arbitrary
object as an integer. Of course, there's a perfectly valid way to express
this now (i.e. 'int()'), and I think that's fine and in my code I will
personally prefer to use int() to mean I want an int, because that's clearer.
But, if for some reason I have code that is referencing some protocol as a
*parameter*, say 'p', and I have no way to know in advance that p==int,
then the most sensible thing to do is 'adapt(x,p)', rather than
'p(x)'. (Assuming 'p' is expected to be a protocol, rather than an
arbitrary callable.)
Now, given that 'p' *might* be 'int' in some cases, it seems reasonable to
me that adapt("23",p) should return 23 in such a case. Since 23 satisfies
the desired contract (int) on behalf of "23", this seems to be a correct
adaptation. For a protocol p that has immutability as part of its
contract, adapt(x,p) is well within its rights to return an object that is
a "copy" of x in some sense. The immutability requirement means that the
"adapted" value can never change, so really it's a *requirement* that the
"adaptation" be a snapshot.
>Adaption should not involve any change to the underlying data - mutating
>operations on the adapted object should (attempt to) mutate the original
>object (assuming the adapted object and original object are not one and
>the same).
I agree 100% -- for a protocol whose contract doesn't require immutability
the way 'int''s does.
I think now that I understand, however, why you and Alex think I'm saying
something different than I've been saying. To both of you, "typecasting"
means "convert to a different type" at an *implementation* level (as it is
in other languages), and I mean at a *logical* level. Thus, to me, "I
would like to use X as a Y" includes whatever contracts Y supplies *as
applied to X*. Not, "give me an instance of Y that's a copy of X".
It just so happens, however, that for a protocol whose contract includes
immutability, these two concepts overlap, just as multiplication and
addition overlap for the case of 2+2==2*2. So, IMO, for immutable types
such as tuple, str, int, and float, I believe that it's reasonable for
adapt(x,p)==p(x) iff x is not an instance of p already, and does not have a
__conform__ method that overrides this interpretation.
That such a default interpretation is redundant with p(x), I also
agree. However, for code that uses protocols dynamically, that redundancy
would eliminate the need to make a dummy protocol (e.g. 'IInteger') to use
in place of 'int'.
OTOH, if Guido decides that Python's eventual interface objects shouldn't
be types, then there will be an IInteger anyway, and the point becomes moot.
Anyway, I can only understand Alex's objection to such adaptation if he is
saying that there is no such thing as adapting to an immutable
protocol! In that case, there could never exist such a thing as IInteger,
because you could never adapt anything to it that wasn't already an
IInteger. Somehow, this seems wrong to me.
Over in the Web SIG, it was noted that the HTML parser in htmllib has
handlers for HTML 2.0 elements, and it should really support HTML 4.01, the
current version. I'm looking into doing this.
We actually have two HTML parsers: htmllib.py and the more recent
HTMLParser.py. The initial check-in comment from 2001/05/18 for
HTMLParser.py reads:
A much improved HTML parser -- a replacement for sgmllib. The API is
derived from but not quite compatible with that of sgmllib, so it's a
new file. I suppose it needs documentation, and htmllib needs to be
changed to use this instead of sgmllib, and sgmllib needs to be
declared obsolete. But that can all be done later.
sgmllib only handles those bits of SGML needed for HTML, and anyone doing
serious SGML work is going to have to use a real SGML parser, so deprecating
sgmllib is reasonable. HTMLParser needs no changes for HTML 4.01; only
htmllib needs to get a bunch more handler methods.
Should I try to do this for 2.4?
(I can't find an explanation of how the API differs between the two modules
but can figure it out by inspecting the code, and will try to keep the
htmllib module backward-compatible.)
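For readers unfamiliar with the newer module, its API is handler-method based: you subclass the parser and override `handle_starttag`, `handle_data`, etc. The sketch below uses Python 3's `html.parser` module path; at the time of this thread the class lived in the top-level `HTMLParser.py`.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attributes from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkCollector()
parser.feed('<p><a href="http://example.com/">a link</a></p>')
print(parser.links)  # -> ['http://example.com/']
```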
> From: Zack Weinberg [mailto:email@example.com]
> > However, its use in such expressions as
> > sublist = lst[:var]
> > would lead to substantial ambiguities, right?
> I suppose it would. Unfortunately, there's no other punctuation mark
> that can really be used for the purpose -- I think both $ and @
> (suggested elsewhere in response to a similar proposal) have
> too many countervailing connotations. Witness e.g. the suggestion
> last week that $ become magic in string % dict notation.
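The ambiguity conceded above is concrete: `lst[:var]` is already valid slicing syntax today, so `:var` could not also denote a special variable reference in that position.

```python
lst = [10, 20, 30, 40]
var = 2
# ':var' inside brackets already parses as 'slice from the start up to var',
# so the proposed ':var' spelling would collide with existing syntax here.
print(lst[:var])  # -> [10, 20]
```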
First of all, I'm strongly *against* the idea of :var.
However, I think a syntax that would work with no ambiguities, and not look too bad, would be:
sublist = lst[.var]
I would also be strongly against this suggestion - while it deals with the problems I see with the current suggestion, it has its own problems, including (but not limited to) not being very obvious.