From jimjjewett at gmail.com Sat Apr 1 00:01:54 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 31 Mar 2006 17:01:54 -0500 Subject: [Python-Dev] Class decorators Message-ID: <fb6fbf560603311401u5bc129f4p605822ecc5b562f0@mail.gmail.com> Nick Coghlan wrote: > [ much good, including the @instance decorator ] > P.S. If all you want is somewhere to store mutable > state between invocations, you can always use the > function's own attribute space >>> def f(): print "Hi world from %s!" % f >>> f() Hi world from <function f at 0x00AE90B0>! Not really. That assumes the expected name is (permanently) bound to *this* function in this function's globals. That's normally true, but not always, so relying on it seems wrong. >>> f="a string" >>> g() Hi world from a string! And of course, sometimes I really do want shared state, or helpers (like other methods on a class), or one-time ininitialization plus per-call parameters (like the send method on 2.5 generators), or ... -jJ From skip at pobox.com Sat Apr 1 00:58:22 2006 From: skip at pobox.com (skip at pobox.com) Date: Fri, 31 Mar 2006 16:58:22 -0600 Subject: [Python-Dev] gmane.comp.python.devel.3000 has disappeared In-Reply-To: <e0k2h5$vo8$1@sea.gmane.org> References: <e0k2h5$vo8$1@sea.gmane.org> Message-ID: <17453.46094.197153.725065@montanaro.dyndns.org> Terry> For about a week, I have been reading and occasionally posting to Terry> the new pydev-3000 mailing list via the gmane mirror Terry> gmane.comp.lang.devel.3000. Today, it has disappeared and was Terry> still gone after reloading their newsgroup list. Was this Terry> intentional on the part of the mail list maintainers? (I Terry> certainly hope not!) Or is it a repairable glitch somewhere Terry> between the list and gmane? Gmane is subscribed. I've sent a message to them. Hopefully they will get things corrected soon. Skip From martin at v.loewis.de Sat Apr 1 00:59:03 2006 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 01 Apr 2006 00:59:03 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned abug/patch In-Reply-To: <e0ikk2$n8o$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <9e804ac0603271427p332b5a9cid84380b44aca1a33@mail.gmail.com> <00ff01c6523f$7c583290$bf03030a@trilan> <1143554576.2310.161.camel@geddy.wooz.org> <17449.21764.475497.457592@montanaro.dyndns.org> <44298954.4040705@v.loewis.de> <bbaeab100603282345l260b3a3chfe219408eea3dc69@mail.gmail.com> <e0ikk2$n8o$1@sea.gmane.org> Message-ID: <442DB437.6070108@v.loewis.de> Robert Kern wrote: > FWIW: Trac has a Sourceforge bug tracker import script: > > http://projects.edgewall.com/trac/browser/trunk/contrib/sourceforge2trac.py > > Apologies: for the other blank reply. That isn't actually worth that much: somebody would need to operate it, too. Mere existence doesn't help. Regards, Martin From barry at python.org Sat Apr 1 00:38:22 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 31 Mar 2006 17:38:22 -0500 Subject: [Python-Dev] gmane.comp.python.devel.3000 has disappeared In-Reply-To: <e0k2h5$vo8$1@sea.gmane.org> References: <e0k2h5$vo8$1@sea.gmane.org> Message-ID: <1143844702.346.11.camel@resist.wooz.org> On Fri, 2006-03-31 at 15:13 -0500, Terry Reedy wrote: > For about a week, I have been reading and occasionally posting to the new > pydev-3000 mailing list via the gmane mirror gmane.comp.lang.devel.3000. > Today, it has disappeared and was still gone after reloading their > newsgroup list. 
Was this intentional on the part of the mail list > maintainers? (I certainly hope not!) Or is it a repairable glitch > somewhere between the list and gmane? No idea, but the ng's do appear to be gone. Definitely not intentional on our part! -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060331/27b9ec8b/attachment.pgp From jimjjewett at gmail.com Sat Apr 1 01:10:49 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 31 Mar 2006 18:10:49 -0500 Subject: [Python-Dev] reference leaks, __del__, and annotations Message-ID: <fb6fbf560603311510p231679bfo461b63fd6ec77b22@mail.gmail.com> Duncan Booth wrote: > Surely if you have a cycle what you want to do is to pick just *one* of the > objects in the cycle and break the link which makes it participate in the > cycle. No, I really meant to do them all. I was trying to avoid creating an attractive nuisance. In http://mail.python.org/pipermail/python-dev/2006-March/063239.html, Thomas Wouters categorized the purpose of __del__ methods as [paraphrased] (A) Resource Cleanup (cycles irrelevant) (B) Object created a cycle, and knows how to break it safely. (C) Those which do things the style guide warns against. The catch is that objects in the first two categories have no way to know when they should do the cleanup, except by creating a __del__ method (as generators now do). For category (A), their __del__ methods should be called as early as possible. Even for category (B), it does no harm. Why waste time recalculating cycles more often than required? If the object is lying about whether it can be used to safely break cycles, that is a bug, and I would rather catch it as early as possible, instead of letting it hide behind "oh well, the cycle was already broken". In fairness, I had not fully considered category(C), when an object intends not only to revive itself, but to prevent its subobjects from cleaning up either. Nick Coghlan > A simple Boolean attribute (e.g. __finalized__) should be enough. ... > If it's both present and true, the GC can ignore the finaliser on that instance That doesn't really take care of resource release, which needs to be called, and called early.(And the name will sound odd if it holds resources only sometimes, so that it has to flip the __finalized__ attribute.) -jJ From tjreedy at udel.edu Sat Apr 1 01:36:31 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 31 Mar 2006 18:36:31 -0500 Subject: [Python-Dev] gmane.comp.python.devel.3000 has disappeared References: <e0k2h5$vo8$1@sea.gmane.org> <ca471dc20603311238o79dafb4bu5ad78bccd7b593a2@mail.gmail.com> Message-ID: <e0kee0$5gs$1@sea.gmane.org> "Guido van Rossum" <guido at python.org> wrote in message news:ca471dc20603311238o79dafb4bu5ad78bccd7b593a2 at mail.gmail.com... > Wasn't my intention. Good ;-) > gmane is black magic to me (I've never used it) so I can't be much > help debugging this... I did add 3 new admins and changed the list > password. Since one of the last messages I read was your announcement of the coming changeover, I suspect some part of that might be the cause. Skip > Gmane is subscribed. I've sent a message to them. Hopefully they will > get > things corrected soon. I posted a note to gmane.discuss before I read the second sentence. 
But I will probably have to confirm my humaness before it goes thru, so I will let it sit a day first. Thanks for responding. Terry Jan Reedy From greg.ewing at canterbury.ac.nz Sat Apr 1 02:52:23 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 01 Apr 2006 12:52:23 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <442D2E47.8070809@gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> Message-ID: <442DCEC7.9090906@canterbury.ac.nz> Nick Coghlan wrote: > Generators are even more special, in that they only require finalisation in > the first place if they're stopped on a yield statement inside a try-finally > block. I find it rather worrying that there could be a few rare cases in which my generators cause memory leaks, through no fault of my own and without my being able to do anything about it. Will there be a coding practice one can follow to ensure that this doesn't happen? -- Greg From ncoghlan at gmail.com Sat Apr 1 06:58:34 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 01 Apr 2006 14:58:34 +1000 Subject: [Python-Dev] Class decorators In-Reply-To: <fb6fbf560603311401u5bc129f4p605822ecc5b562f0@mail.gmail.com> References: <fb6fbf560603311401u5bc129f4p605822ecc5b562f0@mail.gmail.com> Message-ID: <442E087A.1060401@gmail.com> Jim Jewett wrote: > Nick Coghlan wrote: > >> [ much good, including the @instance decorator ] > >> P.S. If all you want is somewhere to store mutable >> state between invocations, you can always use the >> function's own attribute space > > >>> def f(): print "Hi world from %s!" % f > > >>> f() > Hi world from <function f at 0x00AE90B0>! > > Not really. That assumes the expected name is (permanently) bound to > *this* function in this function's globals. That's normally true, but > not always, so relying on it seems wrong. Well, true. If you want it to be bullet-proof, you have to use a closure instead: >>> def make_f(): ... def f(): ... print "Hi world from %s!" % f ... return f ... >>> f = make_f() >>> f() Hi world from <function f at 0x00AE90B0>! The point about the decorators still holds - they stay with the function they're decorating (the inner one), and you don't need to make any of the changes needed in order to decorate the class instead. The above is also a use case for a *function* decorator I've occasionally wanted - an @result decorator, that is the equivalent of the @instance decorator I suggested for classes: >>> def result(f): ... return f() ... >>> @result ... def f(): ... def f(): ... print "Hi world from %s!" % f ... return f ... >>> f() Hi world from <function f at 0x00AE90B0>! I never actually did it, though, as I was stuck on Python 2.2 at the time. This is something of a tangent to the real discussion though, so I'll leave this one there :) Cheers, Nick. 
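P.S. For comparison, the @instance idea for classes amounts to the same trick: the decorator replaces the class with a single instance of it. Since there is no class decorator syntax yet, it's written below as a plain call, and the Counter class is just an invented example:

    def instance(cls):
        return cls()

    class Counter(object):
        def __init__(self):
            self.count = 0
        def __call__(self):
            self.count += 1
            print "call number %d" % self.count

    Counter = instance(Counter)  # the name is now bound to the instance
    Counter()                    # call number 1
    Counter()                    # call number 2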
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Sat Apr 1 07:27:45 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 01 Apr 2006 15:27:45 +1000 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <442DCEC7.9090906@canterbury.ac.nz> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> Message-ID: <442E0F51.7030608@gmail.com> Greg Ewing wrote: > Nick Coghlan wrote: > >> Generators are even more special, in that they only require finalisation in >> the first place if they're stopped on a yield statement inside a try-finally >> block. > > I find it rather worrying that there could be a > few rare cases in which my generators cause > memory leaks, through no fault of my own and > without my being able to do anything about it. The GC changes PJE is looking at are to make sure you *can* do something about it. If the generator hasn't been started, or has already finished, then the GC won't consider it as needing finalisation. > Will there be a coding practice one can follow > to ensure that this doesn't happen? I believe PJE's fix should take care of most cases (depending on how aggressive we can safely be, it may even take care of all of them). If there are any remaining cases, I think the main thing is to avoid keeping half-finished generators around: from contextlib import closing with closing(itr): # Use the iterator in here as you wish # secure in the knowledge it will be # cleaned up promptly when you are done # whether it is a file, a generator or # something with a database connection for item in itr: print item Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From greg.ewing at canterbury.ac.nz Sat Apr 1 08:50:02 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 01 Apr 2006 18:50:02 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <442E0F51.7030608@gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> Message-ID: <442E229A.8030608@canterbury.ac.nz> Nick Coghlan wrote: > from contextlib import closing > > with closing(itr): > # Use the iterator in here as you wish > # secure in the knowledge it will be > # cleaned up promptly when you are done > # whether it is a file, a generator or > # something with a database connection > for item in itr: > print item I seem to remember we've been here before. I'll be disappointed if I have to wrap every for-loop that I write in a with-statement on the offchance that it might be using a generator that needs finalisation in order to avoid leaking memory. I'm becoming more and more convinced that we desperately need something better than __del__ methods to do finalisation. A garbage collector that can't be relied upon to collect garbage is simply not acceptable. 
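To be concrete, the kind of generator at issue is something like this (a minimal sketch; the file name is invented):

    def follow(f):
        try:
            for line in f:
                yield line.rstrip()
        finally:
            f.close()     # runs only when the generator is finalised

    it = follow(open("data.txt"))
    print it.next()       # now suspended at the yield, inside the try-finally
    it.close()            # the 2.5 close() method runs the finally block now

If 'it' instead ends up in a reference cycle while still suspended, that pending finally clause is exactly what the cycle collector currently refuses to run on its own.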
-- Greg From tds333+pydev at gmail.com Sat Apr 1 11:19:21 2006 From: tds333+pydev at gmail.com (Wolfgang Langner) Date: Sat, 1 Apr 2006 11:19:21 +0200 Subject: [Python-Dev] gmane.comp.python.devel.3000 has disappeared In-Reply-To: <17453.46094.197153.725065@montanaro.dyndns.org> References: <e0k2h5$vo8$1@sea.gmane.org> <17453.46094.197153.725065@montanaro.dyndns.org> Message-ID: <4c45c1530604010119j377a80e2xdc0dd8cf64cfaff5@mail.gmail.com> On 4/1/06, skip at pobox.com <skip at pobox.com> wrote: > > Terry> For about a week, I have been reading and occasionally posting to > Terry> the new pydev-3000 mailing list via the gmane mirror > Terry> gmane.comp.lang.devel.3000. Today, it has disappeared and was > Terry> still gone after reloading their newsgroup list. Was this > Terry> intentional on the part of the mail list maintainers? (I > Terry> certainly hope not!) Or is it a repairable glitch somewhere > Terry> between the list and gmane? > > Gmane is subscribed. I've sent a message to them. Hopefully they will get > things corrected soon. Yes Gmane is subscribed. I checked if there is a pydev-3000 newsgroup on there server. Python 3000 new group is: http://dir.gmane.org/gmane.comp.python.python-3000.devel I tested it (subscribed) and it works. Try to refresh your subscription list in the news reader. -- bye by Wolfgang From thomas at python.org Sat Apr 1 12:06:38 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 1 Apr 2006 12:06:38 +0200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <442E0F51.7030608@gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> Message-ID: <9e804ac0604010206r23c50904o71fecdbc45a80d37@mail.gmail.com> On 4/1/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > > Greg Ewing wrote: > > I find it rather worrying that there could be a > > few rare cases in which my generators cause > > memory leaks, through no fault of my own and > > without my being able to do anything about it. > > The GC changes PJE is looking at are to make sure you *can* do something > about > it. If the generator hasn't been started, or has already finished, then > the GC > won't consider it as needing finalisation. Actually, if a generator has already finished, it no longer holds a suspended frame alive, and there is no cycle (at least not through the generator.) That's why test_generators no longer leaks; explicitly closing the generator breaks the cycle. So the only thing fixing GC would add is cleaning up cycles where a created but not started generator is the only thing keeping the cycle alive. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060401/903cf1d2/attachment.html From thomas at python.org Sat Apr 1 12:19:34 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 1 Apr 2006 12:19:34 +0200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <442E229A.8030608@canterbury.ac.nz> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> Message-ID: <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> On 4/1/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > I'm becoming more and more convinced that we > desperately need something better than __del__ > methods to do finalisation. A garbage collector > that can't be relied upon to collect garbage > is simply not acceptable. Sure. I don't believe it's too hard, it just means violating some of the premises people have been writing __del__ methods under. For instance, to clean up cycles nicely we might have to set some attributes to None before calling __del__, so you can't rely on attributes being meaningful anymore. However, this is already the case for global names; I've seen many people wonder about their __del__ method raising warnings (since exceptions are ignored) going, say, 'NoneType has no attribute 'registry'' when they try to un-register their class but the global registry has been cleaned up already. While we're at it, I would like for the new __del__ (which would probably have to be a new method) to disallow reviving self, just because it makes it unnecessarily complicated and it's rarely needed. Allowing a partially deleted object (or an object part of a partially deleted reference-cycle) to revive itself is not terribly useful, and there's no way to restore the rest of the cycle. I suggested a __dealloc__ method earlier in the thread to do this. I didn't think of allowing attributes to be cleared before calling the method, but I do believe that is necessary to allow, now that I've thought more about it. An alternative would be to make GC check for a 'cleanup-cycle' method on any of the objects in the cycle, and just feed it the complete cycle of objects, asking it to clean it up itself (or maybe reconnect one of the objects itself.) That would also make debugging uncollectable cycles a lot easier ;-) But I'm not sure whether that will improve things. The generator example, the trigger for this discussion, could solve its cycle by just closing itself, after which the cycle is either broken or reconnected, but I don't know if other typical cycles could be resolved that easily. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060401/e0aa5a9a/attachment.htm From thomas at python.org Sat Apr 1 12:23:32 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 1 Apr 2006 12:23:32 +0200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <fb6fbf560603311510p231679bfo461b63fd6ec77b22@mail.gmail.com> References: <fb6fbf560603311510p231679bfo461b63fd6ec77b22@mail.gmail.com> Message-ID: <9e804ac0604010223n37dc60f1le1d91b5e161517a0@mail.gmail.com> On 4/1/06, Jim Jewett <jimjjewett at gmail.com> wrote: > Nick Coghlan > > A simple Boolean attribute (e.g. __finalized__) should be enough. ... > > If it's both present and true, the GC can ignore the finaliser on that > instance > > That doesn't really take care of resource release, which needs to be > called, and called early.(And the name will sound odd if it holds > resources only sometimes, so that it has to flip the __finalized__ > attribute.) Well, I don't want to sound too gross, but any such class could store its resources *in* __finalized__, leaving it an empty container when there is no resource to release. D'oh--not-sounding-gross-failed-ly y'rs, -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060401/4e3ae5ea/attachment.html From thomas at python.org Sat Apr 1 12:31:48 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 1 Apr 2006 12:31:48 +0200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <9e804ac0604010223n37dc60f1le1d91b5e161517a0@mail.gmail.com> References: <fb6fbf560603311510p231679bfo461b63fd6ec77b22@mail.gmail.com> <9e804ac0604010223n37dc60f1le1d91b5e161517a0@mail.gmail.com> Message-ID: <9e804ac0604010231i5b253d8cxe56be4c2f57b8d31@mail.gmail.com> On 4/1/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/1/06, Jim Jewett <jimjjewett at gmail.com> wrote: > > > Nick Coghlan > > > A simple Boolean attribute (e.g. __finalized__) should be enough. ... > > > If it's both present and true, the GC can ignore the finaliser on that > > instance > > > > That doesn't really take care of resource release, which needs to be > > called, and called early.(And the name will sound odd if it holds > > resources only sometimes, so that it has to flip the __finalized__ > > attribute.) > > > Well, I don't want to sound too gross, but any such class could store its > resources *in* __finalized__, leaving it an empty container when there is no > resource to release. > Eh, that would mean the attribute would have to be called '__notfinalized__' of course ;) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060401/12b77b4d/attachment.htm From mwh at python.net Sat Apr 1 15:54:25 2006 From: mwh at python.net (Michael Hudson) Date: Sat, 01 Apr 2006 14:54:25 +0100 Subject: [Python-Dev] refleaks in 2.4 In-Reply-To: <20060327145824.GA23444@code0.codespeak.net> (Armin Rigo's message of "Mon, 27 Mar 2006 16:58:24 +0200") References: <ee2a432c0603262339s1686123jd06e6470b8551057@mail.gmail.com> <20060327145824.GA23444@code0.codespeak.net> Message-ID: <2mfykxv1ta.fsf@starship.python.net> Armin Rigo <arigo at tunes.org> writes: > Hi Neal, > > On Sun, Mar 26, 2006 at 11:39:50PM -0800, Neal Norwitz wrote: >> test_pkg leaked [10, 10, 10] references > > This one at least appears to be caused by dummy (deleted) entries in the > dictionary of interned strings. So it is not really a leak. It's actually because somewhere in the bowels of compilation, the file name being compiled gets interned and test_pkg writes out some temporary files and imports them. If this doesn't happen on the trunk, did this feature get lost somewhere? > It is a pain that it is so hard to figure this out, though. > Wouldn't it make sense to find a trick to exclude these dummy > entries from the total reference count? E.g. by subtracting the > refcount of the dummy object... Something like that would be nice, yes... Cheers, mwh -- GET *BONK* BACK *BONK* IN *BONK* THERE *BONK* -- Naich using the troll hammer in cam.misc From mwh at python.net Sat Apr 1 15:59:55 2006 From: mwh at python.net (Michael Hudson) Date: Sat, 01 Apr 2006 14:59:55 +0100 Subject: [Python-Dev] improving quality In-Reply-To: <7790b6530603280739y423098a1v8c83c2558f7fdd3c@mail.gmail.com> (Chris AtLee's message of "Tue, 28 Mar 2006 10:39:18 -0500") References: <ee2a432c0603272253r196186b8k81ff85f4f268b6b6@mail.gmail.com> <7790b6530603280739y423098a1v8c83c2558f7fdd3c@mail.gmail.com> Message-ID: <2mbqvlv1k4.fsf@starship.python.net> "Chris AtLee" <chris at atlee.ca> writes: > On 3/28/06, Neal Norwitz <nnorwitz at gmail.com> wrote: >> We've made a lot of improvement with testing over the years. >> Recently, we've gotten even more serious with the buildbot, Coverity, >> and coverage (http://coverage.livinglogic.de). However, in order to >> improve quality even further, we need to do a little more work. This >> is especially important with the upcoming 2.5. Python 2.5 is the most >> fundamental set of changes to Python since 2.2. If we're to make this >> release work, we need to be very careful about it. > > This reminds me of something I've been wanting to ask for a while: > does anybody run python through valgrind on a regular basis? I've > noticed that valgrind complains a lot about invalid reads in > PyObject_Free. I know that valgrind can warn about things that turn > out not to be problems, but would generating a suppresion file and > running all or part of the test suite through valgrind on the > buildbots be useful? Have you read Misc/README.valgrind? I don't know if anyone runs Python under valgrind regularly though. Cheers, mwh -- The ultimate laziness is not using Perl. That saves you so much work you wouldn't believe it if you had never tried it. 
-- Erik Naggum, comp.lang.lisp From arigo at tunes.org Sat Apr 1 17:33:41 2006 From: arigo at tunes.org (Armin Rigo) Date: Sat, 1 Apr 2006 17:33:41 +0200 Subject: [Python-Dev] refleaks in 2.4 In-Reply-To: <2mfykxv1ta.fsf@starship.python.net> References: <ee2a432c0603262339s1686123jd06e6470b8551057@mail.gmail.com> <20060327145824.GA23444@code0.codespeak.net> <2mfykxv1ta.fsf@starship.python.net> Message-ID: <20060401153341.GA22522@code0.codespeak.net> Hi Michael, On Sat, Apr 01, 2006 at 02:54:25PM +0100, Michael Hudson wrote: > It's actually because somewhere in the bowels of compilation, the file > name being compiled gets interned and test_pkg writes out some > temporary files and imports them. If this doesn't happen on the > trunk, did this feature get lost somewhere? I guess it's highly non-deterministic. If the new strings happen to take a previously-dummy entry of the interned strings dict, then after they die the entry is dummy again and we don't have an extra refcount. But if they take a fresh entry, then the dummy they become afterwards counts for one ref. A bientot, Armin. From jjl at pobox.com Sat Apr 1 18:02:33 2006 From: jjl at pobox.com (John J Lee) Date: Sat, 1 Apr 2006 16:02:33 +0000 (UTC) Subject: [Python-Dev] Name for python package repository In-Reply-To: <442B36AE.4040508@canterbury.ac.nz> References: <442B36AE.4040508@canterbury.ac.nz> Message-ID: <Pine.LNX.4.64.0604011601060.8333@alice> On Thu, 30 Mar 2006, Greg Ewing wrote: > I just thought of a possible name for the > Python package repository. We could call > it the PIPE - Python Index of Packages > and Extensions. +1 John From jeremy at alum.mit.edu Sat Apr 1 19:17:27 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Sat, 1 Apr 2006 12:17:27 -0500 Subject: [Python-Dev] line numbers, pass statements, implicit returns Message-ID: <e8bf7a530604010917gf0200aen98a26daa1257622e@mail.gmail.com> There are several test cases in test_trace that are commented out. We did this when we merged the ast-branch and promised to come back to them. I'm coming back to them now, but the test aren't documented well and the feature they test isn't specified well. The failing tests I've looked at so far involving pass statements and implicit "return None" statements generated by the compiler. The tests seem to verify that 1) if you have a function that ends with an if/else where the body of the else is pass, there is no line number associated with the return 2) if you have a function that ends with a try/except where the body of the except is pass, there is a line number associated with the return. Here's a failing example def ireturn_example(): a = 5 b = 5 if a == b: b = a+1 else: pass The code is traced and test_trace verifies that the return is associated with line 4! In these cases, the ast compiler will always associate a line number with the return. (Technically with the LOAD_CONST preceding the RETURN_VALUE.) This happens pretty much be accident. It always associates a line number with the first opcode generated after a new statement is visited. Since a Pass statement node has no opcode, the return generates the first opcode. Now I could add some special cases to the compiler to preserve the old behavior, the question is: Why bother? It's such an unlikely case (an else that has no effect). Does anyone depend on the current behavior for the ireturn_example()? It seems sensible to me to always generate a line number for the pass + return case, just so you see the control flow as you step through the debugger. 
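For anyone who wants to see where the return gets charged, dis shows the mapping directly:

    import dis

    def ireturn_example():
        a = 5
        b = 5
        if a == b:
            b = a+1
        else:
            pass

    dis.dis(ireturn_example)
    # The leftmost column of the output is the source line number;
    # under the ast compiler the trailing LOAD_CONST (None) and
    # RETURN_VALUE now get a line number of their own.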
The other case that has changed is that the new compiler does not generate code for "while 0:" I don't remember why <0.5 wink>. There are several test cases that verify line numbers for code using this kind of bogus construct. There are no lines anymore, so I would change the tests so that they don't expect the lines in question. But I have no idea what they are trying to test. Does anyone know? Jeremy From tjreedy at udel.edu Sat Apr 1 21:28:45 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 1 Apr 2006 14:28:45 -0500 Subject: [Python-Dev] gmane.comp.python.devel.3000 has disappeared References: <e0k2h5$vo8$1@sea.gmane.org><17453.46094.197153.725065@montanaro.dyndns.org> <4c45c1530604010119j377a80e2xdc0dd8cf64cfaff5@mail.gmail.com> Message-ID: <e0mk9e$13i$1@sea.gmane.org> > Yes Gmane is subscribed. > I checked if there is a pydev-3000 newsgroup on there server. I found the renamed group. Prefered the original name since it sorted just after this one in the subscribed groups list. tjr From nnorwitz at gmail.com Sat Apr 1 22:23:09 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 1 Apr 2006 12:23:09 -0800 Subject: [Python-Dev] improving quality In-Reply-To: <2mbqvlv1k4.fsf@starship.python.net> References: <ee2a432c0603272253r196186b8k81ff85f4f268b6b6@mail.gmail.com> <7790b6530603280739y423098a1v8c83c2558f7fdd3c@mail.gmail.com> <2mbqvlv1k4.fsf@starship.python.net> Message-ID: <ee2a432c0604011223j3f23b339u67d39aebcf1783@mail.gmail.com> On 4/1/06, Michael Hudson <mwh at python.net> wrote: > > I don't know if anyone runs Python under valgrind regularly though. I do for some definition of "regularly". It would be better to setup a cron job to truly run it regularly, perhaps once a month. It should run on both HEAD and supported release(s) (ie, 2.4 currently). I've heard of others using valgrind, I have no idea how many people and how often though. n From noamraph at gmail.com Sat Apr 1 22:32:19 2006 From: noamraph at gmail.com (Noam Raphael) Date: Sat, 1 Apr 2006 23:32:19 +0300 Subject: [Python-Dev] Saving the hash value of tuples Message-ID: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> Hello, I've found out that the hash value of tuples isn't saved after it's calculated. With strings it's different: the hash value of a string is calculated only on the first call to hash(string), and saved in the structure for future use. Saving the value makes dict lookup of tuples an operation with an amortized cost of O(1). Saving the hash value means that if an item of the tuple changes its hash value, the hash value of the tuple won't be changed. I think it's ok, since: 1. Hash value of things shouldn't change. 2. Dicts assume that too. I tried the change, and it turned out that I had to change cPickle a tiny bit: it uses a 2-tuple which is allocated when the module initializes to lookup tuples in a dict. I changed it to properly use PyTuple_New and Py_DECREF, and now the complete test suite passes. I run test_cpickle before the change and after it, and it took the same time (0.89 seconds on my computer). What do you think? I see three possibilities: 1. Nothing should be done, everything is as it should be. 2. The cPickle module should be changed to not abuse the tuple, but there's no reason to add an extra word to the tuple structure and break binary backwards compatibility. 3. Both should be changed. I will be happy to send a patch, if someone shows interest. 
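The caching idea itself is easy to sketch in pure Python (this wrapper is only an illustration of the behaviour, not the proposed C change, which stores the value in the tuple structure itself):

    class cached_hash_tuple(tuple):
        """Compute the hash once, on first use, the way str does."""
        def __hash__(self):
            try:
                return self._hash
            except AttributeError:
                h = self._hash = tuple.__hash__(self)
                return h

    t = cached_hash_tuple((1, 2, 3))
    hash(t)   # computed and remembered
    hash(t)   # answered from the cache, no recomputation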
Have a good day, Noam From aahz at pythoncraft.com Sat Apr 1 22:37:01 2006 From: aahz at pythoncraft.com (Aahz) Date: Sat, 1 Apr 2006 12:37:01 -0800 Subject: [Python-Dev] Saving the hash value of tuples In-Reply-To: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> Message-ID: <20060401203701.GB13210@panix.com> On Sat, Apr 01, 2006, Noam Raphael wrote: > > I've found out that the hash value of tuples isn't saved after it's > calculated. With strings it's different: the hash value of a string is > calculated only on the first call to hash(string), and saved in the > structure for future use. Saving the value makes dict lookup of tuples > an operation with an amortized cost of O(1). > [...] > I will be happy to send a patch, if someone shows interest. Regardless of whether anyone shows interest, please submit a patch! Then post the URL back here. That way if someone gets interested in the future, your code is still available. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From noamraph at gmail.com Sat Apr 1 23:01:44 2006 From: noamraph at gmail.com (Noam Raphael) Date: Sun, 2 Apr 2006 00:01:44 +0300 Subject: [Python-Dev] Saving the hash value of tuples In-Reply-To: <20060401203701.GB13210@panix.com> References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> <20060401203701.GB13210@panix.com> Message-ID: <b348a0850604011301s5b9548acte81ebb9133b131a4@mail.gmail.com> Ok, I uploaded it. Patch no. 1462796: https://sourceforge.net/tracker/index.php?func=detail&aid=1462796&group_id=5470&atid=305470 On 4/1/06, Aahz <aahz at pythoncraft.com> wrote: > On Sat, Apr 01, 2006, Noam Raphael wrote: > > > > I've found out that the hash value of tuples isn't saved after it's > > calculated. With strings it's different: the hash value of a string is > > calculated only on the first call to hash(string), and saved in the > > structure for future use. Saving the value makes dict lookup of tuples > > an operation with an amortized cost of O(1). > > [...] > > I will be happy to send a patch, if someone shows interest. > > Regardless of whether anyone shows interest, please submit a patch! Then > post the URL back here. That way if someone gets interested in the > future, your code is still available. > -- > Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ > > "Look, it's your affair if you want to play with five people, but don't > go calling it doubles." --John Cleese anticipates Usenet > From tim.peters at gmail.com Sun Apr 2 00:43:41 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 1 Apr 2006 17:43:41 -0500 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <20060401204023.804801E4006@bag.python.org> References: <20060401204023.804801E4006@bag.python.org> Message-ID: <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> > Author: walter.doerwald > Date: Sat Apr 1 22:40:23 2006 > New Revision: 43545 > > Modified: > python/trunk/Doc/lib/libcalendar.tex > python/trunk/Lib/calendar.py > Log: > Make firstweekday a simple attribute instead > of hiding it behind a setter and a getter. Walter, what's the purpose of this patch? 
The first weekday is still an attribute, but instead of being settable and gettable via methods, looks like it's now settable and gettable via module-global functions, and only for the single default instance of Calendar created by the module. If so, then (a) the functionality of the Calendar class is weaker now, and in a backward-incompatible way; and, (b) the new module-global firstweekday() and setfirstweekday() functions are a more obscure way to restore the lost functionality for just one specific instance of a Calendar subclass. I don't see the attraction to any part of this. > --- python/trunk/Lib/calendar.py (original) > +++ python/trunk/Lib/calendar.py Sat Apr 1 22:40:23 2006 > @@ -128,25 +128,14 @@ > """ > > def __init__(self, firstweekday=0): > - self._firstweekday = firstweekday # 0 = Monday, 6 = Sunday > - > - def firstweekday(self): > - return self._firstweekday > - > - def setfirstweekday(self, weekday): > - """ > - Set weekday (Monday=0, Sunday=6) to start each week. > - """ > - if not MONDAY <= weekday <= SUNDAY: > - raise IllegalWeekdayError(weekday) > - self._firstweekday = weekday > + self.firstweekday = firstweekday # 0 = Monday, 6 = Sunday Removing those Calendar methods is backward-incompatible, ... > -firstweekday = c.firstweekday > -setfirstweekday = c.setfirstweekday > +def firstweekday(): > + return c.firstweekday > + > +def setfirstweekday(firstweekday): > + if not MONDAY <= firstweekday <= SUNDAY: > + raise IllegalWeekdayError(firstweekday) > + c.firstweekday = firstweekday > + > monthcalendar = c.monthdayscalendar > prweek = c.prweek And here they're obscurely added back again, but work only for the module-global default instance `c` of the TextCalendar subclass. From brett at python.org Sun Apr 2 01:22:22 2006 From: brett at python.org (Brett Cannon) Date: Sat, 1 Apr 2006 15:22:22 -0800 Subject: [Python-Dev] PEP to list externally maintained modules and where to report bugs? Message-ID: <bbaeab100604011522j546f8d30i37f09c4190b06c3b@mail.gmail.com> I reported some warnings I was getting for ctypes the other day and Martin said I should report it to ctypes. I now get a warning for sqlite on OS X 10.4 about INT32_MIN being redefined (I have stdint.h on my machine and that macro is being redefined in Modules/_sqlite/cursor.c instead of using the C99 version). Anyone else think we need a PEP to point to places where externally maintained code should have bugs or patches reported? I don't want to hunt down a URL for where to do this every time and so it would be nice to have a list of what code needs bugs/patches reported where. Otherwise I am prone to just check my changes into the tree and not get them reported upstream since I want the warnings to go away. 
=) -Brett From brett at python.org Sun Apr 2 01:43:10 2006 From: brett at python.org (Brett Cannon) Date: Sat, 1 Apr 2006 15:43:10 -0800 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X Message-ID: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> I think these are all Tim's fault =) : Objects/object.c: In function '_Py_NegativeRefcount': Objects/object.c:144: warning: format '%d' expects type 'int', but argument 7 has type 'Py_ssize_t' Objects/stringobject.c: In function 'PyString_FromFormatV': Objects/stringobject.c:278: warning: format '%u' expects type 'unsigned int', but argument 3 has type 'size_t' Python/pythonrun.c: In function 'Py_Finalize': Python/pythonrun.c:393: warning: format '%d' expects type 'int', but argument 3 has type 'Py_ssize_t' Python/pythonrun.c: In function 'PyRun_InteractiveLoopFlags': Python/pythonrun.c:683: warning: format '%d' expects type 'int', but argument 3 has type 'Py_ssize_t' Modules/gcmodule.c: In function 'collect': Modules/gcmodule.c:746: warning: format '%d' expects type 'int', but argument 2 has type 'Py_ssize_t' Modules/gcmodule.c:841: warning: format '%d' expects type 'int', but argument 2 has type 'Py_ssize_t' Modules/gcmodule.c:841: warning: format '%d' expects type 'int', but argument 3 has type 'Py_ssize_t' What's odd about them is that that the use of PY_FORMAT_SIZE_T seems to be correct, so I don't know what the problem is. Unfortunately ``gcc -E Include/pyport.h`` is no help since PY_FORMAT_SIZE_T is not showing up defined (possibly because gcc says it can't find pyconfig.h). -Brett From tim.peters at gmail.com Sun Apr 2 01:59:16 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 1 Apr 2006 18:59:16 -0500 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> Message-ID: <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> [Brett Cannon] > I think these are all Tim's fault =) : No, they're Anthony's fault :-) He added this clause to pyport.h yesterday: # if SIZEOF_SIZE_T == SIZEOF_INT # define PY_FORMAT_SIZE_T "" and that's obviously triggering on your platform. He added this (at my suggestion) to shut up similar gcc warnings on some other box. Before he added it, PY_FORMAT_SIZE_T was "l" on that box (and on yours) If it had remained "l", you wouldn't be getting warnings now, but he still would. On his box, the gripes where about passing a size_t argument to a "%lu" format, but size_t resolved to unsigned int, not to unsigned long. sizeof(int) == sizeof(long) on that box (and on yours), so the warning isn't really helpful. > Objects/object.c: In function '_Py_NegativeRefcount': > Objects/object.c:144: warning: format '%d' expects type 'int', but > argument 7 has type 'Py_ssize_t' And I bet (you should check) that Py_ssize_t resolves to "long" on your box, and that sizeof(long) == sizeof(int) on your box, so that again the warning is just a damned nuisance. > ... [similar complaints] ... > What's odd about them is that that the use of PY_FORMAT_SIZE_T seems > to be correct, so I don't know what the problem is. My best guess is above. gcc appears to be generating type complaints based on "name equivalence" rather than "structure equivalence" here (meaning it's comparing integer types by the names they resolve to rather than to the combination of size and signedness regardless of name). 
Worming around that would probably require a bunch of poke-and-hope experimenting with various gcc's, of which I have none :-) > Unfortunately ``gcc -E Include/pyport.h`` is no help since PY_FORMAT_SIZE_T > is not showing up defined (possibly because gcc says it can't find pyconfig.h). You can run on it this part in isolation: #ifndef PY_FORMAT_SIZE_T # if SIZEOF_SIZE_T == SIZEOF_INT # define PY_FORMAT_SIZE_T "" # elif SIZEOF_SIZE_T == SIZEOF_LONG # define PY_FORMAT_SIZE_T "l" # elif defined(MS_WINDOWS) # define PY_FORMAT_SIZE_T "I" # else # error "This platform's pyconfig.h needs to define PY_FORMAT_SIZE_T" # endif #endif after sticking on the right expansions for the SIZEOF_xyz thingies -- but it's clear enough from the error messages that the SIZEOF_INT branch is triggering (it's complaining about format '%d', not format '%ld' or format '%Id'). From brett at python.org Sun Apr 2 02:40:56 2006 From: brett at python.org (Brett Cannon) Date: Sat, 1 Apr 2006 16:40:56 -0800 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> Message-ID: <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> On 4/1/06, Tim Peters <tim.peters at gmail.com> wrote: > [Brett Cannon] > > I think these are all Tim's fault =) : > > No, they're Anthony's fault :-) He added this clause to pyport.h yesterday: > > # if SIZEOF_SIZE_T == SIZEOF_INT > # define PY_FORMAT_SIZE_T "" > > and that's obviously triggering on your platform. He added this (at > my suggestion) to shut up similar gcc warnings on some other box. > Before he added it, PY_FORMAT_SIZE_T was "l" on that box (and on > yours) If it had remained "l", you wouldn't be getting warnings now, > but he still would. > Great, so either Anthony gets warnings or I do. > On his box, the gripes where about passing a size_t argument to a > "%lu" format, but size_t resolved to unsigned int, not to unsigned > long. sizeof(int) == sizeof(long) on that box (and on yours), so the > warning isn't really helpful. > Well that figures. > > Objects/object.c: In function '_Py_NegativeRefcount': > > Objects/object.c:144: warning: format '%d' expects type 'int', but > > argument 7 has type 'Py_ssize_t' > > And I bet (you should check) that Py_ssize_t resolves to "long" on > your box, and that sizeof(long) == sizeof(int) on your box, so that > again the warning is just a damned nuisance. > > > ... [similar complaints] ... > > > What's odd about them is that that the use of PY_FORMAT_SIZE_T seems > > to be correct, so I don't know what the problem is. > > My best guess is above. gcc appears to be generating type complaints > based on "name equivalence" rather than "structure equivalence" here > (meaning it's comparing integer types by the names they resolve to > rather than to the combination of size and signedness regardless of > name). Worming around that would probably require a bunch of > poke-and-hope experimenting with various gcc's, of which I have none > :-) This is just so ridiculous. Is there even a way to do this reasonably? Would we have to detect when Py_ssize_t resolves to either int or long and when the size of both is the same force to the same name on all platforms? > > > Unfortunately ``gcc -E Include/pyport.h`` is no help since PY_FORMAT_SIZE_T > > is not showing up defined (possibly because gcc says it can't find pyconfig.h). 
> > You can run on it this part in isolation: > > #ifndef PY_FORMAT_SIZE_T > # if SIZEOF_SIZE_T == SIZEOF_INT > # define PY_FORMAT_SIZE_T "" > # elif SIZEOF_SIZE_T == SIZEOF_LONG > # define PY_FORMAT_SIZE_T "l" > # elif defined(MS_WINDOWS) > # define PY_FORMAT_SIZE_T "I" > # else > # error "This platform's pyconfig.h needs to define PY_FORMAT_SIZE_T" > # endif > #endif > > after sticking on the right expansions for the SIZEOF_xyz thingies -- > but it's clear enough from the error messages that the SIZEOF_INT > branch is triggering (it's complaining about format '%d', not format > '%ld' or format '%Id'). > OK, so how should we solve this one? Or should we just ignore it and hope gcc eventually wises up and starting using structural equivalence for its printf checks? =) -Brett From tim.peters at gmail.com Sun Apr 2 03:22:24 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 1 Apr 2006 20:22:24 -0500 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> Message-ID: <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> [Brett Cannon] > ... > This is just so ridiculous. Ya think ;-)? > Is there even a way to do this reasonably? Not really in C89. That's why C99 introduced the "z" printf modifier, and approximately a billion ;-) format macros like PY_FORMAT_SIZE_T (since there's almost nothing portably useful you can say about standard C's billion names for "some kind of integer"). In effect, the PY_FORMAT_SIZE_T case was important enough that C99 moved it directly into the printf format syntax. > Would we have to detect when Py_ssize_t resolves to either int or long It's worse that that: there's no guarantee that sizeof(Py_ssize_t) <= sizeof(long), and it fact it's not on Win64. But Windows is already taken care of here. > OK, so how should we solve this one? Or should we just ignore it and > hope gcc eventually wises up and starting using structural equivalence > for its printf checks? =) For gcc we _could_ solve it in the obvious way, which I guess Martin was hoping to avoid: change Unixish config to detect whether the platform C supports the "z" format modifier (I believe gcc does), and if so arrange to stick #define PY_FORMAT_SIZE_T "z" in the generated pyconfig.h. Note that if pyconfig.h defines PY_FORMAT_SIZE_T, pyport.h believes whatever that says. It's the purpose of "z" to format size_t-ish arguments, so gcc complaints should end then. From brett at python.org Sun Apr 2 03:26:38 2006 From: brett at python.org (Brett Cannon) Date: Sat, 1 Apr 2006 17:26:38 -0800 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> Message-ID: <bbaeab100604011726t62e9164ahc64caea969b93ab6@mail.gmail.com> On 4/1/06, Tim Peters <tim.peters at gmail.com> wrote: > [Brett Cannon] > > ... > > This is just so ridiculous. > > Ya think ;-)? > > > Is there even a way to do this reasonably? > > Not really in C89. 
That's why C99 introduced the "z" printf modifier, > and approximately a billion ;-) format macros like PY_FORMAT_SIZE_T > (since there's almost nothing portably useful you can say about > standard C's billion names for "some kind of integer"). In effect, > the PY_FORMAT_SIZE_T case was important enough that C99 moved it > directly into the printf format syntax. > > > Would we have to detect when Py_ssize_t resolves to either int or long > > It's worse that that: there's no guarantee that sizeof(Py_ssize_t) <= > sizeof(long), and it fact it's not on Win64. But Windows is already > taken care of here. > > > OK, so how should we solve this one? Or should we just ignore it and > > hope gcc eventually wises up and starting using structural equivalence > > for its printf checks? =) > > For gcc we _could_ solve it in the obvious way, which I guess Martin > was hoping to avoid: change Unixish config to detect whether the > platform C supports the "z" format modifier (I believe gcc does), and > if so arrange to stick > > #define PY_FORMAT_SIZE_T "z" > > in the generated pyconfig.h. Note that if pyconfig.h defines > PY_FORMAT_SIZE_T, pyport.h believes whatever that says. It's the > purpose of "z" to format size_t-ish arguments, so gcc complaints > should end then. > I vote we move to C99. =) If that doesn't happen, I guess the above solution is the best. I am probably not the best person to tweak the Makefile for this, but I can if no one gets to it. -Brett From anthony at interlink.com.au Sun Apr 2 06:17:30 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Sun, 2 Apr 2006 15:17:30 +1100 Subject: [Python-Dev] Firefox searchbar engine for Python bugs Message-ID: <200604021417.33521.anthony@interlink.com.au> I've created a searchbar plugin for the firefox search bar that allows you to search bugs. I think someone created one for the sidebar, but this works in the searchbar at the top of the window. I gave up trying to knit the files into the new website builder, and so it can be found here: http://www.python.org/~anthony/searchbar/ If you can think of other useful searchbar plugins (Python Docs, maybe?) let me know and I'll look at creating them. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Sun Apr 2 06:23:39 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Sun, 2 Apr 2006 15:23:39 +1100 Subject: [Python-Dev] Firefox searchbar engine for Python bugs In-Reply-To: <200604021417.33521.anthony@interlink.com.au> References: <200604021417.33521.anthony@interlink.com.au> Message-ID: <200604021423.41185.anthony@interlink.com.au> On Sunday 02 April 2006 14:17, Anthony Baxter wrote: > I've created a searchbar plugin for the firefox search bar that > allows you to search bugs. I should clarify - it allows you to pull up a bug by bug ID, using the www.python.org/sf/ redirector. From guido at python.org Sun Apr 2 08:18:17 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 1 Apr 2006 22:18:17 -0800 Subject: [Python-Dev] line numbers, pass statements, implicit returns In-Reply-To: <e8bf7a530604010917gf0200aen98a26daa1257622e@mail.gmail.com> References: <e8bf7a530604010917gf0200aen98a26daa1257622e@mail.gmail.com> Message-ID: <ca471dc20604012218n6c34b2fau7dd070a60481bc1a@mail.gmail.com> On 4/1/06, Jeremy Hylton <jeremy at alum.mit.edu> wrote: > There are several test cases in test_trace that are commented out. We > did this when we merged the ast-branch and promised to come back to > them. 
I'm coming back to them now, but the test aren't documented > well and the feature they test isn't specified well. > > The failing tests I've looked at so far involving pass statements and > implicit "return None" statements generated by the compiler. The > tests seem to verify that > > 1) if you have a function that ends with an if/else where the body > of the else is pass, > there is no line number associated with the return > 2) if you have a function that ends with a try/except where the > body of the except is pass, > there is a line number associated with the return. > > Here's a failing example > > def ireturn_example(): > a = 5 > b = 5 > if a == b: > b = a+1 > else: > pass > > The code is traced and test_trace verifies that the return is > associated with line 4! > > In these cases, the ast compiler will always associate a line number > with the return. (Technically with the LOAD_CONST preceding the > RETURN_VALUE.) This happens pretty much be accident. It always > associates a line number with the first opcode generated after a new > statement is visited. Since a Pass statement node has no opcode, the > return generates the first opcode. > > Now I could add some special cases to the compiler to preserve the old > behavior, the question is: Why bother? It's such an unlikely case (an > else that has no effect). Does anyone depend on the current behavior > for the ireturn_example()? It seems sensible to me to always generate > a line number for the pass + return case, just so you see the control > flow as you step through the debugger. Makes sense to me. I can't imagine what this was testing except perhaps a corner case in the algorithm for generating the (insanely complicated) linenumber mapping table. > The other case that has changed is that the new compiler does not > generate code for "while 0:" I don't remember why <0.5 wink>. There > are several test cases that verify line numbers for code using this > kind of bogus construct. There are no lines anymore, so I would > change the tests so that they don't expect the lines in question. But > I have no idea what they are trying to test. Does anyone know? Not me. This is definitely not part of the language spec! :-) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Sun Apr 2 08:29:36 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 02 Apr 2006 08:29:36 +0200 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> Message-ID: <442F6F50.2030002@v.loewis.de> Tim Peters wrote: > For gcc we _could_ solve it in the obvious way, which I guess Martin > was hoping to avoid: change Unixish config to detect whether the > platform C supports the "z" format modifier (I believe gcc does), and > if so arrange to stick > > #define PY_FORMAT_SIZE_T "z" It's not gcc to support "z" (except for the compile-time check); it's the C library (on Unix, the C library is part of the system, not part of the compiler). But yes: if we could detect in configure that the C library supports %zd, then we should use that. 
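As a rough illustration of what that check has to establish, here is a runtime probe written with ctypes (a posix-only sketch; the real test would be a small C program compiled and run by configure):

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    buf = ctypes.create_string_buffer(32)
    libc.snprintf(buf, ctypes.c_size_t(len(buf)), "%zu", ctypes.c_size_t(123))
    print buf.value == "123"   # True where the C library knows the "z" modifier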
Regards, Martin From g.brandl at gmx.net Sun Apr 2 08:49:08 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 02 Apr 2006 08:49:08 +0200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> Message-ID: <e0ns54$ug1$1@sea.gmane.org> Tim Peters wrote: >> Author: walter.doerwald >> Date: Sat Apr 1 22:40:23 2006 >> New Revision: 43545 >> >> Modified: >> python/trunk/Doc/lib/libcalendar.tex >> python/trunk/Lib/calendar.py >> Log: >> Make firstweekday a simple attribute instead >> of hiding it behind a setter and a getter. > > Walter, what's the purpose of this patch? The first weekday is still > an attribute, but instead of being settable and gettable via methods, > looks like it's now settable and gettable via module-global functions, > and only for the single default instance of Calendar created by the > module. If so, then (a) the functionality of the Calendar class is > weaker now, and in a backward-incompatible way; and, (b) the new > module-global firstweekday() and setfirstweekday() functions are a > more obscure way to restore the lost functionality for just one > specific instance of a Calendar subclass. > > I don't see the attraction to any part of this. > >> --- python/trunk/Lib/calendar.py (original) >> +++ python/trunk/Lib/calendar.py Sat Apr 1 22:40:23 2006 >> @@ -128,25 +128,14 @@ >> """ >> >> def __init__(self, firstweekday=0): >> - self._firstweekday = firstweekday # 0 = Monday, 6 = Sunday >> - >> - def firstweekday(self): >> - return self._firstweekday >> - >> - def setfirstweekday(self, weekday): >> - """ >> - Set weekday (Monday=0, Sunday=6) to start each week. >> - """ >> - if not MONDAY <= weekday <= SUNDAY: >> - raise IllegalWeekdayError(weekday) >> - self._firstweekday = weekday >> + self.firstweekday = firstweekday # 0 = Monday, 6 = Sunday > > Removing those Calendar methods is backward-incompatible, Isn't it that the Calendar class was just added and therefore is new to 2.5? >> -firstweekday = c.firstweekday >> -setfirstweekday = c.setfirstweekday >> +def firstweekday(): >> + return c.firstweekday >> + >> +def setfirstweekday(firstweekday): >> + if not MONDAY <= firstweekday <= SUNDAY: >> + raise IllegalWeekdayError(firstweekday) >> + c.firstweekday = firstweekday >> + >> monthcalendar = c.monthdayscalendar >> prweek = c.prweek > > And here they're obscurely added back again, but work only for the > module-global default instance `c` of the TextCalendar subclass. Since the functions are part of the traditional calendar API which cannot be broken. Georg From fredrik at pythonware.com Sun Apr 2 10:06:02 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 2 Apr 2006 10:06:02 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned a bug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <9e804ac0603271427p332b5a9cid84380b44aca1a33@mail.gmail.com> <00ff01c6523f$7c583290$bf03030a@trilan> <1143554576.2310.161.camel@geddy.wooz.org> <17449.21764.475497.457592@montanaro.dyndns.org> <44298954.4040705@v.loewis.de> <bbaeab100603282345l260b3a3chfe219408eea3dc69@mail.gmail.com><e0ikk2$n8o$1@sea.gmane.org> <442DB437.6070108@v.loewis.de> Message-ID: <e0o0la$9r6$1@sea.gmane.org> Martin v. Löwis wrote: > That isn't actually worth that much: somebody would need to operate it, > too. 
Mere existence doesn't help. why do you keep repeating this when I've already posted a link to a company that does this for only a few bucks per month ? </F> From walter at livinglogic.de Sun Apr 2 10:24:04 2006 From: walter at livinglogic.de (=?iso-8859-1?Q?Walter_D=F6rwald?=) Date: Sun, 2 Apr 2006 10:24:04 +0200 (CEST) Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> Message-ID: <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> Tim Peters wrote: >> Author: walter.doerwald >> Date: Sat Apr 1 22:40:23 2006 >> New Revision: 43545 >> >> Modified: >> python/trunk/Doc/lib/libcalendar.tex >> python/trunk/Lib/calendar.py >> Log: >> Make firstweekday a simple attribute instead >> of hiding it behind a setter and a getter. > > Walter, what's the purpose of this patch? The first weekday is still an attribute, but instead of being settable and > gettable via methods, looks like it's now settable and gettable via module-global functions, and only for the single default > instance of Calendar created by the module. This is because in 2.4 there were no Calendar objects and firstweekday was only settable and gettable via module-level functions. > If so, then (a) the functionality of the Calendar class is weaker now, Not really. firstweekday is changeable simply by assigning to the attribute: import calendar cal = calendar.Calendar() cal.firstweekday = 6 The only thing lost is the range check in the setter. > and in a > backward-incompatible way; Yes, this change is backwards-incompatible, but incompatible only with code that has been in the repository since Friday, so this shouldn't be a problem! ;) > and, > (b) the new > module-global firstweekday() and setfirstweekday() functions are a more obscure way to restore the lost functionality for > just one > specific instance of a Calendar subclass. That's because in 2.4 the module-level interface was all there was. > I don't see the attraction to any part of this. Simple attribute access looks much more Pythonic to me than setters and getters (especially as the attributes of subclasses are simple attributes). Or are you talking about the Calendar class itself? > [...] Bye, Walter Dörwald From ncoghlan at gmail.com Sun Apr 2 12:21:14 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 02 Apr 2006 20:21:14 +1000 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> Message-ID: <442FA59A.9000005@gmail.com> Walter Dörwald wrote: > firstweekday is changeable simply by assigning to the attribute: > > import calendar > cal = calendar.Calendar() > cal.firstweekday = 6 > > The only thing lost is the range check in the setter. Any particular reason for not making it a property? Then you could keep the range check while still manipulating it as if it was a simple attribute. Cheers, Nick.
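P.S. Something like this is what I mean -- just a sketch of the idea, not a patch against calendar.py, and the MONDAY/SUNDAY/IllegalWeekdayError names below are stand-ins for the ones the module already defines:

MONDAY, SUNDAY = 0, 6               # mirrors the constants in calendar.py

class IllegalWeekdayError(ValueError):
    """Stand-in for the exception calendar.py already defines."""

class Calendar(object):
    def __init__(self, firstweekday=0):
        self.firstweekday = firstweekday    # goes through the property setter

    def _get_firstweekday(self):
        return self._firstweekday

    def _set_firstweekday(self, weekday):
        # keep the old range check, but behind plain attribute syntax
        if not MONDAY <= weekday <= SUNDAY:
            raise IllegalWeekdayError(weekday)
        self._firstweekday = weekday

    firstweekday = property(_get_firstweekday, _set_firstweekday)

cal = Calendar()
cal.firstweekday = 6      # fine
cal.firstweekday = 8      # raises IllegalWeekdayError

That way the constructor would get the sanity check for free as well.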
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From martin at v.loewis.de Sun Apr 2 14:58:58 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 02 Apr 2006 14:58:58 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned a bug/patch In-Reply-To: <e0o0la$9r6$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <9e804ac0603271427p332b5a9cid84380b44aca1a33@mail.gmail.com> <00ff01c6523f$7c583290$bf03030a@trilan> <1143554576.2310.161.camel@geddy.wooz.org> <17449.21764.475497.457592@montanaro.dyndns.org> <44298954.4040705@v.loewis.de> <bbaeab100603282345l260b3a3chfe219408eea3dc69@mail.gmail.com><e0ikk2$n8o$1@sea.gmane.org> <442DB437.6070108@v.loewis.de> <e0o0la$9r6$1@sea.gmane.org> Message-ID: <442FCA92.1020707@v.loewis.de> Fredrik Lundh wrote: >> That isn't actually worth that much: somebody would need to operate it, >> too. Mere existence doesn't help. > > why do you keep repeating this when I've already posted a link to a > company that does this for only a few bucks per month ? Because they don't do that. They won't import the Python SF data on their own: somebody has to tell them. Even then, they won't do that: somebody has to provide them with the data (if for no other reason that SF gives access only to project admins). If you are (still) talking about python-hosting.com: where on their website do they say that they will import SF data into trac when asked to? In short: somebody has to take charge, and make sure the thing is available. Maybe it is as simple as filing a support request, but somebody *still* has to do that (or else they won't guess that something is broken). I haven't heard anybody volunteering to do this specific job, working with this specific company. I don't volunteer to do that (I work with SF on issues with their tracker, but only because nobody else does, and because I believe these things need to be done). My impression of of python-hosting is that they provide the machine, and the initial setup. Then you are mostly on your own. Regards, Martin From pj at place.org Sat Apr 1 15:48:13 2006 From: pj at place.org (Paul Jimenez) Date: Sat, 01 Apr 2006 07:48:13 -0600 Subject: [Python-Dev] New uriparse.py Message-ID: <20060401134813.721B082CA@place.org> Announcing uriparse.py, submitted for inclusion in the standard library. Patch request 1462525. Per the original discussion at http://mail.python.org/pipermail/python-dev/2005-November/058301.html I'm submitting a library meant to deprecate the existing urlparse library. Questions and comments welcome. --pj From p.f.moore at gmail.com Sun Apr 2 16:15:04 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 2 Apr 2006 15:15:04 +0100 Subject: [Python-Dev] Firefox searchbar engine for Python bugs In-Reply-To: <200604021423.41185.anthony@interlink.com.au> References: <200604021417.33521.anthony@interlink.com.au> <200604021423.41185.anthony@interlink.com.au> Message-ID: <79990c6b0604020715w34651995h28d5e4ec1c0fac8e@mail.gmail.com> On 4/2/06, Anthony Baxter <anthony at interlink.com.au> wrote: > On Sunday 02 April 2006 14:17, Anthony Baxter wrote: > > I've created a searchbar plugin for the firefox search bar that > > allows you to search bugs. > > I should clarify - it allows you to pull up a bug by bug ID, using the > www.python.org/sf/ redirector. 
I just use a "Quick Search" item, keyword pysf, URL http://www.python.org/sf. I also have one with keyword "pep" and URL javascript:window.location=%22http://www.python.org/peps/pep-%22+%220000%s%22.slice(-4)+%22.html%22 (the latter is a bit more fiddly to automatically pad a number to 4 digits, so I can write "pep 1" for example). Paul. From fredrik at pythonware.com Sun Apr 2 17:01:51 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 2 Apr 2006 17:01:51 +0200 Subject: [Python-Dev] I'm not getting email from SF when assignedabug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> Message-ID: <e0op11$741$1@sea.gmane.org> Brett Cannon wrote: > > oh, I forgot that the Procrastination & Stop energy Foundation was involved > > in this. > Fredrik, if you would like to help move this all forward, great; I > would appreciate the help. You can write a page scraper to get the > data out of SF if you don't believe SF will cooperate fast enough for > your tastes or I am giving them too long to try to fix things on their > end (I don't expect they will fix things fast enough either, but > because of school ending soon I don't have time right now to start > working on a scraper myself plus I am willing to give them a little > time to try to fix the XML export so we can minimize headaches on our > end). here > > If you would rather contribute by collecting a list of possible > trackers along with who will maintain it, then please do. I am not > going to dive into that quite yet, but if you want to parallelize the > work needed then I would appreciate the help. The tracker will need > to be able to import the SF data somehow (probably will require a > custom tool so the volunteers need to be aware of this), be able to > export data (so we can back it up on a regular basis so we don't have > to go through this again), and an email interface for at least > replying to tracker items. A community-wide announcement will > probably be needed to get a good group of volunteers together for any > one non-commercial tracker. > > But I am not procrastinating. I don't think I have ever come off as a > procrastinator on this list and I don't think I deserve the label. I > am not putting more time in now because I am near the end of term here > at school and thus passing my courses takes precendent over a new bug > tracker. I ended up the chairman at the infrastructure committee > during one of my busiest school terms I have ever had. But I will see > this through and it will be done in a timely manner. 
> > -Brett > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org > From fredrik at pythonware.com Sun Apr 2 17:31:15 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 2 Apr 2006 17:31:15 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned a bug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> Message-ID: <e0oqo6$c4n$1@sea.gmane.org> Brett Cannon wrote: > > oh, I forgot that the Procrastination & Stop energy Foundation was involved > > in this. > > Fredrik, if you would like to help move this all forward, great; I > would appreciate the help. You can write a page scraper to get the > data out of SF challenge accepted ;-) http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ contains three basic tools; getindex to grab index information from a python tracker, getpages to get "raw" xhtml versions of the item pages, and getfiles to get attached files. I'm currently downloading a tracker snapshot that could be useful for testing; it'll take a few more hours before all data are downloaded (provided that SF doesn't ban me, and I don't stumble upon more cases where a certain rhettinger has pasted binary gunk into an iso-8859-1 form ;-). $ python status.py tracker-105470 6681 items 1201 pages (17%) 104 files tracker-305470 3610 items 0 pages (0%) 0 files tracker-355470 430 items 430 pages (100%) 80 files the final step is to finish the "off-line scraper" library (a straightfor- ward ET hack), and make a snapshot archive available to interested parties. (drop me a line if you want a copy) > If you would rather contribute by collecting a list of possible > trackers along with who will maintain it, then please do. I am not > going to dive into that quite yet, but if you want to parallelize the > work needed then I would appreciate the help. that is what I expected the PSF infrastructure committee to do (I hope you're not the only one in that committee?); it's a bit disappointing to hear that we're still stuck on the SF export issue. (wasn't there someone with backchannel access to the SF data ?) > The tracker will need to be able to import the SF data somehow (probably will require a > custom tool so the volunteers need to be aware of this), be able to > export data (so we can back it up on a regular basis so we don't have > to go through this again), and an email interface for at least > replying to tracker items. A community-wide announcement will > probably be needed to get a good group of volunteers together for any > one non-commercial tracker. > But I am not procrastinating. I don't think I have ever come off as a > procrastinator on this list and I don't think I deserve the label. I wasn't talking about individuals, I was referring to the trend where PSF moves something off a public forum, and the work just ends up going nowhere. 
</F> From fredrik at pythonware.com Sun Apr 2 17:49:42 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 2 Apr 2006 17:49:42 +0200 Subject: [Python-Dev] Bug Day on Friday, 31st of March References: <dvmbgd$drs$2@sea.gmane.org> Message-ID: <e0orqk$f8a$1@sea.gmane.org> Georg Brandl wrote: > it's time for the 7th Python Bug Day. The aim of the bug day is to close > as many bugs, patches and feature requests as possible, this time with a > special focus on new features that can still go into the upcoming 2.5 alpha > release. so, how did it go? a status report / summary would be nice, I think ? </F> From martin at v.loewis.de Sun Apr 2 19:02:30 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 02 Apr 2006 19:02:30 +0200 Subject: [Python-Dev] New uriparse.py In-Reply-To: <20060401134813.721B082CA@place.org> References: <20060401134813.721B082CA@place.org> Message-ID: <443003A6.4060509@v.loewis.de> Paul Jimenez wrote: > Announcing uriparse.py, submitted for inclusion in the standard library. > Patch request 1462525. > Per the original discussion at > http://mail.python.org/pipermail/python-dev/2005-November/058301.html > I'm submitting a library meant to deprecate the > existing urlparse library. Questions and comments welcome. My initial reaction was "why is it necessary to break everything"? I then re-read the thread, and found this is all fine. However, this shows (to me) that there is a big lack of rationale in this contribution. You explained it as "urlparse breaks abstractions"; however, this didn't mean anything to me. Saying "urlparse doesn't comply with STD66 (aka RFC3986) because it hard-codes URI schemes, instead of applying the same syntax to all of them" is something I would have understood as a problem. So in short: are you willing to write documentation for this? The rationale section could either go into the urllib documentation (pointing out that particular problem, and referring to urilib as an improvement) Regards, Martin From martin at v.loewis.de Sun Apr 2 19:09:08 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 02 Apr 2006 19:09:08 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned a bug/patch In-Reply-To: <e0oqo6$c4n$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> Message-ID: <44300534.4010002@v.loewis.de> Fredrik Lundh wrote: >> If you would rather contribute by collecting a list of possible >> trackers along with who will maintain it, then please do. I am not >> going to dive into that quite yet, but if you want to parallelize the >> work needed then I would appreciate the help. > > that is what I expected the PSF infrastructure committee to do (I hope > you're not the only one in that committee?); it's a bit disappointing to > hear that we're still stuck on the SF export issue. > > (wasn't there someone with backchannel access to the SF data ?) Yes. We found a way to export all data (except for file attachments), through a different exporter. 
This gives all data, unfortunately, it is ill-formed XML (& is not properly entity-referenced sometimes). Anybody who wants to work with these data, please let me know; I made a snapshot a few days ago. The "backchannel access to SF data" was actually someone different: he experimented with the existing export, confirmed the problem, promised to talk to Paul Moore about that, and referred us to the other XML exporter as a work-around (that one allows to export 500 items at a time). Regards, Martin From fredrik at pythonware.com Sun Apr 2 19:20:16 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 2 Apr 2006 19:20:16 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigneda bug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com><e0oqo6$c4n$1@sea.gmane.org> <44300534.4010002@v.loewis.de> Message-ID: <e0p14h$173$1@sea.gmane.org> Martin v. Löwis wrote: > Yes. We found a way to export all data (except for file attachments), > through a different exporter. This gives all data, unfortunately, it > is ill-formed XML (& is not properly entity-referenced sometimes). so why didn't Brett know about this ? > Anybody who wants to work with these data, please let me know; > I made a snapshot a few days ago. the scraper I wrote in response to Brett's post has successfully down- loaded 80% of the data at this very moment (including attachments), so I'll probably keep playing with that one instead... </F> From martin at v.loewis.de Sun Apr 2 19:36:58 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 02 Apr 2006 19:36:58 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigneda bug/patch In-Reply-To: <e0p14h$173$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com><e0oqo6$c4n$1@sea.gmane.org> <44300534.4010002@v.loewis.de> <e0p14h$173$1@sea.gmane.org> Message-ID: <44300BBA.6050702@v.loewis.de> Fredrik Lundh wrote: >> Yes. We found a way to export all data (except for file attachments), >> through a different exporter. This gives all data, unfortunately, it >> is ill-formed XML (& is not properly entity-referenced sometimes). > > so why didn't Brett know about this ? I'm not sure; I'm sure I mentioned it. 
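If anybody wants to experiment with the dump in the meantime: assuming the ill-formedness really is just the bare ampersands, a quick cleanup pass along these lines (an untested sketch; the function name is made up) should be enough to make it digestible for an ordinary XML parser:

import re

# Escape any "&" that doesn't start an entity or character reference,
# so the exported data can be fed to a normal XML parser.
_bare_amp = re.compile(r'&(?!(?:[A-Za-z][A-Za-z0-9]*|#[0-9]+|#x[0-9A-Fa-f]+);)')

def escape_bare_ampersands(data):
    return _bare_amp.sub('&amp;', data)

if __name__ == '__main__':
    import sys
    sys.stdout.write(escape_bare_ampersands(sys.stdin.read()))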
Regards, Martin From guido at python.org Sun Apr 2 19:38:04 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 2 Apr 2006 10:38:04 -0700 Subject: [Python-Dev] Saving the hash value of tuples In-Reply-To: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> Message-ID: <ca471dc20604021038k4dde46e6tcb03863a555c4dc8@mail.gmail.com> On 4/1/06, Noam Raphael <noamraph at gmail.com> wrote: > I've found out that the hash value of tuples isn't saved after it's > calculated. With strings it's different: the hash value of a string is > calculated only on the first call to hash(string), and saved in the > structure for future use. Saving the value makes dict lookup of tuples > an operation with an amortized cost of O(1). Have you done any measurements to confirm that this makes any difference? I'm not particularly enamored of theoretical optimizations. This one in particular has a definite cost in space which needs to be weighed seriously. > Saving the hash value means that if an item of the tuple changes its > hash value, the hash value of the tuple won't be changed. I think it's > ok, since: > 1. Hash value of things shouldn't change. > 2. Dicts assume that too. Sure. (But you do realize that if a tuple contains a mutable value its hash value raises an exception, right?) > I tried the change, and it turned out that I had to change cPickle a > tiny bit: it uses a 2-tuple which is allocated when the module > initializes to lookup tuples in a dict. I changed it to properly use > PyTuple_New and Py_DECREF, and now the complete test suite passes. I > run test_cpickle before the change and after it, and it took the same > time (0.89 seconds on my computer). Not just cPickle. I believe enumerate() also reuses a tuple. > What do you think? I see three possibilities: > 1. Nothing should be done, everything is as it should be. > 2. The cPickle module should be changed to not abuse the tuple, but > there's no reason to add an extra word to the tuple structure and > break binary backwards compatibility. > 3. Both should be changed. I'm -1 on the change. Tuples are pretty fundamental in Python and hashing them is relatively rare. I think the extra required space for all tuples isn't worth the potential savings for some cases. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From jjl at pobox.com Sun Apr 2 19:55:10 2006 From: jjl at pobox.com (John J Lee) Date: Sun, 2 Apr 2006 17:55:10 +0000 (UTC) Subject: [Python-Dev] New uriparse.py In-Reply-To: <443003A6.4060509@v.loewis.de> References: <20060401134813.721B082CA@place.org> <443003A6.4060509@v.loewis.de> Message-ID: <Pine.LNX.4.64.0604021733480.8353@alice> On Sun, 2 Apr 2006, "Martin v. L?wis" wrote: > Paul Jimenez wrote: >> Announcing uriparse.py, submitted for inclusion in the standard library. >> Patch request 1462525. [...] > abstractions"; however, this didn't mean anything to me. Saying > "urlparse doesn't comply with STD66 (aka RFC3986) because > it hard-codes URI schemes, instead of applying the same > syntax to all of them" is something I would have understood > as a problem. Evidently Paul quickly realised that back at the time of the original thread: hence the lack of posts from Paul protesting at Guido & Mike Brown's explanations, and the appearance now of this nice module :-) > So in short: are you willing to write documentation for this? 
> The rationale section could either go into the urllib documentation > (pointing out that particular problem, and referring to urilib > as an improvement) Currently of course we have both the functions in urllib, plus module urlparse. This module is roughly a replacement for urlparse. Probably if this module is accepted (after a few changes, no doubt) the urllib functions should then be deprecated (which would probably trigger adding a few more functions to the new module). I guess module urlparse would also be deprecated. I have a list of concrete and mostly easily-resolved problems with the module (including not liking the name). I also suspect there are issues related to unicode, %-encoding &c. exist which should be resolved before including this in the stdlib; I won't comment further on that until I've read the relevant RFCs. I've posted detailed comments on the tracker. John From crutcher at gmail.com Sun Apr 2 20:05:42 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Sun, 2 Apr 2006 11:05:42 -0700 Subject: [Python-Dev] String formating in python 3000 Message-ID: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> Python currently supports 'S % X', where S is a strinng, and X is one of: * a sequence * a map * treated as (X,) But I have some questions about this for python 3000. 1. Shouldn't there be a format method, like S.format(), or S.fmt()? 2. What about using __call__ instead of / in addition to __rmod__? * "time: %s"(time.ctime()) == "time: %s" % time.ctime() * "%(a)s %(b)s"(a=1, b=2) == "%(a)s %(b)s" % {'a'=1, 'b'=2} -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From guido at python.org Sun Apr 2 20:10:55 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 2 Apr 2006 11:10:55 -0700 Subject: [Python-Dev] String formating in python 3000 In-Reply-To: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> References: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> Message-ID: <ca471dc20604021110s54963c5hfc9898a24e82a984@mail.gmail.com> Hi Crutcher, We've created a separate list for discussing Python 3000. http://mail.python.org/mailman/listinfo/python-3000 --Guido On 4/2/06, Crutcher Dunnavant <crutcher at gmail.com> wrote: > Python currently supports 'S % X', where S is a strinng, and X is one of: > * a sequence > * a map > * treated as (X,) > > But I have some questions about this for python 3000. > > 1. Shouldn't there be a format method, like S.format(), or S.fmt()? > > 2. What about using __call__ instead of / in addition to __rmod__? 
> * "time: %s"(time.ctime()) == "time: %s" % time.ctime() > * "%(a)s %(b)s"(a=1, b=2) == "%(a)s %(b)s" % {'a'=1, 'b'=2} > > -- > Crutcher Dunnavant <crutcher at gmail.com> > littlelanguages.com > monket.samedi-studios.com > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From aahz at pythoncraft.com Sun Apr 2 20:15:03 2006 From: aahz at pythoncraft.com (Aahz) Date: Sun, 2 Apr 2006 11:15:03 -0700 Subject: [Python-Dev] String formating in python 3000 In-Reply-To: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> References: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> Message-ID: <20060402181503.GA6583@panix.com> On Sun, Apr 02, 2006, Crutcher Dunnavant wrote: > > But I have some questions about this for python 3000. Please use the python-3000 list for questions like this. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From crutcher at gmail.com Sun Apr 2 20:37:27 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Sun, 2 Apr 2006 11:37:27 -0700 Subject: [Python-Dev] String formating in python 3000 In-Reply-To: <20060402181503.GA6583@panix.com> References: <d49fe110604021105r2648464dq56e8c36468ac723b@mail.gmail.com> <20060402181503.GA6583@panix.com> Message-ID: <d49fe110604021137p768e5d0by2f5cde0c4e4b7e8b@mail.gmail.com> Yep, moved this there. On 4/2/06, Aahz <aahz at pythoncraft.com> wrote: > On Sun, Apr 02, 2006, Crutcher Dunnavant wrote: > > > > But I have some questions about this for python 3000. > > Please use the python-3000 list for questions like this. > -- > Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ > > "Look, it's your affair if you want to play with five people, but don't > go calling it doubles." --John Cleese anticipates Usenet > -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From tim.peters at gmail.com Sun Apr 2 20:57:42 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 2 Apr 2006 14:57:42 -0400 Subject: [Python-Dev] Bug Day on Friday, 31st of March In-Reply-To: <e0orqk$f8a$1@sea.gmane.org> References: <dvmbgd$drs$2@sea.gmane.org> <e0orqk$f8a$1@sea.gmane.org> Message-ID: <1f7befae0604021157k21272e57j5d9d18b680000125@mail.gmail.com> [/F] > so, how did it go? a status report / summary would be nice, I think ? http://wiki.python.org/moin/PythonBugDayStatus has been kept up to date. Note that, e.g., there are still open items in the "Bugs/patches to assess for commit" section, if you want to do more than just read. From g.brandl at gmx.net Sun Apr 2 22:52:57 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 02 Apr 2006 22:52:57 +0200 Subject: [Python-Dev] Bug Day on Friday, 31st of March In-Reply-To: <1f7befae0604021157k21272e57j5d9d18b680000125@mail.gmail.com> References: <dvmbgd$drs$2@sea.gmane.org> <e0orqk$f8a$1@sea.gmane.org> <1f7befae0604021157k21272e57j5d9d18b680000125@mail.gmail.com> Message-ID: <e0pdj9$7t2$1@sea.gmane.org> Tim Peters wrote: > [/F] >> so, how did it go? a status report / summary would be nice, I think ? 19 bugs, 9 patches (which were mostly created to fix one of the bugs). 
Not much, but better than nothing and there has been quite a participation from "newbies". > http://wiki.python.org/moin/PythonBugDayStatus > > has been kept up to date. Note that, e.g., there are still open items > in the "Bugs/patches to assess for commit" section, if you want to do > more than just read. BTW, Tim, thanks for coming. I'm often in a situation that I look at a patch and say, "yes, that's working, but I'm not sure whether to apply this" and need a yes or no from someone up the ranks. Cheers, Georg From tim.peters at gmail.com Sun Apr 2 23:05:31 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 2 Apr 2006 17:05:31 -0400 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> Message-ID: <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> [Tim, gripes about ...] >>> Author: walter.doerwald >>> Date: Sat Apr 1 22:40:23 2006 >>> New Revision: 43545 >>> >>> Modified: >>> python/trunk/Doc/lib/libcalendar.tex >>> python/trunk/Lib/calendar.py >>> Log: >>> Make firstweekday a simple attribute instead >>> of hiding it behind a setter and a getter. [Walter][ > This is because in 2.4 there where no Calendar objects and firstweekday was > only setable and getable via module level functions. I didn't realize that, of course <blush>. Skipping the rest ;-), then, it would be best to make firstweekday a property on the new base class. > ... > The only thing lost is the range check in the setter. Which isn't a good thing to lose. It's not good that the current Calendar constructor skips that sanity check either ("errors should never pass silently"). > ... > Simple attribute access looks much more Pythonic to me than setters and gettes > (especially as the attributes of subclasses are simple attributes). > Or are you talking about the Calendar class itself? Yes, it would be best if Calendar had a property, so that sanity checks were performed when setting `firstweekday`, and also if the Calendar constructor performed that sanity check (which could happen "by magic" if `firstweekday` were a property). From tdelaney at avaya.com Sun Apr 2 23:47:22 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Mon, 3 Apr 2006 07:47:22 +1000 Subject: [Python-Dev] SF #1462485 - StopIteration raised in body of 'with' statement suppressed Message-ID: <2773CAC687FD5F4689F526998C7E4E5F074364@au3010avexu1.global.avaya.com> Discovered this while playing around with the 2.5 released end of last week. Given: @contextmanager def gen(): print '__enter__' yield print '__exit__' with gen(): raise StopIteration('body') I would expect to get the StopIteration exception raised. Instead it's suppressed by the @contextmanager decorator. I think we should only suppress the exception if it's *not* the exception passed into gen.throw() i.e. it's raised by the generator. Does this sound like the correct behaviour? I've attached tests and a fix implementing this to the bug report. I can't confirm right now (at work, need to install 2.5) but I'm also wondering what will happen if KeyboardInterrupt or SystemExit is raised from inside the generator when it's being closed via __exit__. I suspect a RuntimeError will be raised, whereas I think these should pass through. 
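To make the earlier point concrete, here's a rough sketch of the check I have in mind -- a simplified stand-alone wrapper for illustration only, not the actual contextlib code and not the patch attached to the report:

class GeneratorContextManager(object):
    # Simplified sketch of a contextmanager-style wrapper, just to show
    # the StopIteration identity check in __exit__.
    def __init__(self, gen):
        self.gen = gen

    def __enter__(self):
        try:
            return self.gen.next()
        except StopIteration:
            raise RuntimeError("generator didn't yield")

    def __exit__(self, type, value, traceback):
        if type is None:
            try:
                self.gen.next()
            except StopIteration:
                return False
            else:
                raise RuntimeError("generator didn't stop")
        else:
            try:
                self.gen.throw(type, value, traceback)
            except StopIteration, exc:
                # Only swallow a StopIteration that the *generator*
                # raised; if it's the very exception object we threw in,
                # it came from the body of the with statement and must
                # propagate.
                return exc is not value
            else:
                raise RuntimeError("generator didn't stop after throw()")

With that check, the StopIteration('body') in the example above still propagates out of the with statement, while a StopIteration raised by the generator itself is swallowed as before.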
Tim Delaney From tdelaney at avaya.com Sun Apr 2 23:52:01 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Mon, 3 Apr 2006 07:52:01 +1000 Subject: [Python-Dev] SF #1462700 - Errors in PCbuild Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E632@au3010avexu1.global.avaya.com> Discovered a couple of minor errors in pcbuild.sln and pythoncore.vsproj while working out how to compile 2.5 on Windows using the VS C++ Toolkit for the bug day (no Visual Studio at home). FWIW, I eventually ended up using Nant (using the <solution> task). Nant couldn't build 2.5 without the fixes - basically, _ctypes_test didn't have any dependencies, and so was being built first. And the GUIDs for pythoncore were different between pcbuild.sln and pythoncore.vsproj. I've attached the fix to the bug report. Tim Delaney From noamraph at gmail.com Sun Apr 2 23:54:14 2006 From: noamraph at gmail.com (Noam Raphael) Date: Mon, 3 Apr 2006 00:54:14 +0300 Subject: [Python-Dev] Saving the hash value of tuples In-Reply-To: <ca471dc20604021038k4dde46e6tcb03863a555c4dc8@mail.gmail.com> References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> <ca471dc20604021038k4dde46e6tcb03863a555c4dc8@mail.gmail.com> Message-ID: <b348a0850604021454i1258e158s84a8dbb998cd7fe3@mail.gmail.com> On 4/2/06, Guido van Rossum <guido at python.org> wrote: > > I tried the change, and it turned out that I had to change cPickle a > > tiny bit: it uses a 2-tuple which is allocated when the module > > initializes to lookup tuples in a dict. I changed it to properly use > > PyTuple_New and Py_DECREF, and now the complete test suite passes. I > > run test_cpickle before the change and after it, and it took the same > > time (0.89 seconds on my computer). > > Not just cPickle. I believe enumerate() also reuses a tuple. Maybe it does, but I believe that it doesn't calculate the hash value of it - otherwise, the test suite would probably have failed. > > > What do you think? I see three possibilities: > > 1. Nothing should be done, everything is as it should be. > > 2. The cPickle module should be changed to not abuse the tuple, but > > there's no reason to add an extra word to the tuple structure and > > break binary backwards compatibility. > > 3. Both should be changed. > > I'm -1 on the change. Tuples are pretty fundamental in Python and > hashing them is relatively rare. I think the extra required space for > all tuples isn't worth the potential savings for some cases. That's fine with me. But what about option 2? Perhaps cPickle (and maybe enumerate) should properly discard their tuples, so that if someone in the future decides that saving the hash value is a good idea, he won't encounter strange bugs? At least in cPickle I didn't notice any loss of speed because of the change, and it's quite sensible, since there's a tuple-reuse mechanism anyway. Noam From tdelaney at avaya.com Sun Apr 2 23:59:01 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Mon, 3 Apr 2006 07:59:01 +1000 Subject: [Python-Dev] Bug Day on Friday, 31st of March Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E633@au3010avexu1.global.avaya.com> Georg Brandl wrote: > Tim Peters wrote: >> [/F] >>> so, how did it go? a status report / summary would be nice, I >>> think ? > > 19 bugs, 9 patches (which were mostly created to fix one of the bugs). > Not much, but better than nothing and there has been quite a > participation > from "newbies". I've just added 1462485 and 1462700 to be assessed for commit. 
So make that 11 patches (although technically not finished on the day :) Tim Delaney From walter at livinglogic.de Mon Apr 3 00:12:30 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Mon, 03 Apr 2006 00:12:30 +0200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> Message-ID: <44304C4E.8000707@livinglogic.de> Tim Peters wrote: > [Tim, gripes about ...] >>>> Author: walter.doerwald >>>> Date: Sat Apr 1 22:40:23 2006 >>>> New Revision: 43545 >>>> >>>> Modified: >>>> python/trunk/Doc/lib/libcalendar.tex >>>> python/trunk/Lib/calendar.py >>>> Log: >>>> Make firstweekday a simple attribute instead >>>> of hiding it behind a setter and a getter. > > [Walter][ >> This is because in 2.4 there where no Calendar objects and firstweekday was >> only setable and getable via module level functions. > > I didn't realize that, of course <blush>. Skipping the rest ;-), > then, it would be best to make firstweekday a property on the new base > class. > >> ... >> The only thing lost is the range check in the setter. > > Which isn't a good thing to lose. It's not good that the current > Calendar constructor skips that sanity check either ("errors should > never pass silently"). I've changed calendar so that firstweekday is only used modulo 7 everywhere (There was only one spot missing, all other cases used firstweekday modulo 7 anyway. >> ... >> Simple attribute access looks much more Pythonic to me than setters and gettes >> (especially as the attributes of subclasses are simple attributes). >> Or are you talking about the Calendar class itself? > > Yes, it would be best if Calendar had a property, so that sanity > checks were performed when setting `firstweekday`, and also if the > Calendar constructor performed that sanity check (which could happen > "by magic" if `firstweekday` were a property). Range checks should no longer be neccessary, as any value works now. Bye, Walter D?rwald From fredrik at pythonware.com Mon Apr 3 00:28:29 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Mon, 3 Apr 2006 00:28:29 +0200 Subject: [Python-Dev] I'm not getting email from SF when assigned abug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org><442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org><442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org><bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> Message-ID: <e0pj6g$onr$1@sea.gmane.org> > > Fredrik, if you would like to help move this all forward, great; I > > would appreciate the help. You can write a page scraper to get the > > data out of SF > > challenge accepted ;-) > > http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ > > contains three basic tools; getindex to grab index information from a > python tracker, getpages to get "raw" xhtml versions of the item pages, > and getfiles to get attached files. 
> > I'm currently downloading a tracker snapshot that could be useful for > testing; it'll take a few more hours before all data are downloaded > (provided that SF doesn't ban me, and I don't stumble upon more > cases where a certain rhettinger has pasted binary gunk into an > iso-8859-1 form ;-). alright, it took my poor computer nearly eight hours to grab all the data, and some tracker items needed special treatment to work around some interesting SF bugs, but I've finally managed to download *all* items available via the SF tracker index, and *all* data files available via the item pages: tracker-105470 (bugs) 6682 items 6682 pages (100%) 1912 files tracker-305470 (patches) 3610 items 3610 pages (100%) 4663 files tracker-355470 (feature requests) 430 items 430 pages (100%) 80 files the complete data set is about 300 megabytes uncompressed, and ~85 megabytes zipped. the scripts are designed to make it easy to update the dataset; adding new items and files only takes a couple of minutes; refreshing the item information may take a few hours. ::: I've also added a basic "extract" module which parses the XHTML pages and the data files. this module can be used by import scripts, or be used to convert the dataset into other formats (e.g. a single XML file) for further processing. the source code is available via the above link; I'll post the ZIP file some- where tomorrow (drop me a line if you want the URL). </F> From skip at pobox.com Mon Apr 3 00:59:44 2006 From: skip at pobox.com (skip at pobox.com) Date: Sun, 2 Apr 2006 17:59:44 -0500 Subject: [Python-Dev] Whole bunch of test failures on OSX Message-ID: <17456.22368.611952.670565@montanaro.dyndns.org> I'm not sure this is going to be all that helpful. If there's more I can do to help track down these problems, let me know. Last night I ran make test EXTRATESTOPTS='-R :: -uall -r' on my Mac laptop after a fresh svn up. I wasn't ready for how long that would run! I got plenty of test failures: 285 tests OK. 12 tests failed: test_codecencodings_cn test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_tw test_decimal test_difflib test_logging test_optparse test_warnings 15 tests skipped: test_al test_cd test_cl test_dl test_gdbm test_gl test_imgfile test_linuxaudiodev test_locale test_nis test_ossaudiodev test_pep277 test_sunaudiodev test_winreg test_winsound Those skips are all expected on darwin. The test_codecencodings_tw failure looks like this: File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 88, in test_customreplace_encode "test.xmlcharnamereplace")[0], sout) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 74, in xmlcharnamereplace if ord(c) in codepoint2name: File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 260, in ord return _ord(c) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 260, in ord return _ord(c) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 260, in ord return _ord(c) ... many more at the same line ... with "maximum recursion depth exceeded" at the bottom. 
Similar problem in test_codecmaps_hk except the recursion was in _unichr(): File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 299, in test_mapping_file unich = unichrs(data[1]) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 275, in <lambda> unichrs = lambda s: u''.join(map(unichr, map(eval, s.split('+')))) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 253, in unichr return _unichr(v) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 253, in unichr return _unichr(v) File "/Users/skip/src/python-svn/trunk/Lib/test/test_multibytecodec_support.py", line 253, in unichr return _unichr(v) ... The other codec-related failures looked the same to my casual eye. The test_difflib error was an assertion failure involving a big-ass chunk of HTML: test test_difflib failed -- Traceback (most recent call last): File "/Users/skip/src/python-svn/trunk/Lib/test/test_difflib.py", line 145, in test_html_diff self.assertEqual(actual,expect) AssertionError: '\n<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\n "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\n\n<html>\n\n<head>\n <meta http-equiv="Content-Type"\n content="text/html; charset=ISO-8859-1" />\n <title></title>\n <style type="text/css">\n table.diff {font-family:Courier; border:medium;}\n .diff_header {background-color:#e0e0e0}\n td.diff_header {text-align:right}\n ... The test_optparse failure: test test_optparse failed -- Traceback (most recent call last): File "/Users/skip/src/python-svn/trunk/Lib/test/test_optparse.py", line 571, in test_float_default self.assertHelp(self.parser, expected_help) File "/Users/skip/src/python-svn/trunk/Lib/test/test_optparse.py", line 176, in assertHelp actual_help + '"\n') AssertionError: help text failure; expected: "usage: test [options] options: -h, --help show this help message and exit -p PROB, --prob=PROB blow up with probability PROB [default: 0.43] "; got: "usage: test [options] options: -h, --help show this help message and exit -p PROB, --prob=PROB blow up with probability PROB [default: 0.43] " Test_logging crashed: test test_logging crashed -- <class 'exceptions.KeyError'>: <logging.StreamHandler instance at 0x2088478> And though it didn't list test_bsddb3 as a failure, it got a bunch of DBLockDeadlockError exceptions. 
Here are a couple examples: test_bsddb3 Exception in thread reader 0: Traceback (most recent call last): File "/Users/skip/src/python-svn/trunk/Lib/threading.py", line 467, in __bootstrap self.run() File "/Users/skip/src/python-svn/trunk/Lib/threading.py", line 447, in run self.__target(*self.__args, **self.__kwargs) File "/Users/skip/src/python-svn/trunk/Lib/bsddb/test/test_thread.py", line 275, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/Users/skip/src/python-svn/trunk/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') Exception in thread reader 3: Traceback (most recent call last): File "/Users/skip/src/python-svn/trunk/Lib/threading.py", line 467, in __bootstrap self.run() File "/Users/skip/src/python-svn/trunk/Lib/threading.py", line 447, in run self.__target(*self.__args, **self.__kwargs) File "/Users/skip/src/python-svn/trunk/Lib/bsddb/test/test_thread.py", line 275, in readerThread rec = dbutils.DeadlockWrap(c.next, max_retries=10) File "/Users/skip/src/python-svn/trunk/Lib/bsddb/dbutils.py", line 62, in DeadlockWrap return function(*_args, **_kwargs) DBLockDeadlockError: (-30995, 'DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock') ... For test_decimal it printed: test test_decimal failed -- errors occurred; run in verbose mode for details but when I ran test_decimal manually it ran fine. Same thing for test_warnings: test test_warnings failed -- errors occurred in test.test_warnings.TestModule When I ran it manually it passed. Skip From brett at python.org Mon Apr 3 01:17:58 2006 From: brett at python.org (Brett Cannon) Date: Sun, 2 Apr 2006 16:17:58 -0700 Subject: [Python-Dev] I'm not getting email from SF when assigneda bug/patch In-Reply-To: <e0p14h$173$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> <44300534.4010002@v.loewis.de> <e0p14h$173$1@sea.gmane.org> Message-ID: <bbaeab100604021617nec62db9we5f961ea247a8954@mail.gmail.com> On 4/2/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > Martin v. L?wis wrote: > > > Yes. We found a way to export all data (except for file attachments), > > through a different exporter. This gives all data, unfortunately, it > > is ill-formed XML (& is not properly entity-referenced sometimes). > > so why didn't Brett know about this ? > I knew the export existed, but my understanding was that it was ill-formed as in truncated since it didn't have a close tag on the outermost XML tag. So I thought that the data was incomplete and thus the dump was mostly useless. Martin got another dump, and I asked if it contained all the data but just with a bad format, and he said he wasn't sure. That's why I was still planning on possibly writing a scraper if we didn't validate that the dump had all the tracker items. > > Anybody who wants to work with these data, please let me know; > > I made a snapshot a few days ago. > > the scraper I wrote in response to Brett's post has successfully down- > loaded 80% of the data at this very moment (including attachments), > so I'll probably keep playing with that one instead... 
I'll reply to this in the other email you sent. -Brett From brett at python.org Mon Apr 3 01:19:47 2006 From: brett at python.org (Brett Cannon) Date: Sun, 2 Apr 2006 16:19:47 -0700 Subject: [Python-Dev] I'm not getting email from SF when assigned a bug/patch In-Reply-To: <e0oqo6$c4n$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <e0g2tq$7d8$1@sea.gmane.org> <442C11E2.6050106@v.loewis.de> <e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> Message-ID: <bbaeab100604021619h3946eb77k89748a086daa2ca2@mail.gmail.com> On 4/2/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > Brett Cannon wrote: > > > > oh, I forgot that the Procrastination & Stop energy Foundation was involved > > > in this. > > > > Fredrik, if you would like to help move this all forward, great; I > > would appreciate the help. You can write a page scraper to get the > > data out of SF > > challenge accepted ;-) > > http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ > > contains three basic tools; getindex to grab index information from a > python tracker, getpages to get "raw" xhtml versions of the item pages, > and getfiles to get attached files. > > I'm currently downloading a tracker snapshot that could be useful for > testing; it'll take a few more hours before all data are downloaded > (provided that SF doesn't ban me, and I don't stumble upon more > cases where a certain rhettinger has pasted binary gunk into an > iso-8859-1 form ;-). > > $ python status.py > tracker-105470 > 6681 items > 1201 pages (17%) > 104 files > tracker-305470 > 3610 items > 0 pages (0%) > 0 files > tracker-355470 > 430 items > 430 pages (100%) > 80 files > > the final step is to finish the "off-line scraper" library (a straightfor- > ward ET hack), and make a snapshot archive available to interested > parties. (drop me a line if you want a copy) > > > If you would rather contribute by collecting a list of possible > > trackers along with who will maintain it, then please do. I am not > > going to dive into that quite yet, but if you want to parallelize the > > work needed then I would appreciate the help. > > that is what I expected the PSF infrastructure committee to do (I hope > you're not the only one in that committee?); it's a bit disappointing to > hear that we're still stuck on the SF export issue. > The reason I didn't want to deal with the trackers quite yet was that I could see people getting the trackers up and squared away, and then just get frustrated when we were unable to get the SF data to them quickly. I didn't want other people stuck spinning there wheels waiting on us. -Brett > (wasn't there someone with backchannel access to the SF data ?) > > > The tracker will need to be able to import the SF data somehow (probably will require a > > custom tool so the volunteers need to be aware of this), be able to > > export data (so we can back it up on a regular basis so we don't have > > to go through this again), and an email interface for at least > > replying to tracker items. A community-wide announcement will > > probably be needed to get a good group of volunteers together for any > > one non-commercial tracker. > > > But I am not procrastinating. 
I don't think I have ever come off as a > > procrastinator on this list and I don't think I deserve the label. > > I wasn't talking about individuals, I was referring to the trend where > PSF moves something off a public forum, and the work just ends up > going nowhere. > > </F> > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From brett at python.org Mon Apr 3 01:54:51 2006 From: brett at python.org (Brett Cannon) Date: Sun, 2 Apr 2006 16:54:51 -0700 Subject: [Python-Dev] I'm not getting email from SF when assigned abug/patch In-Reply-To: <e0pj6g$onr$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <442C11E2.6050106@v.loewis.de> <e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> <e0pj6g$onr$1@sea.gmane.org> Message-ID: <bbaeab100604021654o10c44378g56f3adfebd48312@mail.gmail.com> On 4/2/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > > > Fredrik, if you would like to help move this all forward, great; I > > > would appreciate the help. You can write a page scraper to get the > > > data out of SF > > > > challenge accepted ;-) > > Woohoo! > > http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ > > > > contains three basic tools; getindex to grab index information from a > > python tracker, getpages to get "raw" xhtml versions of the item pages, > > and getfiles to get attached files. > > > > I'm currently downloading a tracker snapshot that could be useful for > > testing; it'll take a few more hours before all data are downloaded > > (provided that SF doesn't ban me, and I don't stumble upon more > > cases where a certain rhettinger has pasted binary gunk into an > > iso-8859-1 form ;-). > > alright, it took my poor computer nearly eight hours to grab all the > data, and some tracker items needed special treatment to work around > some interesting SF bugs, but I've finally managed to download *all* > items available via the SF tracker index, and *all* data files available > via the item pages: > > tracker-105470 (bugs) > 6682 items > 6682 pages (100%) > 1912 files > tracker-305470 (patches) > 3610 items > 3610 pages (100%) > 4663 files > tracker-355470 (feature requests) > 430 items > 430 pages (100%) > 80 files > > the complete data set is about 300 megabytes uncompressed, and ~85 > megabytes zipped. > > the scripts are designed to make it easy to update the dataset; adding > new items and files only takes a couple of minutes; refreshing the item > information may take a few hours. > > ::: > > I've also added a basic "extract" module which parses the XHTML > pages and the data files. this module can be used by import scripts, > or be used to convert the dataset into other formats (e.g. a single > XML file) for further processing. > > the source code is available via the above link; I'll post the ZIP file some- > where tomorrow (drop me a line if you want the URL). > Wonderful, Fredrik! Thank you for doing this! When the data is available I will arrange to get it put on python.org somewhere and then start drafting the tracker announcement with where the data is and how to get at it. 
-Brett From tim.peters at gmail.com Mon Apr 3 02:57:50 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 2 Apr 2006 20:57:50 -0400 Subject: [Python-Dev] Whole bunch of test failures on OSX In-Reply-To: <17456.22368.611952.670565@montanaro.dyndns.org> References: <17456.22368.611952.670565@montanaro.dyndns.org> Message-ID: <1f7befae0604021757k2bfd8edx9416b49150f2acfd@mail.gmail.com> [skip at pobox.com] > I'm not sure this is going to be all that helpful. If there's more I can do > to help track down these problems, let me know. Sure: you can do _everything_ to track them down ;-) > Last night I ran > > make test EXTRATESTOPTS='-R :: -uall -r' > > on my Mac laptop after a fresh svn up. I wasn't ready for how long that > would run! I never tried it (specifically the '-R ::' bit), but I see what you mean. Note that we have an OSX buildbot slave that passes all the trunk tests. Can you say whether make test EXTRATESTOPTS='-uall -r' (without '-R ::') also fails on your box? The point is to separate what's unique to your Mac box from what's unique to '-R ::'. > .. > And though it didn't list test_bsddb3 as a failure, it got a bunch of > DBLockDeadlockError exceptions. That one's almost a FAQ here -- all platforms see those from time to time, and have for years (especially when the box is heavily loaded). > ... > For test_decimal it printed: > > test test_decimal failed -- errors occurred; run in verbose mode for details > > but when I ran test_decimal manually it ran fine. Did your manual run also include '-R ::'? When I run test_decimal in isolation on my Windows box with that option, it also fails here, on its second run: C:\Code\python\PCbuild>python_d -E -tt ../lib/test/regrtest.py -uall -R:: test_decimal test_decimal beginning 9 repetitions 123456789 test test_decimal failed -- errors occurred; run in verbose mode for details 1 test failed: test_decimal [29478 refs] > Same thing for test_warnings: > > test test_warnings failed -- errors occurred in test.test_warnings.TestModule > > When I ran it manually it passed. With or without -R::? This one also fails for me in isolation with '-R ::', and also on its second run: C:\Code\python\PCbuild>python_d -E -tt ../lib/test/regrtest.py -uall -R:: test_warnings test_warnings beginning 9 repetitions 123456789 test test_warnings failed -- errors occurred in test.test_warnings.TestModule 1 test failed: test_warnings [15467 refs] Does anyone else routinely use -R? If anyone does, do all the tests pass for them? From nnorwitz at gmail.com Mon Apr 3 03:07:49 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Apr 2006 17:07:49 -0800 Subject: [Python-Dev] Whole bunch of test failures on OSX In-Reply-To: <1f7befae0604021757k2bfd8edx9416b49150f2acfd@mail.gmail.com> References: <17456.22368.611952.670565@montanaro.dyndns.org> <1f7befae0604021757k2bfd8edx9416b49150f2acfd@mail.gmail.com> Message-ID: <ee2a432c0604021807g1089eb42r5be4ec4f3acc5979@mail.gmail.com> On 4/2/06, Tim Peters <tim.peters at gmail.com> wrote: > > Does anyone else routinely use -R? If anyone does, do all the tests > pass for them? Yes and no. Every 12 hours, see Misc/build.sh For the latest results, see: http://docs.python.org/dev/results/make-test-refleak.out Several tests fail consistently with -R. These are the most recent from the link above: test_decimal test_difflib test_logging test_optparse test_warnings. It would be great if someone would figure out why these tests fail when running under -R and fix them. 
n From anthony at interlink.com.au Mon Apr 3 03:47:25 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Mon, 3 Apr 2006 12:47:25 +1100 Subject: [Python-Dev] TRUNK FREEZE. 2.5a1, 00:00 UTC, Wednesday 5th of April. Message-ID: <200604031147.27455.anthony@interlink.com.au> Now that the bug day has been and gone, it's time to cut 2.5a1. Please consider the trunk FROZEN from 00:00 UTC/GMT on Wednesday the 5th of April. I'll post again when it's unfrozen. Please help in not making the release manager cry because the trunk is broken. Thanks, Anthony From tim.peters at gmail.com Mon Apr 3 04:14:12 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 2 Apr 2006 22:14:12 -0400 Subject: [Python-Dev] Whole bunch of test failures on OSX In-Reply-To: <ee2a432c0604021807g1089eb42r5be4ec4f3acc5979@mail.gmail.com> References: <17456.22368.611952.670565@montanaro.dyndns.org> <1f7befae0604021757k2bfd8edx9416b49150f2acfd@mail.gmail.com> <ee2a432c0604021807g1089eb42r5be4ec4f3acc5979@mail.gmail.com> Message-ID: <1f7befae0604021914k4dbc70bdj5f941d05be12b9ec@mail.gmail.com> [Neal Norwitz, on -R testing] > ... > For the latest results, see: > http://docs.python.org/dev/results/make-test-refleak.out > > Several tests fail consistently with -R. These are the most recent > from the link above: test_decimal test_difflib test_logging > test_optparse test_warnings. > > It would be great if someone would figure out why these tests fail > when running under -R and fix them. Like anyone wants to spend their life guessing what reload() actually does <0.6 wink>. C:\Code\python\PCbuild>python Python 2.5a0 (trunk:43548M, Apr 1 2006, 21:44:15) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from test import test_warnings >>> test_warnings.test_main() # first time it's fine test_filtering (test.test_warnings.TestModule) ... ok test_warn_default_category (test.test_warnings.TestModule) ... ok test_warn_specific_category (test.test_warnings.TestModule) ... ok ---------------------------------------------------------------------- Ran 3 tests in 0.031s OK >>> reload(test_warnings) # then the dreaded reload(), and it fails <module 'test.test_warnings' from 'C:\Code\python\lib\test\test_warnings.pyc'> >>> test_warnings.test_main() test_filtering (test.test_warnings.TestModule) ... ERROR test_warn_default_category (test.test_warnings.TestModule) ... ERROR test_warn_specific_category (test.test_warnings.TestModule) ... 
ERROR ====================================================================== ERROR: test_filtering (test.test_warnings.TestModule) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Code\python\lib\test\test_warnings.py", line 68, in test_filtering self.assertEqual(msg.message, text) AttributeError: WarningMessage instance has no attribute 'message' ====================================================================== ERROR: test_warn_default_category (test.test_warnings.TestModule) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Code\python\lib\test\test_warnings.py", line 42, in test_warn_default_category self.assertEqual(msg.message, text) AttributeError: WarningMessage instance has no attribute 'message' ====================================================================== ERROR: test_warn_specific_category (test.test_warnings.TestModule) ---------------------------------------------------------------------- Traceback (most recent call last): File "C:\Code\python\lib\test\test_warnings.py", line 57, in test_warn_specific_category self.assertEqual(msg.message, text) AttributeError: WarningMessage instance has no attribute 'message' ---------------------------------------------------------------------- Ran 3 tests in 0.000s FAILED (errors=3) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Code\python\lib\test\test_warnings.py", line 85, in test_main test_support.run_unittest(TestModule) File "C:\Code\python\lib\test\test_support.py", line 300, in run_unittest run_suite(suite, testclass) File "C:\Code\python\lib\test\test_support.py", line 284, in run_suite raise TestFailed(msg) test.test_support.TestFailed: errors occurred in test.test_warnings.TestModule >>> Figuring out why that happens is pretty much a nightmare. "Fixing it" requires changes to both regrtest and test_warnings: """ Index: Lib/test/regrtest.py =================================================================== --- Lib/test/regrtest.py (revision 43548) +++ Lib/test/regrtest.py (working copy) @@ -536,12 +536,10 @@ sys.path_importer_cache.update(pic) dircache.reset() linecache.clearcache() - if indirect_test: - def run_the_test(): - indirect_test() - else: - def run_the_test(): - reload(the_module) + def run_the_test(): + reload(the_module) + if indirect_test: + getattr(the_module, "test_main")() deltas = [] repcount = huntrleaks[0] + huntrleaks[1] print >> sys.stderr, "beginning", repcount, "repetitions" Index: Lib/test/test_warnings.py =================================================================== --- Lib/test/test_warnings.py (revision 43548) +++ Lib/test/test_warnings.py (working copy) @@ -84,5 +84,9 @@ def test_main(verbose=None): test_support.run_unittest(TestModule) +# Obscure hack so that this test passes after reloads (regrtest -R). +if '__warningregistry__' in globals(): + del globals()['__warningregistry__'] + if __name__ == "__main__": test_main(verbose=True) """ The change to regrtest may fix other -R cases (for example, it appears to repair test_decimal), but may introduce new failures too. If you try it and find it's a net win ;-), feel free to check it in. 
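For anyone puzzled by the __warningregistry__ hunk: warnings.warn records every warning it has issued in a __warningregistry__ dict in the calling module's globals, so under the default filter a given warning fires only once per location. reload() re-executes the module but doesn't reset that dict, which is why the reloaded test sees its later warnings silently swallowed. A minimal sketch of the effect, outside regrtest entirely:

    import warnings

    def f():
        warnings.warn("spam", UserWarning)

    f()    # UserWarning is printed
    f()    # silent: this module's __warningregistry__ remembers it

    # clearing the registry -- which is what the test_warnings hunk of the
    # patch above does on each (re)import -- lets the warning fire again
    globals().pop('__warningregistry__', None)
    f()    # printed again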
From python at rcn.com Mon Apr 3 05:23:58 2006 From: python at rcn.com (Raymond Hettinger) Date: Sun, 2 Apr 2006 23:23:58 -0400 Subject: [Python-Dev] Saving the hash value of tuples References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> <20060401203701.GB13210@panix.com> Message-ID: <01ab01c656ce$0c564570$7472e145@RaymondLaptop1> >> I've found out that the hash value of tuples isn't saved after it's >> calculated. With strings it's different: the hash value of a string is >> calculated only on the first call to hash(string), and saved in the >> structure for future use. Saving the value makes dict lookup of tuples >> an operation with an amortized cost of O(1). >> [...] >> I will be happy to send a patch, if someone shows interest. > > Regardless of whether anyone shows interest, please submit a patch! Then > post the URL back here. That way if someone gets interested in the > future, your code is still available. FWIW, I think that is not a good idea. Guido shot it down for good reason. Once a patch is loaded, the question will continually resurface every few months and waste everyone's time re-hashing the issue. We have bigger dragons to slay. Raymond From martin at v.loewis.de Mon Apr 3 09:02:07 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 03 Apr 2006 09:02:07 +0200 Subject: [Python-Dev] Renaming sqlite3 Message-ID: <4430C86F.8090507@v.loewis.de> I just tried creating a pysqlite VS project, and ran into a naming conflict: the Windows DLL is called sqlite3.dll. So if it is on sys.path import sqlite3 might find the DLL, instead of finding the package. Python then finds that there is no entry point in sqlite3, and raises an ImportError. I see three options: 1. rename sqlite3 again 2. link sqlite3 statically into _sqlite3.pyd 3. stop treating .DLL files as extension modules I'm actually leaning towards option 3: what is the rationale for allowing Python extension modules to be named .DLL? Regards, Martin From nnorwitz at gmail.com Mon Apr 3 09:34:18 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 2 Apr 2006 23:34:18 -0800 Subject: [Python-Dev] outstanding items for 2.5 Message-ID: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> I updated the PEP to include owners. If this message is sent directly to you, you are an owner. http://www.python.org/dev/peps/pep-0356/ There are still some items without owners as I don't know who will be leading the charge to get some of the modules in the stdlib. If we don't have anyone pushing them, they won't go in. If they are going in, they should be committed soon after alpha1 is out the door. Please don't rush to get them in before the alpha, unless you're absolutely sure it won't screw things up. Review the PEP and let me know what needs to be changed. If your pet project isn't already in the PEP, assume it has been deferred until 2.6. Also, now would be a good time to see if you have any bugs/patches assigned to you: http://sourceforge.net/my/tracker.php Cheers, n From crutcher at gmail.com Mon Apr 3 09:48:03 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Mon, 3 Apr 2006 00:48:03 -0700 Subject: [Python-Dev] SF:1463370 add .format() method to str and unicode Message-ID: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> >From discussion on python-3000, it occured to me that this shouldn't break anything. This patch adds a .format() method to the string and unicode types. 
SF:1463370 -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From mwh at python.net Mon Apr 3 10:03:43 2006 From: mwh at python.net (Michael Hudson) Date: Mon, 03 Apr 2006 09:03:43 +0100 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> (Thomas Wouters's message of "Sat, 1 Apr 2006 12:19:34 +0200") References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> Message-ID: <2mlkunt7a8.fsf@starship.python.net> "Thomas Wouters" <thomas at python.org> writes: > While we're at it, I would like for the new __del__ (which would > probably have to be a new method) to disallow reviving self, just > because it makes it unnecessarily complicated and it's rarely > needed. I'm not sure the problem is so much that anyone _wants_ to support resurrection in __del__, it's just that it can't be prevented. l = [] class A(object): def __del__(self): l.append(self) a = A() a = 1 What would you have this do? And if we want to have a version of __del__ that can't reference 'self', we have it already: weakrefs with callbacks. What happened to the 'get rid of __del__ in py3k' idea? Cheers, mwh -- <freeside> On a scale of One to AWESOME, twisted.web is PRETTY ABSTRACT!!!! -- from Twisted.Quotes From amk at amk.ca Mon Apr 3 14:46:43 2006 From: amk at amk.ca (A.M. Kuchling) Date: Mon, 3 Apr 2006 08:46:43 -0400 Subject: [Python-Dev] outstanding items for 2.5 In-Reply-To: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> References: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> Message-ID: <20060403124643.GA21373@localhost.localdomain> On Sun, Apr 02, 2006 at 11:34:18PM -0800, Neal Norwitz wrote: > Review the PEP and let me know what needs to be changed. If your pet > project isn't already in the PEP, assume it has been deferred until > 2.6. I'd like to see Gregory K. Johnson's updated mailbox module (in sandbox/mailbox/) included. I mentored its development and think the code is ready for inclusion, but would like someone else to run an eye over it. Any volunteers? I can file a patch request for this item if desired. --amk From andymac at bullseye.apana.org.au Mon Apr 3 12:26:03 2006 From: andymac at bullseye.apana.org.au (Andrew MacIntyre) Date: Mon, 03 Apr 2006 21:26:03 +1100 Subject: [Python-Dev] Renaming sqlite3 In-Reply-To: <4430C86F.8090507@v.loewis.de> References: <4430C86F.8090507@v.loewis.de> Message-ID: <4430F83B.7040903@bullseye.apana.org.au> Martin v. L?wis wrote: > I see three options: > 1. rename sqlite3 again > 2. link sqlite3 statically into _sqlite3.pyd > 3. stop treating .DLL files as extension modules > > I'm actually leaning towards option 3: what is the rationale > for allowing Python extension modules to be named .DLL? A datapoint specific to OS/2 which probably has little relevance to Windows or to the specific case at hand: In order to get the curses_panel module to work, I have to forward the necessary curses entry points from the _curses module DLL. On OS/2, this only works for DLLs with the extension .DLL, so I ship _curses.pyd as _curses.dll. 
As a consequence, I can't implement option 3 for the OS/2 port but I can live with the nasty side-effects given the modest userbase and by documenting the issue in the port README. If you can make option 3 work for Windows, then I would do it now during the alpha to see whether it flushes any problems out. I must admit to being uncomfortable with including version numbers in module names, especially when they reflect a version outside the scope of Python. Ending up with a module name that can match a 3rd party dynamically linkable file would seem problematic no matter which way you look at it. FWIW, Andrew. ------------------------------------------------------------------------- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac at pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From theller at python.net Mon Apr 3 15:40:59 2006 From: theller at python.net (Thomas Heller) Date: Mon, 03 Apr 2006 15:40:59 +0200 Subject: [Python-Dev] Renaming sqlite3 In-Reply-To: <4430C86F.8090507@v.loewis.de> References: <4430C86F.8090507@v.loewis.de> Message-ID: <443125EB.7020403@python.net> Martin v. L?wis wrote: > I just tried creating a pysqlite VS project, and ran into a naming > conflict: the Windows DLL is called sqlite3.dll. So if it is on > sys.path > > import sqlite3 > > might find the DLL, instead of finding the package. Python then > finds that there is no entry point in sqlite3, and raises an > ImportError. > > I see three options: > 1. rename sqlite3 again > 2. link sqlite3 statically into _sqlite3.pyd > 3. stop treating .DLL files as extension modules > > I'm actually leaning towards option 3: what is the rationale > for allowing Python extension modules to be named .DLL? Don't know. But if you make the change to implement option 3, IMO it would be a good idea to add the Python version number to the .pyd basename as well. pywin32 already has to do this, since pythoncomXY.pyd and pywintypesXY.pyd have to live (if possible) in the windows directory. There have been other conflicts reported before: I remember Windows\system32\wmi.dll conflicting with Tim Golden's wmi.py module. In addition, wmi.dll is very special, since it doesn't have an import table, IIRC. Thomas From ncoghlan at gmail.com Mon Apr 3 15:52:36 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 03 Apr 2006 23:52:36 +1000 Subject: [Python-Dev] SF:1463370 add .format() method to str and unicode In-Reply-To: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> References: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> Message-ID: <443128A4.60803@gmail.com> Crutcher Dunnavant wrote: >>From discussion on python-3000, it occured to me that this shouldn't > break anything. > This patch adds a .format() method to the string and unicode types. > > SF:1463370 -1. For reasons I go into more on the Py3k list, I'd like to see this term associated with an enhanced version of PEP 292 style string formatting rather than the existing mod-style formatting. Cheers, Nick. 
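For context, the two styles being contrasted here are the familiar mod-style formatting and the PEP 292 substitution API that has been in the stdlib as string.Template since 2.4. A quick side-by-side illustration (deliberately not showing a str.format() method, since what that should look like is exactly the open question):

    from string import Template

    data = dict(name="world", count=3)

    # existing mod-style formatting
    print "Hello %(name)s, %(count)d times" % data

    # PEP 292 style substitution
    print Template("Hello $name, $count times").substitute(data)

Both print the same thing; which of these (or something else again) deserves the .format() name is what's being debated.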
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Mon Apr 3 16:04:08 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 04 Apr 2006 00:04:08 +1000 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <44304C4E.8000707@livinglogic.de> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> Message-ID: <44312B58.3090900@gmail.com> Walter D?rwald wrote: > Tim Peters wrote: >> Which isn't a good thing to lose. It's not good that the current >> Calendar constructor skips that sanity check either ("errors should >> never pass silently"). > > I've changed calendar so that firstweekday is only used modulo 7 > everywhere (There was only one spot missing, all other cases used > firstweekday modulo 7 anyway. > >>> ... >>> Simple attribute access looks much more Pythonic to me than setters and gettes >>> (especially as the attributes of subclasses are simple attributes). >>> Or are you talking about the Calendar class itself? >> Yes, it would be best if Calendar had a property, so that sanity >> checks were performed when setting `firstweekday`, and also if the >> Calendar constructor performed that sanity check (which could happen >> "by magic" if `firstweekday` were a property). > > Range checks should no longer be neccessary, as any value works now. But now all *clients* of the Calendar class are forced to deal with the fact that "firstweekday" may not be greater than seven. If you want to accept any input value, why not use a property to force it to be modulo 7, rather than doing an actual range check? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From zpincus at stanford.edu Mon Apr 3 16:58:57 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 3 Apr 2006 09:58:57 -0500 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? Message-ID: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> Hi folks, I submitted a patch a little while ago to led Python on Darwin/OS X use the same code path to load extensions it uses on most other Unix- like platforms. (The reasons for this are several, and mentioned in the patch: http://sourceforge.net/tracker/index.php? func=detail&aid=1454844&group_id=5470&atid=305470 ). Anyhow, IMO if this patch is to be included at all (I rather think it should, and will happily discuss that on this list or on the patch comments to clarify why this is so), it probably ought to make it into python 2.5 earlier rather than later. While I'm almost certain that these changes will cause no issues (as Apple officially encourages use of dlopen() as a drop-in replacement for the officially 'discouraged' NeXT-derived functions that Python now uses to load extensions), it would seem more prudent to give a low-level change like this plenty of time to settle out in case I am wrong. I've run this by the python-mac folks, and there seemed to be some assent, or at least no complaint. 
Bob Ippolito appeared to think that this approach was the best to making Python on the Mac load extensions properly in some corner cases (see the patch description for more details), but he hasn't weighted in for a while. Sorry if it's bad form to ask about patches one has submitted -- let me know if that sort of discussion should be kept strictly on the patch tracker. Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine From aahz at pythoncraft.com Mon Apr 3 17:09:53 2006 From: aahz at pythoncraft.com (Aahz) Date: Mon, 3 Apr 2006 08:09:53 -0700 Subject: [Python-Dev] SF:1463370 add .format() method to str and unicode In-Reply-To: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> References: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> Message-ID: <20060403150952.GA7928@panix.com> On Mon, Apr 03, 2006, Crutcher Dunnavant wrote: > > From discussion on python-3000, it occured to me that this shouldn't > break anything. > This patch adds a .format() method to the string and unicode types. > > SF:1463370 If you're serious, please write up a PEP. I recommend that you start posting to comp.lang.python first. (Changes to built-in types like this generally require a PEP.) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From walter at livinglogic.de Mon Apr 3 17:25:18 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Mon, 03 Apr 2006 17:25:18 +0200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <44312B58.3090900@gmail.com> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> <44312B58.3090900@gmail.com> Message-ID: <44313E5E.6040003@livinglogic.de> Nick Coghlan wrote: > Walter D?rwald wrote: >> [...] >> Range checks should no longer be neccessary, as any value works now. > > But now all *clients* of the Calendar class are forced to deal with the fact > that "firstweekday" may not be greater than seven. > > If you want to accept any input value, why not use a property to force it to > be modulo 7, rather than doing an actual range check? OK, the property setter does a "% 7" now. (But the global setfirstweekday() still does a range check). Bye. Walter D?rwald From guido at python.org Mon Apr 3 17:41:43 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 08:41:43 -0700 Subject: [Python-Dev] SF #1462485 - StopIteration raised in body of 'with' statement suppressed In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5F074364@au3010avexu1.global.avaya.com> References: <2773CAC687FD5F4689F526998C7E4E5F074364@au3010avexu1.global.avaya.com> Message-ID: <ca471dc20604030841x2ccaf094w72107b666a9912dd@mail.gmail.com> On 4/2/06, Delaney, Timothy (Tim) <tdelaney at avaya.com> wrote: > Given: > > @contextmanager > def gen(): > print '__enter__' > yield > print '__exit__' > > with gen(): > raise StopIteration('body') > > I would expect to get the StopIteration exception raised. Instead it's > suppressed by the @contextmanager decorator. Right. I'm not sure how to fix this (but I think Phillip probably can). 
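The trick, as the next quoted paragraph spells out, is to suppress a StopIteration only when the generator itself raised it, not when it is the very exception thrown in from the body of the 'with' statement. A rough sketch against a simplified stand-in for contextlib's generator wrapper (illustrative only, not the actual patch):

    class GeneratorContext(object):
        # simplified stand-in for contextlib's wrapper

        def __init__(self, gen):
            self.gen = gen

        def __enter__(self):
            return self.gen.next()

        def __exit__(self, type, value, traceback):
            if type is None:
                try:
                    self.gen.next()
                except StopIteration:
                    return False
                raise RuntimeError("generator didn't stop")
            try:
                self.gen.throw(type, value, traceback)
                raise RuntimeError("generator didn't stop after throw()")
            except StopIteration, exc:
                # swallow a StopIteration only if the generator raised it;
                # if it's the very exception thrown in from the 'with'
                # body, let it propagate
                return exc is not value

With that identity check in place, the example above raises StopIteration('body') out of the with block instead of swallowing it.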
> I think we should only suppress the exception if it's *not* the > exception passed into gen.throw() i.e. it's raised by the generator. > Does this sound like the correct behaviour? I've attached tests and a > fix implementing this to the bug report. Cool. > I can't confirm right now (at work, need to install 2.5) but I'm also > wondering what will happen if KeyboardInterrupt or SystemExit is raised > from inside the generator when it's being closed via __exit__. I suspect > a RuntimeError will be raised, whereas I think these should pass > through. I see no reason for this with the current code. Perhaps a previous version of contextlib.py had this problem? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at pythonware.com Mon Apr 3 17:50:02 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Mon, 3 Apr 2006 17:50:02 +0200 Subject: [Python-Dev] I'm not getting email from SF when assignedabug/patch References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com><1143648409.10799.159.camel@resist.wooz.org><ee2a432c0603292333s77b92240x53588f997162a992@mail.gmail.com><e0g2tq$7d8$1@sea.gmane.org><442C11E2.6050106@v.loewis.de><e0h4vs$26r$1@sea.gmane.org><442C1C7E.4050502@v.loewis.de><ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com><bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com><e0igo3$d13$1@sea.gmane.org><bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com><e0oqo6$c4n$1@sea.gmane.org> <e0pj6g$onr$1@sea.gmane.org> Message-ID: <e0rg7c$cjg$1@sea.gmane.org> > the source code is available via the above link; I'll post the ZIP file some- > where tomorrow (drop me a line if you want the URL). I found some free space on the effbot.org server, so anyone inter- ested can get the current ZIP file here: http://effbot.org/tracker-20060403.zip the zip file is ~85 megabytes, and expands to about 300 megabyte data. there are three tracker directories (for the bugs, patches, and feature re- quest trackers). for each item, there are at least two files: item-NNN.xml (index information, created by getindex.py) item-NNN-page.xml (xhtml pages, created by getpages.py) where NNN is the tracker item identifier. for items that have attached files, there's also one or more item-NNN-data-MMM.dat (data files, created by getfiles.py) where MMM is a file identifier (referred to by the page files). ::: the extract module available here: http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ can be used to extract information from the page.xml files (see the sanity check code at the end of that file for a usage example). to use this, you need ElementTree (a Python 2.5 pre-alpha should work) and/or cElementTree. ::: I'll post an export demo script later. cheers /F From trentm at ActiveState.com Mon Apr 3 18:38:03 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 3 Apr 2006 09:38:03 -0700 Subject: [Python-Dev] PEP to list externally maintained modules and where to report bugs? In-Reply-To: <bbaeab100604011522j546f8d30i37f09c4190b06c3b@mail.gmail.com> References: <bbaeab100604011522j546f8d30i37f09c4190b06c3b@mail.gmail.com> Message-ID: <20060403163803.GA14876@activestate.com> [Brett Cannon wrote] > Anyone else think we need a PEP to point to places where externally > maintained code should have bugs or patches reported? I don't want to > hunt down a URL for where to do this every time and so it would be > nice to have a list of what code needs bugs/patches reported where. 
> Otherwise I am prone to just check my changes into the tree and not > get them reported upstream since I want the warnings to go away. =) +1 Perhaps that could be merged with generic "how to report a bug" instructions. This might be helpful for a start: http://producingoss.com/html-chunk/bug-reporting.html Trent -- Trent Mick TrentM at ActiveState.com From trentm at ActiveState.com Mon Apr 3 18:43:25 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 3 Apr 2006 09:43:25 -0700 Subject: [Python-Dev] Firefox searchbar engine for Python bugs In-Reply-To: <200604021417.33521.anthony@interlink.com.au> References: <200604021417.33521.anthony@interlink.com.au> Message-ID: <20060403164325.GB14876@activestate.com> [Anthony Baxter wrote] > I've created a searchbar plugin for the firefox search bar that allows > you to search bugs. I think someone created one for the sidebar http://starship.python.net/~skippy/mozilla/ http://projects.edgewall.com/python-sidebar/ Trent -- Trent Mick TrentM at ActiveState.com From guido at python.org Mon Apr 3 20:40:42 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 11:40:42 -0700 Subject: [Python-Dev] SF:1463370 add .format() method to str and unicode In-Reply-To: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> References: <d49fe110604030048q257991ebj1c73ebecfaf87471@mail.gmail.com> Message-ID: <ca471dc20604031140i12093cbahe0c00bbe5ce1edc9@mail.gmail.com> On 4/3/06, Crutcher Dunnavant <crutcher at gmail.com> wrote: > >From discussion on python-3000, it occured to me that this shouldn't > break anything. > This patch adds a .format() method to the string and unicode types. > > SF:1463370 Hmm... Let's not jump to conclusions. While I like your patch, we need to have community consensus that s.format(x) is better than s%x, and we need to discuss alternatives such as a different format syntax. I guess I have to amend my process proposals (and yes, I know it's high time for me to get back on the wagon and start spending quality time with Python 3000). While I still believe that new features which can be introduced without backwards incompatibility are fair game for introduction in Python 2.x rather than waiting for 3.0 (and in fact, introduction in 2.x is perhaps preferable over waiting), the realities of community opinion and proposal history need to be taken into account. We also, in particular, need to be really careful that we don't introduce things into 2.x that we *think* we'll want in Py3k but which might turn out later to require more tweaks. For example, in the case of the formatting method, it would be tragic if Python 3000 switched to a different format syntax but we had already introduced s.format(x) in Python 2.x as an alias to s%x -- then the meaning of s.format(x) would change in Python 3000, while we might have had the opportunity of a 10)% *compatible* change if we had waited until the Python 3000 version of the feature had settled before rushing it into Python 2.x. Concluding, perhaps the right time to include certain features in Python 2.x is only *after* the feature has been discussed, specified, agreed upon, and implemented in Python 3000. Of course, this doesn't mean we shouldn't plan to add anything new to Python 2.x (even though that would greatly reduce merge problems with the Py3k branch ;-). I guess changes to 2.x should follow the established, 100% backwards compatible, evolutionary path: if python-dev agrees it's a good idea, it should probably go in. 
OTOH if it's a potentially disruptive change, or if it could benefit from synchronicity with other Py3k features, or perhaps even if it just adds a new way of saying something that might eventually mean the old way should be deprecated, it's better to refine the idea in a Python 3000 context first. It's going to be inevitable that we'll get the occasional idea first brought up on python-dev that makes more sense to move to Python 3000, or vice versa; let's all be mindful of such cases. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Mon Apr 3 20:45:09 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 11:45:09 -0700 Subject: [Python-Dev] Need Py3k group in trackers Message-ID: <ca471dc20604031145y64ddffdan1841cdf294bcaf6f@mail.gmail.com> Could one of the tracker admins add a Python-3000 group to the SF trackers (while we're still using them :-)? This is so we can easily move proposals between Python 3000 and Python 2.x status. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Mon Apr 3 20:52:37 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 11:52:37 -0700 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <2mlkunt7a8.fsf@starship.python.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> Message-ID: <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> On 4/3/06, Michael Hudson <mwh at python.net> wrote: > I'm not sure the problem is so much that anyone _wants_ to support > resurrection in __del__, it's just that it can't be prevented. Well, Java has an answer to that (at least I believe Tim Peters told me so years ago): it allows resurrection, but will only call the finalizer once. IOW if the resurrected object is GC'ed a second time, its finalizer won't be called. This would require a bit "__del__ already called" on an object, but don't we have a whole word of GC-related flags? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Mon Apr 3 20:55:54 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 11:55:54 -0700 Subject: [Python-Dev] outstanding items for 2.5 In-Reply-To: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> References: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> Message-ID: <ca471dc20604031155u6b13c4el94cb19161afb5d9a@mail.gmail.com> I checked what I owned. - pgen: yes, if I have time - GeneratorExit inheriting from BaseException: no, I've pronounced on this - StopIteration propagation from context managers: I'm giving this to Phillip --Guido On 4/3/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > I updated the PEP to include owners. If this message is sent directly > to you, you are an owner. > > http://www.python.org/dev/peps/pep-0356/ > > There are still some items without owners as I don't know who will be > leading the charge to get some of the modules in the stdlib. If we > don't have anyone pushing them, they won't go in. If they are going > in, they should be committed soon after alpha1 is out the door. 
> Please don't rush to get them in before the alpha, unless you're > absolutely sure it won't screw things up. > > Review the PEP and let me know what needs to be changed. If your pet > project isn't already in the PEP, assume it has been deferred until > 2.6. > > Also, now would be a good time to see if you have any bugs/patches > assigned to you: http://sourceforge.net/my/tracker.php > > Cheers, > n > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From fdrake at acm.org Mon Apr 3 20:56:09 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 3 Apr 2006 14:56:09 -0400 Subject: [Python-Dev] Need Py3k group in trackers In-Reply-To: <ca471dc20604031145y64ddffdan1841cdf294bcaf6f@mail.gmail.com> References: <ca471dc20604031145y64ddffdan1841cdf294bcaf6f@mail.gmail.com> Message-ID: <200604031456.09205.fdrake@acm.org> On Monday 03 April 2006 14:45, Guido van Rossum wrote: > Could one of the tracker admins add a Python-3000 group to the SF > trackers (while we're still using them :-)? This is so we can easily > move proposals between Python 3000 and Python 2.x status. Done. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From nas at arctrix.com Mon Apr 3 21:12:47 2006 From: nas at arctrix.com (Neil Schemenauer) Date: Mon, 3 Apr 2006 19:12:47 +0000 (UTC) Subject: [Python-Dev] reference leaks, __del__, and annotations References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> Message-ID: <e0rs3f$tb4$1@sea.gmane.org> Guido van Rossum <guido at python.org> wrote: > This would require a bit "__del__ already called" on an object, > but don't we have a whole word of GC-related flags? No. Neil From abkhd at hotmail.com Mon Apr 3 21:07:57 2006 From: abkhd at hotmail.com (A.B., Khalid) Date: Mon, 03 Apr 2006 19:07:57 +0000 Subject: [Python-Dev] posixmodule.c patch- revision 43586 Message-ID: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> According to MSDN, ShellExecute has only six parameters: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/shellcc/platform/shell/reference/functions/shellexecute.asp But in the posixmodule patch at: http://mail.python.org/pipermail/python-checkins/2006-April/050698.html it is passed seven: """ rc = ShellExecuteW((HWND)0, operation, PyUnicode_AS_UNICODE(unipath), PyUnicode_AS_UNICODE(woperation), NULL, NULL, SW_SHOWNORMAL); """ Shouldn't that part read as follows? Or am I missing something? """ rc = ShellExecuteW((HWND)0, PyUnicode_AS_UNICODE(woperation), PyUnicode_AS_UNICODE(unipath), NULL, NULL, SW_SHOWNORMAL); """ Regards, Khalid _________________________________________________________________ Don't just search. Find. Check out the new MSN Search! http://search.msn.com/ From aahz at pythoncraft.com Mon Apr 3 21:39:41 2006 From: aahz at pythoncraft.com (Aahz) Date: Mon, 3 Apr 2006 12:39:41 -0700 Subject: [Python-Dev] outstanding items for 2.5 In-Reply-To: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> References: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> Message-ID: <20060403193941.GA17036@panix.com> On Sun, Apr 02, 2006, Neal Norwitz wrote: > > I updated the PEP to include owners. 
If this message is sent directly > to you, you are an owner. > > http://www.python.org/dev/peps/pep-0356/ > > Review the PEP and let me know what needs to be changed. If your pet > project isn't already in the PEP, assume it has been deferred until > 2.6. Per "file() vs open(), round 7" at http://mail.python.org/pipermail/python-dev/2005-December/thread.html#59073 please add to list of planned features and list me as the owner. I'd calle this feature "make open() a factory function instead of an alias for file() (with suitable doc changes)" -- that parenthetical being the blocking factor here.... ;-) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From tim.peters at gmail.com Mon Apr 3 21:42:32 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 15:42:32 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> Message-ID: <1f7befae0604031242n26040335j239242901324e5b6@mail.gmail.com> [Michael Hudson] >> I'm not sure the problem is so much that anyone _wants_ to support >> resurrection in __del__, it's just that it can't be prevented. [Guido] > Well, Java has an answer to that (at least I believe Tim Peters told > me so years ago): it allows resurrection, but will only call the > finalizer once. IOW if the resurrected object is GC'ed a second time, > its finalizer won't be called. Right, that's a technical trick Java uses. Note that it doesn't stop resurrection: all the resurrection-related pitfalls remain. One good result is that cycles containing objects with finalizers don't stop gc progress forever; some progress can always be made, although it may be as little as reclaiming one object per full gc cycle (ignoring that "full gc cycle" is a fuzzy concept in a runs-in-parallel threaded gc). A bad result is an endless stream of nearly-impenetrable articles encouraging deep fear of Java finalizers ;-); e.g., http://www.devx.com/Java/Article/30192/0/page/1 > This would require a bit "__del__ already called" on an object, but don't we have > a whole word of GC-related flags? Nope! You're probably thinking of gc_refs. That's a Py_ssize_t today, and is overloaded to hold, at various times, a status enum (which only needs a few bits) or a copy of the object's refcount (which uses all the bits). From guido at python.org Mon Apr 3 21:42:39 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 12:42:39 -0700 Subject: [Python-Dev] outstanding items for 2.5 In-Reply-To: <20060403193941.GA17036@panix.com> References: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> <20060403193941.GA17036@panix.com> Message-ID: <ca471dc20604031242t1657b92ct195a325acd9fc8fc@mail.gmail.com> Done. What exactly do you plan to do apart from editing the docs to steer people away from file()? 
--Guido On 4/3/06, Aahz <aahz at pythoncraft.com> wrote: > On Sun, Apr 02, 2006, Neal Norwitz wrote: > > > > I updated the PEP to include owners. If this message is sent directly > > to you, you are an owner. > > > > http://www.python.org/dev/peps/pep-0356/ > > > > Review the PEP and let me know what needs to be changed. If your pet > > project isn't already in the PEP, assume it has been deferred until > > 2.6. > > Per "file() vs open(), round 7" at > http://mail.python.org/pipermail/python-dev/2005-December/thread.html#59073 > please add to list of planned features and list me as the owner. I'd > calle this feature "make open() a factory function instead of an alias > for file() (with suitable doc changes)" -- that parenthetical being the > blocking factor here.... ;-) > -- > Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ > > "Look, it's your affair if you want to play with five people, but don't > go calling it doubles." --John Cleese anticipates Usenet > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Mon Apr 3 21:45:10 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 3 Apr 2006 12:45:10 -0700 Subject: [Python-Dev] Saving the hash value of tuples In-Reply-To: <b348a0850604021454i1258e158s84a8dbb998cd7fe3@mail.gmail.com> References: <b348a0850604011232y7223c4bcife0c2808c29774c1@mail.gmail.com> <ca471dc20604021038k4dde46e6tcb03863a555c4dc8@mail.gmail.com> <b348a0850604021454i1258e158s84a8dbb998cd7fe3@mail.gmail.com> Message-ID: <ca471dc20604031245h62b6188aldf5cc1fa4c8ff198@mail.gmail.com> On 4/2/06, Noam Raphael <noamraph at gmail.com> wrote: > On 4/2/06, Guido van Rossum <guido at python.org> wrote: > > > I tried the change, and it turned out that I had to change cPickle a > > > tiny bit: it uses a 2-tuple which is allocated when the module > > > initializes to lookup tuples in a dict. I changed it to properly use > > > PyTuple_New and Py_DECREF, and now the complete test suite passes. I > > > run test_cpickle before the change and after it, and it took the same > > > time (0.89 seconds on my computer). > > > > Not just cPickle. I believe enumerate() also reuses a tuple. > > Maybe it does, but I believe that it doesn't calculate the hash value > of it - otherwise, the test suite would probably have failed. But someone else could. > > > What do you think? I see three possibilities: > > > 1. Nothing should be done, everything is as it should be. > > > 2. The cPickle module should be changed to not abuse the tuple, but > > > there's no reason to add an extra word to the tuple structure and > > > break binary backwards compatibility. > > > 3. Both should be changed. > > > > I'm -1 on the change. Tuples are pretty fundamental in Python and > > hashing them is relatively rare. I think the extra required space for > > all tuples isn't worth the potential savings for some cases. > > That's fine with me. But what about option 2? Perhaps cPickle (and > maybe enumerate) should properly discard their tuples, so that if > someone in the future decides that saving the hash value is a good > idea, he won't encounter strange bugs? At least in cPickle I didn't > notice any loss of speed because of the change, and it's quite > sensible, since there's a tuple-reuse mechanism anyway. 
No, these are carefully considered speed-ups. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.peters at gmail.com Mon Apr 3 21:47:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 15:47:18 -0400 Subject: [Python-Dev] posixmodule.c patch- revision 43586 In-Reply-To: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> References: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> Message-ID: <1f7befae0604031247g151401fexd60374c65550c0c3@mail.gmail.com> [A.B., Khalid] > According to MSDN, ShellExecute has only six parameters: > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/shellcc/platform/shell/reference/functions/shellexecute.asp > > But in the posixmodule patch at: > http://mail.python.org/pipermail/python-checkins/2006-April/050698.html > > it is passed seven: > """ > rc = ShellExecuteW((HWND)0, operation, > PyUnicode_AS_UNICODE(unipath), > PyUnicode_AS_UNICODE(woperation), > NULL, NULL, SW_SHOWNORMAL); > """ > > > Shouldn't that part read as follows? Or am I missing something? > > """ > rc = ShellExecuteW((HWND)0, > PyUnicode_AS_UNICODE(woperation), > PyUnicode_AS_UNICODE(unipath), > NULL, NULL, SW_SHOWNORMAL); > """ Well, _something's_ screwy with it. All the Windows buildbots are unhappy with that statement, giving 3 warnings: \Code\python\Modules\posixmodule.c(7487) : warning C4133: 'function' : incompatible types - from 'char *' to 'LPCWSTR' \Code\python\Modules\posixmodule.c(7490) : warning C4047: 'function' : 'INT' differs in levels of indirection from 'void *' \Code\python\Modules\posixmodule.c(7490) : warning C4020: 'ShellExecuteW' : too many actual parameters It would be worse, except all the Windows buildbot compiles are dying for a different reason: md5c.c c1 : fatal error C1083: Cannot open source file: '\Code\python\Modules\md5c.c': No such file or directory While we're at it, looks like all the 2.4 buildbots are failing test_email today. From foom at fuhm.net Mon Apr 3 21:49:58 2006 From: foom at fuhm.net (James Y Knight) Date: Mon, 3 Apr 2006 15:49:58 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <e0rs3f$tb4$1@sea.gmane.org> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> Message-ID: <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> On Apr 3, 2006, at 3:12 PM, Neil Schemenauer wrote: > Guido van Rossum <guido at python.org> wrote: >> This would require a bit "__del__ already called" on an object, >> but don't we have a whole word of GC-related flags? > > No. Actually there is. Kinda. Currently python's refcounting scheme uses 4 words per object (gc_next, gc_prev, gc_refs, ob_refcnt), and has one spare word in the padding of PyGC_Head that's just sitting there wasting memory. So really it's using up 5 words per object, and that 5th word could actually be used for flags... /* GC information is stored BEFORE the object structure. 
*/ typedef union _gc_head { struct { union _gc_head *gc_next; union _gc_head *gc_prev; int gc_refs; } gc; long double dummy; /* force worst-case alignment */ } PyGC_Head; #define PyObject_HEAD \ _PyObject_HEAD_EXTRA \ int ob_refcnt; \ struct _typeobject *ob_type; typedef struct _object { PyObject_HEAD } PyObject; James From barry at python.org Mon Apr 3 21:53:11 2006 From: barry at python.org (Barry Warsaw) Date: Mon, 03 Apr 2006 15:53:11 -0400 Subject: [Python-Dev] posixmodule.c patch- revision 43586 In-Reply-To: <1f7befae0604031247g151401fexd60374c65550c0c3@mail.gmail.com> References: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> <1f7befae0604031247g151401fexd60374c65550c0c3@mail.gmail.com> Message-ID: <44317D27.6080405@python.org> Tim Peters wrote: > > While we're at it, looks like all the 2.4 buildbots are failing > test_email today. > _______________________________________________ > Anthony backported the patch that should fix this, so it should be showing up in 2.4 buildbots soon. -Barry From brett at python.org Mon Apr 3 21:55:35 2006 From: brett at python.org (Brett Cannon) Date: Mon, 3 Apr 2006 12:55:35 -0700 Subject: [Python-Dev] I'm not getting email from SF when assignedabug/patch In-Reply-To: <e0rg7c$cjg$1@sea.gmane.org> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> <e0pj6g$onr$1@sea.gmane.org> <e0rg7c$cjg$1@sea.gmane.org> Message-ID: <bbaeab100604031255q1ff0224btdfbd71684e04325@mail.gmail.com> On 4/3/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > > the source code is available via the above link; I'll post the ZIP file some- > > where tomorrow (drop me a line if you want the URL). > > I found some free space on the effbot.org server, so anyone inter- > ested can get the current ZIP file here: > > http://effbot.org/tracker-20060403.zip > > the zip file is ~85 megabytes, and expands to about 300 megabyte data. Can someone (Martin, Barry?) post this on python.org (I don't think this necessarily needs to be put into svn and I don't have any access but svn) so Fredrik can free up the space on his server? > there are three tracker directories (for the bugs, patches, and feature re- > quest trackers). for each item, there are at least two files: > > item-NNN.xml (index information, created by getindex.py) > > item-NNN-page.xml (xhtml pages, created by getpages.py) > > where NNN is the tracker item identifier. > > for items that have attached files, there's also one or more > > item-NNN-data-MMM.dat (data files, created by getfiles.py) > > where MMM is a file identifier (referred to by the page files). > > ::: > > the extract module available here: > > http://effbot.python-hosting.com/browser/stuff/sandbox/sourceforge/ > > can be used to extract information from the page.xml files (see the > sanity check code at the end of that file for a usage example). > > to use this, you need ElementTree (a Python 2.5 pre-alpha should work) > and/or cElementTree. > > ::: > > I'll post an export demo script later. > OK, great. I will send out an email to start hashing out what we want in the tracker call soon so we can start working on that while you type up the demo script. 
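Until the demo script lands, here is a bare-bones smoke test for the page files using ElementTree (this is not the extract module, just an illustration; the directory name follows the tracker-105470 naming quoted above, and the import dance tries the standalone cElementTree package first, then the Python 2.5 location):

    import glob

    try:
        import cElementTree as ET                    # standalone package
    except ImportError:
        from xml.etree import cElementTree as ET     # Python 2.5 layout

    for name in glob.glob("tracker-105470/item-*-page.xml"):
        root = ET.parse(name).getroot()
        # real field extraction lives in extract.py; this just proves the
        # pages parse and gives a feel for their size
        print name, root.tag, len(root.getiterator())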
-Brett From tim.peters at gmail.com Mon Apr 3 22:02:06 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 16:02:06 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> Message-ID: <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> [Guido] >>> but don't we have a whole word of GC-related flags? [Neil S] >> No. [James Y Knight] > Actually there is. Kinda. Currently python's refcounting scheme uses > 4 words per object (gc_next, gc_prev, gc_refs, ob_refcnt), and has > one spare word in the padding of PyGC_Head that's just sitting there > wasting memory. Using which compiler? This varies across boxes. Most obviously, on a 64-bit box all these members are 8 bytes (note that ob_refcnt is Py_ssize_t in 2.5, not int anymore), but even on some 32-bit boxes the "long double" trick only forces 4-byte alignment. From tim.peters at gmail.com Mon Apr 3 22:06:17 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 16:06:17 -0400 Subject: [Python-Dev] posixmodule.c patch- revision 43586 In-Reply-To: <44317D27.6080405@python.org> References: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> <1f7befae0604031247g151401fexd60374c65550c0c3@mail.gmail.com> <44317D27.6080405@python.org> Message-ID: <1f7befae0604031306r56dec4a1h386c27e6caa922f3@mail.gmail.com> [Tim] >> While we're at it, looks like all the 2.4 buildbots are failing >> test_email today. [Barry] > Anthony backported the patch that should fix this, so it should be > showing up in 2.4 buildbots soon. ? Anthony's """ Changed by: anthony.baxter Changed at: Mon 03 Apr 2006 16:40:28 Branch: branches/release24-maint Revision: 43597 Changed files: * branches/release24-maint/Lib/email/_parseaddr.py Comments: backport of r43578 The email module's parsedate_tz function now sets the daylight savings flag to -1 (unknown) since it can't tell from the date whether it should be set. patch from Aldo Cortesi """ is in the blamelist for the runs where test_email _started_ failing in 2.4 today. From pje at telecommunity.com Mon Apr 3 23:14:12 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 03 Apr 2006 14:14:12 -0700 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.co m> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> Message-ID: <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> At 08:14 AM 3/31/2006, Tim Peters wrote: >[Phillip J. Eby] > > ... > > As Tim suggested, it'd be better to have the code be generator-specific, at > > least for now. That had actually been my original plan, to make it > > generator-specific, but I was afraid of breaking encapsulation in the > > garbage collector by having it know about generators. 
> >It sucks in a way, but so would adding yet another new slot just for >(at present, and possibly forever) making gc and generators play nicer >together. "Practicality beats purity" here. I'm trying to figure out how to implement this now, and running into a bit of a snag. It's easy enough for gcmodule.c to check if an object is a generator, but I'm not sure how safe the dynamic check actually is, since it depends on the generator's state. In principle, running other finalizers could cause the generator's state to change from a finalizer being required to not being required, or vice versa. Could this mess up the GC process? It seems to me that it's safe for a generator to say, "yes, I need finalization", because if it later turns out not to, it's just a waste. But if the generator says, "no, I don't need finalization", and then later turns out to need it, doesn't that leave an opportunity to screw things up if the GC does anything other than immediately clear the generator? As best I can tell, the only things that could cause arbitrary Python code to run, that could possibly result in generator state changes, are structure traversal and weakref callback handling. It's probably not sane for anybody to have structure traversal run arbitrary Python code, so I'm going to ignore that for the sake of my own sanity. :) Weakref callbacks are tougher, though; it seems possible that you could have one of those cause a generator to be advanced to a point where it now needs finalization. OTOH, if such a generator could be advanced by the callback, then wouldn't that mean the generator is reachable, and ergo, not garbage? That is, since only reachable weakref callbacks are run, they must by definition be unable to access any generator that declared itself finalizer-free. It does seem possible you could end up with a situation where an object with a finalizer is called after a generator it references is torn down, but that circumstance can occur in earlier versions of Python anyway, and in fact this behavior would be consistent. Okay, I *think* I've convinced myself that a dynamic state check is OK, but I'm hoping somebody with more GC experience can check my reasoning here for holes. From martin at v.loewis.de Mon Apr 3 23:51:00 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 03 Apr 2006 23:51:00 +0200 Subject: [Python-Dev] Renaming sqlite3 In-Reply-To: <443125EB.7020403@python.net> References: <4430C86F.8090507@v.loewis.de> <443125EB.7020403@python.net> Message-ID: <443198C4.6050301@v.loewis.de> Thomas Heller wrote: > But if you make the change to implement option 3, IMO it would be a > good idea to add the Python version number to the .pyd basename as > well. Can you please elaborate? In the name of what .pyd file do you want the Python version number? And why? And why is that related to not supporting extensions with .DLL names anymore? > pywin32 already has to do this, since pythoncomXY.pyd and > pywintypesXY.pyd have to live (if possible) in the windows directory. I think this is a very special case: it could have been implemented with separate DLLs which just provide the COM entry points, and find the location of pythoncom.pyd from the registry. I would discourage people from providing additional entry points to an extension module. 
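Back on the .DLL question: the set of suffixes the import machinery is willing to treat as extension modules can be inspected with imp.get_suffixes(), and option 3 effectively means '.dll' disappears from that list on Windows:

    import imp

    for suffix, mode, kind in imp.get_suffixes():
        if kind == imp.C_EXTENSION:
            print suffix    # on Windows today this shows both '.pyd' and '.dll'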
Regards, Martin From tdelaney at avaya.com Tue Apr 4 00:32:43 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Tue, 4 Apr 2006 08:32:43 +1000 Subject: [Python-Dev] SF #1462485 - StopIteration raised in body of 'with' statement suppressed Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E63A@au3010avexu1.global.avaya.com> Guido van Rossum wrote: >> I can't confirm right now (at work, need to install 2.5) but I'm also >> wondering what will happen if KeyboardInterrupt or SystemExit is >> raised from inside the generator when it's being closed via >> __exit__. I suspect a RuntimeError will be raised, whereas I think >> these should pass through. > > I see no reason for this with the current code. Perhaps a previous > version of contextlib.py had this problem? Nah - that was me mis-remembering the contextlib code. They're handled properly. Tim Delaney From anthony at interlink.com.au Tue Apr 4 02:43:37 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Tue, 4 Apr 2006 10:43:37 +1000 Subject: [Python-Dev] posixmodule.c patch- revision 43586 In-Reply-To: <1f7befae0604031306r56dec4a1h386c27e6caa922f3@mail.gmail.com> References: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> <44317D27.6080405@python.org> <1f7befae0604031306r56dec4a1h386c27e6caa922f3@mail.gmail.com> Message-ID: <200604041043.40151.anthony@interlink.com.au> On Tuesday 04 April 2006 06:06, Tim Peters wrote: > backport of r43578 > The email module's parsedate_tz function now sets the daylight > savings flag to -1 (unknown) since it can't tell from the date > whether it should be set. > patch from Aldo Cortesi > """ > > is in the blamelist for the runs where test_email _started_ failing > in 2.4 today. Damnit. I see you fixed this. Sorry about that. From greg.ewing at canterbury.ac.nz Tue Apr 4 03:08:44 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 04 Apr 2006 13:08:44 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <2mlkunt7a8.fsf@starship.python.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> Message-ID: <4431C71C.2080703@canterbury.ac.nz> Michael Hudson wrote: > And if we want to have a version of __del__ that can't reference > 'self', we have it already: weakrefs with callbacks. Does that actually work at the moment? Last I heard, there was some issue with gc and weakref callbacks as well. Has that been resolved? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) 
| greg.ewing at canterbury.ac.nz +--------------------------------------+ From greg.ewing at canterbury.ac.nz Tue Apr 4 03:49:03 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 04 Apr 2006 13:49:03 +1200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <44313E5E.6040003@livinglogic.de> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> <44312B58.3090900@gmail.com> <44313E5E.6040003@livinglogic.de> Message-ID: <4431D08F.8070603@canterbury.ac.nz> Walter D?rwald wrote: > OK, the property setter does a "% 7" now. (But the global > setfirstweekday() still does a range check). Wouldn't it be better for the setter to raise an exception if it's out of range? It probably indicates a bug in the caller's code. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From tim.peters at gmail.com Tue Apr 4 04:05:57 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 22:05:57 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> Message-ID: <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> [Phillip J. Eby] > I'm trying to figure out how to implement this now, and running into > a bit of a snag. It's easy enough for gcmodule.c to check if an > object is a generator, but I'm not sure how safe the dynamic check > actually is, since it depends on the generator's state. In > principle, running other finalizers could cause the generator's state > to change from a finalizer being required to not being required, or > vice versa. Could this mess up the GC process? Yup, although the tricky question is whether it's possible for other finalizers to do such a thing. > It seems to me that it's safe for a generator to say, "yes, I need finalization", > because if it later turns out not to, it's just a waste. Definitely safe. In effect, that's what happens right now (all generators say "I need finalization" now). > But if the generator says, "no, I don't need finalization", and then later > turns out to need it, doesn't that leave an opportunity to screw things up > if the GC does anything other than immediately clear the generator? > > As best I can tell, the only things that could cause arbitrary Python > code to run, that could possibly result in generator state changes, > are structure traversal and weakref callback handling. And __del__ methods. It's a common misconception that Python's cyclic gc won't ever clean up an object with a __del__ method. It can and routinely does. What it won't do is automagically break a _cycle_ containing an object with a __del__ method. 
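A minimal sketch of the distinction Tim is drawing, under the 2.x-era semantics described here (the class and attribute names below are invented purely for illustration):

    import gc

    class NeedsDel(object):
        def __del__(self):
            pass

    class Plain(object):
        pass

    # A __del__ object merely hanging *off* a collectable cycle is reclaimed
    # normally: breaking the cycle drops its refcount to 0 and __del__ runs.
    a, b = Plain(), Plain()
    a.peer, b.peer = b, a      # a <-> b cycle, no finalizers
    a.extra = NeedsDel()       # reachable from the cycle, not part of it
    del a, b
    gc.collect()
    print(gc.garbage)          # []

    # A __del__ object *inside* a cycle is different: gc refuses to break
    # the cycle, and the whole thing is parked in gc.garbage instead.
    c = NeedsDel()
    c.me = c
    del c
    gc.collect()
    print(gc.garbage)          # [<__main__.NeedsDel object at 0x...>]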
It's quite possible to have any number of objects with __del__ methods reachable only _from_ a trash cycle containing no objects with __del__ methods, where those __del__-slinging objects are not themselves in a cycle. gc will break that cycle, and the __del__ methods on trash objects "hanging off" that cycle will get invoked as a normal side effect of their objects' refcounts falling to 0. >From a different POV, Python's gc never reclaims anything directly -- all it does is break cycles via calling tp_clear on trash objects, and whatever (if any) reclamation gets done happens as a side effect of Py_DECREF. Specifically, this one in delete_garbage(): if ((clear = op->ob_type->tp_clear) != NULL) { Py_INCREF(op); clear(op); Py_DECREF(op); } If it wouldn't spoil the fun, I'd be tempted to add a comment pointing out that the entire purpose of gcmodule.c is to execute that Py_DECREF safely :-) > It's probably not sane for anybody to have structure traversal run arbitrary Python > code, I'm not sure what you mean by "structure traversal". The only kind of traversing that should be going on during gc is running tp_traverse slots, and although I doubt it's written down anywhere, a tp_traverse slot shouldn't even do an incref, let alone call back into Python. A tp_traverse slot dare not release the GIL either. That last one is a subtlety that takes fixing a few critical bugs to fully appreciate: as soon as anything can call Python code, all bets are off, because any number of other threads can run then too, and do _almost_ anything whatsoever to the object graph. In particular, that Py_DECREF() above can trigger a chain of code that releases the GIL, so by the time we get to that loop it has to be impossible for any conceivable Python code to create any new problems for gc. > so I'm going to ignore that for the sake of my own sanity. :) Weakref callbacks > are tougher, though; it seems possible that you could have one of those cause a > generator to be advanced to a point where it now needs finalization. Not a _trash_ generator, though. While much of gc's behavior wrt weakref callbacks is more-than-less arbitrary, and so may change some day, for now a wr callback to a trash object is suppressed by gc if any trash objects are reachable from that callback. > OTOH, if such a generator could be advanced by the callback, then > wouldn't that mean the generator is reachable, Yes, but you have to qualify "reachable" to "reachable from the callback". > and ergo, not garbage? If the callback is itself trash, no, then G being reachable from the callback is not enough evidence to conclude that G is not garbage. The horrid bugs we've had come from things "just like that": messy interconnections among objects that _all_ look like trash. When they're in cycles, they can reach each other, and so their finalizers can see each other too, trash or not. We already endure lots of pain to ensure that a weakref callback that gets executed (not all do) can't see anything that looks like trash. > That is, since only reachable weakref callbacks are run, s/reachable/non-trash/ and that's true today. > they must by definition be unable to access any generator that > declared itself finalizer-free. Any trash object, period. > It does seem possible you could end up with a situation where an > object with a finalizer is called after a generator it references is > torn down, but that circumstance can occur in earlier versions of > Python anyway, and in fact this behavior would be consistent. That shouldn't be possible. 
Because the only reclamation done by gc is via Py_DECREF side effects, objects not in cycles are torn down in a topological-sort order of the "points-to" relation. If A points to B (B is directly reachable from A), and in the absence of cycles, and with everything driven by Py_DECREF, B's refcount can't fall to 0 before A's does. Therefore B is wholly intact when A's finalizer (if any) is invoked. That's a great "hidden" benefit of refcount-driven reclamation.

If A and B are in a cycle, and A has a finalizer, then gc refuses to call tp_clear on either of them, and neither refcount falls to 0, so A's finalizer doesn't run at all.

> Okay, I *think* I've convinced myself that a dynamic state check is
> OK, but I'm hoping somebody with more GC experience can check my
> reasoning here for holes.

Let's take a peek at __del__ methods:

    C1 <-> C2 -> D -> G -> A

C1 and C2 don't have finalizers and are in a cycle. D has a __del__ method. G is a generator that says "I don't need finalization". Suppose they're all trash, and these are all the objects that exist.

gc moves D and everything transitively reachable from D to a special `finalizers` list. Only C1 and C2 are in the list of things gc will invoke tp_clear on. Say it does C1.tp_clear() first. That does Py_DECREF(C2) as a side effect. That in turn does Py_DECREF(D) as a side effect, and D.__del__() is invoked. Since G is reachable from D, __del__ may change G to a state where finalization is needed. But it doesn't seem to matter, since G and A weren't in the tp_clear candidate list to begin with.

More generally, gc will not invoke tp_clear on anything transitively reachable from any object with a __del__ method. So if a generator is reachable from a __del__, gc won't invoke tp_clear on anything reachable from the generator. If the generator gets cleaned up at all, it's via "ordinary" Py_DECREF side effects, so nothing reachable from the generator will vanish either before the generator goes away. If the generator decides it needs to finalize after all, doesn't seem like it matters. Unless ...

    C1 <-> C2 -> D -> G <-> A   [G and A are also in a cycle now]

Now D.__del__ may or may not advance G to a "needs finalization" state, but D decref'ing G no longer drops G's refcount to 0 regardless. This _round_ of gc won't break the G<->A cycle regardless (since everything reachable from D is exempt from tp_clear). The G<->A cycle will end up in an older generation.

Some number of gc rounds later (when the older generation containing G<->A gets collected), the cycle will be broken if G doesn't say it needs finalization, or G will be moved to gc.garbage if G says it does need finalization. In either case, D is long gone so can't make more trouble.

No real harm there either -- although it may be surprising that G<->A collection (when possible) gets delayed, that's always been true of trash cycles hanging off a trash object with a __del__ method hanging off a reclaimable trash cycle. What is new is that G won't wind up in gc.garbage during the first round of gc if D.__del__() pushes G to a "needs finalization" state.
Looks safe to me ;-) From tim.peters at gmail.com Tue Apr 4 04:35:24 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 3 Apr 2006 22:35:24 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <2mlkunt7a8.fsf@starship.python.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> Message-ID: <1f7befae0604031935n46aaedf3le41389c8ea0dace8@mail.gmail.com> [Michael Hudson] > ... > What happened to the 'get rid of __del__ in py3k' idea? Apart from its initial mention, every now & again someone asks what happened to it :-). From greg.ewing at canterbury.ac.nz Tue Apr 4 05:45:58 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 04 Apr 2006 15:45:58 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> Message-ID: <4431EBF6.9000706@canterbury.ac.nz> Tim Peters wrote: > We already endure lots of pain to ensure that a weakref callback that > gets executed (not all do) can't see anything that looks like trash. Okay, so would it be possible for a generator that needs finalisation to set up a weakref callback, suitably rooted somewhere so that the callback is reachable, that references enough stuff to clean up after the generator, without referencing the generator itself? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From nnorwitz at gmail.com Tue Apr 4 06:01:57 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 3 Apr 2006 20:01:57 -0800 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? In-Reply-To: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> Message-ID: <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> On 4/3/06, Zachary Pincus <zpincus at stanford.edu> wrote: > > Sorry if it's bad form to ask about patches one has submitted -- let > me know if that sort of discussion should be kept strictly on the > patch tracker. No, it's fine. Thanks for reminding us about this issue. Unfortunately, without an explicit ok from one of the Mac maintainers, I don't want to add this myself. If you can get Bob, Ronald, or Jack to say ok, I will apply the patch ASAP. I have a Mac OS X.4 box and can test it, but don't know the suitability of the patch. 
n From exarkun at divmod.com Tue Apr 4 06:14:24 2006 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Tue, 4 Apr 2006 00:14:24 -0400 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: 0 Message-ID: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> I tried out Twisted's test suite with a version of Python built from SVN trunk today and ran into a few problems. First, the test suite hung indefinitely using all available CPU time. This apparently was due to a change in the behavior of __import__: in Python 2.4, __import__('') raises a ValueError; in Python 2.5, it returns None. Once I hacked around this, the test suite ran to completion, though with over fifty failures. Some of these appear to be related to the conversion of the exception hierarchy to new-style classes, but I have not yet had a chance to examine them closely. Once I do have time to track down specifics, I'll file tickets as appropriate. For now I just wanted to point out the one detail I have tracked down, and give a heads up that there are likely some more to come. Of course anyone who is interested can run the Twisted test suite very easily and take a look at the failures themselves (if you have Twisted installed, "trial twisted" will do it). Jean-Paul From zpincus at stanford.edu Tue Apr 4 06:25:46 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Mon, 3 Apr 2006 23:25:46 -0500 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? In-Reply-To: <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> Message-ID: <3E35807A-DFA1-45EB-9FAB-6B1810E08118@stanford.edu> > Thanks for reminding us about this issue. > Unfortunately, without an explicit ok from one of the Mac maintainers, > I don't want to add this myself. If you can get Bob, Ronald, or Jack > to say ok, I will apply the patch ASAP. I have a Mac OS X.4 box and > can test it, but don't know the suitability of the patch. Fair enough -- this seems reasonable. Now, there is one issue with this all that some general feedback from Python-Dev would be helpful with: how best to test such a patch? Specifically, this patch would change a core python code path. Now, I can see no reason why it would break anything -- but we know how flimsy such arguments are. More strong evidence is that python builds and tests flawlessly with this patch. Given that many of the tests involve loading C extension libs, that's a good sign. Moreover, I've been using patched versions of 2.4 and 2.5 for some time, and loading fairly extensive libs (numpy/scipy, as well as the more exotic extensions that drove me to uncover this problem before), all without issue. But it would be good to have a specific benchmark to know nothing will break. I personally sort of feel that if dlopen() works once or twice, it will probably always work, but there are those who probably understand better the failure modes of opening shared libs as python extensions, and could suggest some good things to test. Any thoughts? Zach From martin at v.loewis.de Tue Apr 4 08:29:44 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 08:29:44 +0200 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? 
In-Reply-To: <3E35807A-DFA1-45EB-9FAB-6B1810E08118@stanford.edu> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> <3E35807A-DFA1-45EB-9FAB-6B1810E08118@stanford.edu> Message-ID: <44321258.2010101@v.loewis.de> Zachary Pincus wrote: > Specifically, this patch would change a core python code path. Why do you think so? I believe Python always passes absolute paths to dlopen, so any path resolution dlopen might do should be irrelevant. *If* you can get dlopen to look at directories outside sys.path, that would be a serious problem. You can use ktrace to find out what places it looks at. > But it would be good to have a specific benchmark to know nothing > will break. I personally sort of feel that if dlopen() works once or > twice, it will probably always work, but there are those who probably > understand better the failure modes of opening shared libs as python > extensions, and could suggest some good things to test. Running the test suite should already exercise this code a lot. You should run the test suite both in "working copy" mode, and in "make install" mode; if you know how to produce a Mac installer, testing it in such an installer ("framework mode"?) could also be done. Regards, Martin From martin at v.loewis.de Tue Apr 4 08:32:53 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 08:32:53 +0200 Subject: [Python-Dev] posixmodule.c patch- revision 43586 In-Reply-To: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> References: <BAY21-F5886114970151A5D84FA5ABD50@phx.gbl> Message-ID: <44321315.7000701@v.loewis.de> A.B., Khalid wrote: > Shouldn't that part read as follows? Or am I missing something? > > """ > rc = ShellExecuteW((HWND)0, > PyUnicode_AS_UNICODE(woperation), > PyUnicode_AS_UNICODE(unipath), > NULL, NULL, SW_SHOWNORMAL); > """ That's certainly better, though not correct, yet: woperation might be NULL. So I fixed all that (I hope). Regards, Martin From martin at v.loewis.de Tue Apr 4 08:37:38 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 08:37:38 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> Message-ID: <44321432.6040005@v.loewis.de> Jean-Paul Calderone wrote: > Once I do have time to track down specifics, I'll file tickets as > appropriate. For now I just wanted to point out the one detail I > have tracked down, and give a heads up that there are likely some > more to come. > > Of course anyone who is interested can run the Twisted test suite > very easily and take a look at the failures themselves (if you have > Twisted installed, "trial twisted" will do it). I think most of us would appreciate if you could file bug reports, or (even better) propose patches; the hope is certainly that more people start looking into compatibility of 2.5 after the first alpha is released. I would personally expect that the best feedback comes from people maintaining a large Python package or application, such as Twisted. Each of the incompatibilities might include tricky issues, where we have to find out whether Python is wrong for changing it, or the application is wrong for expecting things to behave exactly this way (or nobody is wrong, if this was an intentional breakage). 
Regards, Martin From mwh at python.net Tue Apr 4 08:48:38 2006 From: mwh at python.net (Michael Hudson) Date: Tue, 04 Apr 2006 07:48:38 +0100 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <4431C71C.2080703@canterbury.ac.nz> (Greg Ewing's message of "Tue, 04 Apr 2006 13:08:44 +1200") References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> Message-ID: <2mwte5sunt.fsf@starship.python.net> Greg Ewing <greg.ewing at canterbury.ac.nz> writes: > Michael Hudson wrote: > >> And if we want to have a version of __del__ that can't reference >> 'self', we have it already: weakrefs with callbacks. > > Does that actually work at the moment? Last I heard, > there was some issue with gc and weakref callbacks > as well. Has that been resolved? Talk about FUD. Yes, it works, as far as I know. Cheers, mwh -- <lament> Slashdot karma, unfortunately, is not real karma, because it doesn't involve the death of the people who have it -- from Twisted.Quotes From mwh at python.net Tue Apr 4 08:49:05 2006 From: mwh at python.net (Michael Hudson) Date: Tue, 04 Apr 2006 07:49:05 +0100 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0604031935n46aaedf3le41389c8ea0dace8@mail.gmail.com> (Tim Peters's message of "Mon, 3 Apr 2006 22:35:24 -0400") References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <1f7befae0604031935n46aaedf3le41389c8ea0dace8@mail.gmail.com> Message-ID: <2mslotsun2.fsf@starship.python.net> "Tim Peters" <tim.peters at gmail.com> writes: > [Michael Hudson] >> ... >> What happened to the 'get rid of __del__ in py3k' idea? > > Apart from its initial mention, every now & again someone asks what > happened to it :-). Good enough for me :) Cheers, mwh (not subscribed to python-3000) -- You're going to have to remember that I still think of Twisted as a big multiplayer game, and all this HTTP stuff is just kind of a grotty way to display room descriptions. 
-- Glyph Lefkowitz From nnorwitz at gmail.com Tue Apr 4 10:01:31 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Apr 2006 00:01:31 -0800 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <2mwte5sunt.fsf@starship.python.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net> Message-ID: <ee2a432c0604040101l2f1eda2fu26aba5950b23c41e@mail.gmail.com> On 4/3/06, Michael Hudson <mwh at python.net> wrote: > Greg Ewing <greg.ewing at canterbury.ac.nz> writes: > > > Michael Hudson wrote: > > > >> And if we want to have a version of __del__ that can't reference > >> 'self', we have it already: weakrefs with callbacks. > > > > Does that actually work at the moment? Last I heard, > > there was some issue with gc and weakref callbacks > > as well. Has that been resolved? > > Talk about FUD. Yes, it works, as far as I know. Not sure if everyone is talking about the same thing. This is still a problem (at least for me): http://svn.python.org/projects/python/trunk/Lib/test/crashers/weakref_in_del.py It creates a weakref to self in __del__. There are 7 crashers, plus 5 more due to infinite recursion. :-( That doesn't include the parts of test_trace that are commented out. At least test_trace needs to be fixed prior to 2.5. n From walter at livinglogic.de Tue Apr 4 10:27:35 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Tue, 04 Apr 2006 10:27:35 +0200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <4431D08F.8070603@canterbury.ac.nz> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> <44312B58.3090900@gmail.com> <44313E5E.6040003@livinglogic.de> <4431D08F.8070603@canterbury.ac.nz> Message-ID: <44322DF7.1040603@livinglogic.de> Greg Ewing wrote: > Walter D?rwald wrote: > >> OK, the property setter does a "% 7" now. (But the global >> setfirstweekday() still does a range check). > > Wouldn't it be better for the setter to raise an exception > if it's out of range? It probably indicates a bug in the > caller's code. The day before Monday is -1, so it adds a little convenience. If this convenience is really worth it is a completely different topic. I think we've spent more time discussing the calendar module, than the Python community has spent using it! ;) Bye, Walter D?rwald From theller at python.net Tue Apr 4 10:32:03 2006 From: theller at python.net (Thomas Heller) Date: Tue, 04 Apr 2006 10:32:03 +0200 Subject: [Python-Dev] Renaming sqlite3 In-Reply-To: <443198C4.6050301@v.loewis.de> References: <4430C86F.8090507@v.loewis.de> <443125EB.7020403@python.net> <443198C4.6050301@v.loewis.de> Message-ID: <e0tau3$l02$1@sea.gmane.org> Martin v. L?wis wrote: > Thomas Heller wrote: >> But if you make the change to implement option 3, IMO it would be a >> good idea to add the Python version number to the .pyd basename as >> well. > > Can you please elaborate? 
In the name of what .pyd file do you want > the Python version number? And why? Since on Windows binary extensions have to be compiled for the exact major version of Python, it seems logical (to me, at least), to include that version number into the filename. Say, _socket25.pyd. > And why is that related to not supporting extensions with .DLL names > anymore? IMO changing that would require a PEP (but I may be wrong), and this is only another idea which should be considered when writing or discussing that. If you don't like the idea, or don't see any advantages in this, I retract the request. >> pywin32 already has to do this, since pythoncomXY.pyd and >> pywintypesXY.pyd have to live (if possible) in the windows >> directory. > > I think this is a very special case: it could have been implemented > with separate DLLs which just provide the COM entry points, and find > the location of pythoncom.pyd from the registry. I would discourage > people from providing additional entry points to an extension module. > > > Regards, Martin _______________________________________________ > Python-Dev mailing list Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: > http://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org > > From mwh at python.net Tue Apr 4 14:19:36 2006 From: mwh at python.net (Michael Hudson) Date: Tue, 04 Apr 2006 13:19:36 +0100 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <ee2a432c0604040101l2f1eda2fu26aba5950b23c41e@mail.gmail.com> (Neal Norwitz's message of "Tue, 4 Apr 2006 00:01:31 -0800") References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net> <ee2a432c0604040101l2f1eda2fu26aba5950b23c41e@mail.gmail.com> Message-ID: <2mk6a5sfc7.fsf@starship.python.net> "Neal Norwitz" <nnorwitz at gmail.com> writes: > On 4/3/06, Michael Hudson <mwh at python.net> wrote: >> Greg Ewing <greg.ewing at canterbury.ac.nz> writes: >> >> > Michael Hudson wrote: >> > >> >> And if we want to have a version of __del__ that can't reference >> >> 'self', we have it already: weakrefs with callbacks. >> > >> > Does that actually work at the moment? Last I heard, >> > there was some issue with gc and weakref callbacks >> > as well. Has that been resolved? >> >> Talk about FUD. Yes, it works, as far as I know. > > Not sure if everyone is talking about the same thing. This is still a > problem (at least for me): > http://svn.python.org/projects/python/trunk/Lib/test/crashers/weakref_in_del.py > > It creates a weakref to self in __del__. Yes, but that has nothing to do with the cycle collector. I even have a way to fix it, but I don't know if it breaks anything else... Cheers, mwh -- I wouldn't trust the Anglo-Saxons for much anything else. Given they way English is spelled, who could trust them on _anything_ that had to do with writing things down, anyway? -- Erik Naggum, comp.lang.lisp From zpincus at stanford.edu Tue Apr 4 15:38:21 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Tue, 4 Apr 2006 08:38:21 -0500 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? 
In-Reply-To: <44321258.2010101@v.loewis.de> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> <3E35807A-DFA1-45EB-9FAB-6B1810E08118@stanford.edu> <44321258.2010101@v.loewis.de> Message-ID: <C27A7DC1-E763-48AB-BEFE-4DFB391EF957@stanford.edu> On Apr 4, 2006, at 1:29 AM, Martin v. L?wis wrote: > Zachary Pincus wrote: >> Specifically, this patch would change a core python code path. > > Why do you think so? I believe Python always passes absolute paths > to dlopen, so any path resolution dlopen might do should be > irrelevant. > *If* you can get dlopen to look at directories outside sys.path, > that would be a serious problem. You can use ktrace to find out > what places it looks at. Sorry, I meant path in a more metaphoric sense; e.g. on OS X / Darwin, Python used to go through the code in dynload_next.c every time it loaded an extension, now under the proposed patch it goes through dynload_shlib.c. > >> But it would be good to have a specific benchmark to know nothing >> will break. I personally sort of feel that if dlopen() works once or >> twice, it will probably always work, but there are those who probably >> understand better the failure modes of opening shared libs as python >> extensions, and could suggest some good things to test. > > Running the test suite should already exercise this code a lot. > You should run the test suite both in "working copy" mode, and > in "make install" mode; if you know how to produce a Mac installer, > testing it in such an installer ("framework mode"?) could also > be done. I'll do this, thanks. Zach From aleaxit at gmail.com Tue Apr 4 16:52:28 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Tue, 4 Apr 2006 07:52:28 -0700 Subject: [Python-Dev] tally (and other accumulators) Message-ID: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> It's a bit late for 2.5, of course, but, I thought I'd propose it anyway -- I noticed it on c.l.py. In 2.3/2.4 we have many ways to generate and process iterators but few "accumulators" -- functions that accept an iterable and produce some kind of "summary result" from it. sum, min, max, for example. And any, all in 2.5. The proposed function tally accepts an iterable whose items are hashable and returns a dict mapping each item to its count (number of times it appears). This is quite general and simple at the same time: for example, it was proposed originally to answer some complaint about any and all giving no indication of the count of true/false items: tally(bool(x) for x in seq) would give a dict with two entries, counts of true and false items. Just like the other accumulators mentioned above, tally is simple to implement, especially with the new collections.defaultdict: import collections def tally(seq): d = collections.defaultdict(int) for item in seq: d[item] += 1 return dict(d) Nevertheless, simplicity and generality make it advisable to supply it as part of the standard library (location TBD). A good alternative would be a classmethod tally within collections.defaultdict, building and returning a defaultdict as above (with a .factory left to int, for further possible use as a 'bag'/multiset data structure); this would solve the problem of where to locate tally if it were to be a function. defaultdict.tally would be logically quite similar to dict.fromkeys, except that keys happening repeatedly get counted (and so each associated to a value of 1 and upwards) rather than "collapsed". 
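For concreteness, a rough sketch of the classmethod variant described above -- hypothetical, since collections.defaultdict has no such constructor; a subclass is used here purely to illustrate the idea:

    import collections

    class tallydict(collections.defaultdict):
        @classmethod
        def tally(cls, iterable):
            d = cls(int)            # default_factory stays int, so the result
            for item in iterable:   # keeps working as a bag/multiset afterwards
                d[item] += 1
            return d

    counts = tallydict.tally("abracadabra")
    print(counts["a"])   # 5
    print(counts["x"])   # 0 -- still a defaultdict with an int factory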
Alex From jeremy at alum.mit.edu Tue Apr 4 17:01:05 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 4 Apr 2006 11:01:05 -0400 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> Message-ID: <e8bf7a530604040801o5181b501ia6f53c6ed54b304@mail.gmail.com> On 4/4/06, Alex Martelli <aleaxit at gmail.com> wrote: > import collections > def tally(seq): > d = collections.defaultdict(int) > for item in seq: > d[item] += 1 > return dict(d) > > Nevertheless, simplicity and generality make it advisable to supply > it as part of the standard library (location TBD). > > A good alternative would be a classmethod tally within > collections.defaultdict, building and returning a defaultdict as > above (with a .factory left to int, for further possible use as a > 'bag'/multiset data structure); this would solve the problem of where > to locate tally if it were to be a function. defaultdict.tally would > be logically quite similar to dict.fromkeys, except that keys > happening repeatedly get counted (and so each associated to a value > of 1 and upwards) rather than "collapsed". Putting it somewhere in collections seems like a great idea. defaultdict is a bit odd, because the functionality doesn't have anything to do with defaults, just dicts. maybe a classmethod on regular dicts would make more sense? I write this function regularly, so I'd be happy to have it available directly. Jeremy From aleaxit at gmail.com Tue Apr 4 17:17:57 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Tue, 4 Apr 2006 08:17:57 -0700 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <e8bf7a530604040801o5181b501ia6f53c6ed54b304@mail.gmail.com> References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> <e8bf7a530604040801o5181b501ia6f53c6ed54b304@mail.gmail.com> Message-ID: <10537655-2A9E-4C43-8FB3-E02C0B831F9E@gmail.com> On Apr 4, 2006, at 8:01 AM, Jeremy Hylton wrote: > On 4/4/06, Alex Martelli <aleaxit at gmail.com> wrote: >> import collections >> def tally(seq): >> d = collections.defaultdict(int) >> for item in seq: >> d[item] += 1 >> return dict(d) ... > Putting it somewhere in collections seems like a great idea. > defaultdict is a bit odd, because the functionality doesn't have > anything to do with defaults, just dicts. maybe a classmethod on > regular dicts would make more sense? Good points: it should probably be a classmethod on dict, or a function in module collections. > I write this function regularly, so I'd be happy to have it > available directly. Heh, same here -- soon as I saw it proposed on c.l.py I recognized an old friend and it struck me that, simple but widely used, it should be somewhere in the standard library. 
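And a comparable sketch of the other option just mentioned, a classmethod on plain dict in the style of dict.fromkeys (again hypothetical -- no such method exists today):

    class tdict(dict):
        @classmethod
        def tally(cls, iterable):
            d = cls()
            for item in iterable:
                d[item] = d.get(item, 0) + 1
            return d

    print(tdict.tally("mississippi")["s"])   # 4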
Alex

From tim.peters at gmail.com Tue Apr 4 18:01:33 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Tue, 4 Apr 2006 12:01:33 -0400
Subject: [Python-Dev] reference leaks, __del__, and annotations
In-Reply-To: <2mwte5sunt.fsf@starship.python.net>
References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net>
Message-ID: <1f7befae0604040901v34f52664w118f6e506a9dd979@mail.gmail.com>

[Michael Hudson]
>>> And if we want to have a version of __del__ that can't reference
>>> 'self', we have it already: weakrefs with callbacks.

[Greg Ewing]
>> Does that actually work at the moment? Last I heard,
>> there was some issue with gc and weakref callbacks
>> as well. Has that been resolved?

[Michael]
> Talk about FUD. Yes, it works, as far as I know.

I'm sure Greg has in mind this thread (which was in fact also the thread that floated the idea of getting rid of __del__ in P3K):

http://mail.python.org/pipermail/python-dev/2004-November/049744.html

As that said, some weakref gc semantics are pretty arbitrary now, and it gave two patches that implemented distinct semantic variants. A problem is that the variant semantics also seem pretty arbitrary ;-), and there's a dearth of compelling use cases to guide a decision. If someone devoted enough time to seriously trying to get rid of __del__, I suspect compelling use cases would arise. I never use __del__ anyway, so my motivation to spend time on it is hard to detect.

From goodger at python.org Tue Apr 4 18:03:37 2006
From: goodger at python.org (David Goodger)
Date: Tue, 04 Apr 2006 12:03:37 -0400
Subject: [Python-Dev] r43613 - python/trunk/Doc/lib/libcsv.tex
In-Reply-To: <443222F7.3020800@livinglogic.de>
References: <20060404030544.D80A81E4013@bag.python.org> <443222F7.3020800@livinglogic.de>
Message-ID: <443298D9.9010002@python.org>

[regarding the examples in the last section of libcsv.tex]

[Walter Dörwald]
> These classes will have problems with various non-charmap encodings.
> (E.g. the writer will write multiple BOMS for UTF-16). IMHO it would be
> better to use the new incremental codecs in these examples.

Could you fix it? I'm not familiar with the incremental codecs.

-- David Goodger <http://python.net/~goodger>

From walter at livinglogic.de Tue Apr 4 18:07:09 2006
From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=)
Date: Tue, 04 Apr 2006 18:07:09 +0200
Subject: [Python-Dev] r43613 - python/trunk/Doc/lib/libcsv.tex
In-Reply-To: <443298D9.9010002@python.org>
References: <20060404030544.D80A81E4013@bag.python.org> <443222F7.3020800@livinglogic.de> <443298D9.9010002@python.org>
Message-ID: <443299AD.7030206@livinglogic.de>

David Goodger wrote:
> [regarding the examples in the last section of libcsv.tex]
>
> [Walter Dörwald]
>> These classes will have problems with various non-charmap encodings.
>> (E.g. the writer will write multiple BOMS for UTF-16).
IMHO it would be >> better to use the new incremental codecs in these examples. > > Could you fix it? I'm not familiar with the incremental codecs. Sure! (Fixing the writer is simple, fixing the reader takes a bit more code) Bye, Walter D?rwald From thomas at python.org Tue Apr 4 18:34:09 2006 From: thomas at python.org (Thomas Wouters) Date: Tue, 4 Apr 2006 18:34:09 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> Message-ID: <9e804ac0604040934l45ed172ch956049a9dd9365f4@mail.gmail.com> On 4/4/06, Jean-Paul Calderone <exarkun at divmod.com> wrote: > > This apparently was due to a change in the behavior of __import__: in > Python 2.4, __import__('') raises a ValueError; in Python 2.5, it returns > None. Oops, that wasn't intended. I seem to recall seeing that before, and I thought I'd fixed it, but apparently I'm more senile than I thought. It's fixed in the trunk (right before alpha1 deadline ;) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060404/3acf93c0/attachment-0001.htm From ianb at colorstudy.com Tue Apr 4 18:54:09 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Tue, 04 Apr 2006 11:54:09 -0500 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> Message-ID: <4432A4B1.4090409@colorstudy.com> Alex Martelli wrote: > It's a bit late for 2.5, of course, but, I thought I'd propose it > anyway -- I noticed it on c.l.py. > > In 2.3/2.4 we have many ways to generate and process iterators but > few "accumulators" -- functions that accept an iterable and produce > some kind of "summary result" from it. sum, min, max, for example. > And any, all in 2.5. > > The proposed function tally accepts an iterable whose items are > hashable and returns a dict mapping each item to its count (number of > times it appears). > > This is quite general and simple at the same time: for example, it > was proposed originally to answer some complaint about any and all > giving no indication of the count of true/false items: > > tally(bool(x) for x in seq) > > would give a dict with two entries, counts of true and false items. > > Just like the other accumulators mentioned above, tally is simple to > implement, especially with the new collections.defaultdict: > > import collections > def tally(seq): > d = collections.defaultdict(int) > for item in seq: > d[item] += 1 > return dict(d) Or: import collections bag = collections.Bag([1, 2, 3, 2, 1]) assert bag.count(1) == 2 assert bag.count(0) == 0 assert 3 in bag # etc... -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From bob at redivi.com Tue Apr 4 18:57:40 2006 From: bob at redivi.com (Bob Ippolito) Date: Tue, 4 Apr 2006 09:57:40 -0700 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? 
In-Reply-To: <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> Message-ID: <18AED6F0-32EB-4743-9AE9-1E060BA35C93@redivi.com> On Apr 3, 2006, at 9:01 PM, Neal Norwitz wrote: > On 4/3/06, Zachary Pincus <zpincus at stanford.edu> wrote: >> >> Sorry if it's bad form to ask about patches one has submitted -- let >> me know if that sort of discussion should be kept strictly on the >> patch tracker. > > No, it's fine. Thanks for reminding us about this issue. > Unfortunately, without an explicit ok from one of the Mac maintainers, > I don't want to add this myself. If you can get Bob, Ronald, or Jack > to say ok, I will apply the patch ASAP. I have a Mac OS X.4 box and > can test it, but don't know the suitability of the patch. The patch has my OK (I gave it a while ago on pythonmac-sig). -bob From walter at livinglogic.de Tue Apr 4 19:34:54 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Tue, 04 Apr 2006 19:34:54 +0200 Subject: [Python-Dev] r43613 - python/trunk/Doc/lib/libcsv.tex In-Reply-To: <443298D9.9010002@python.org> References: <20060404030544.D80A81E4013@bag.python.org> <443222F7.3020800@livinglogic.de> <443298D9.9010002@python.org> Message-ID: <4432AE3E.4000405@livinglogic.de> David Goodger wrote: > [regarding the examples in the last section of libcsv.tex] > > [Walter D?rwald] >> These classes will have problems with various non-charmap encodings. >> (E.g. the writer will write multiple BOMS for UTF-16). IMHO it would be >> better to use the new incremental codecs in these examples. > > Could you fix it? I'm not familiar with the incremental codecs. Done (in r43642). The new classes might be even useful as real code in the stdlib. Of course this isn't full Unicode support for csv yet, because the delimiter, quote and lineterminator characters can't be Unicode characters yet. Bye, Walter D?rwald From g.brandl at gmx.net Tue Apr 4 19:35:50 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 04 Apr 2006 19:35:50 +0200 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <10537655-2A9E-4C43-8FB3-E02C0B831F9E@gmail.com> References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> <e8bf7a530604040801o5181b501ia6f53c6ed54b304@mail.gmail.com> <10537655-2A9E-4C43-8FB3-E02C0B831F9E@gmail.com> Message-ID: <e0uapn$kln$1@sea.gmane.org> Alex Martelli wrote: > On Apr 4, 2006, at 8:01 AM, Jeremy Hylton wrote: > >> On 4/4/06, Alex Martelli <aleaxit at gmail.com> wrote: >>> import collections >>> def tally(seq): >>> d = collections.defaultdict(int) >>> for item in seq: >>> d[item] += 1 >>> return dict(d) > ... >> Putting it somewhere in collections seems like a great idea. >> defaultdict is a bit odd, because the functionality doesn't have >> anything to do with defaults, just dicts. maybe a classmethod on >> regular dicts would make more sense? > > Good points: it should probably be a classmethod on dict, or a > function in module collections. > >> I write this function regularly, so I'd be happy to have it >> available directly. > > Heh, same here -- soon as I saw it proposed on c.l.py I recognized an > old friend and it struck me that, simple but widely used, it should > be somewhere in the standard library. 
Why not make it collections.bag, like the following: class bag(dict): def __init__(self, iterable=None): dict.__init__(self) if iterable: self.update(iterable) def update(self, iterable): for item in iterable: self.add(item) def add(self, item): self[item] = self.get(item, 0) + 1 def remove(self, item): if self[item] == 1: del self[item] else: self[item] -= 1 def count(self, item): return self[item] (etc.) Georg From thomas at python.org Tue Apr 4 19:38:28 2006 From: thomas at python.org (Thomas Wouters) Date: Tue, 4 Apr 2006 19:38:28 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> Message-ID: <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> On 4/4/06, Jean-Paul Calderone <exarkun at divmod.com> wrote: > > > Of course anyone who is interested can run the Twisted test suite very > easily and take a look at the failures themselves (if you have Twisted > installed, "trial twisted" will do it). ... and can guess which errors/failures are specific to Python 2.5 (for instance, the lack of PyCrypto in my 2.5-alpha install generates a lot of failures.) My AMD64 machine was giving a _lot_ of errors on zip(xrange( sys.maxint), iterable), which I now fixed in trunk. I'm re-running the tests to find out which ones are left over, but I'm having a hard time figuring out what to look at in trial's rather verbose logfile. There seem to be quite a few tracebacks involving Exception subclasses, in any case. Perhaps changing Exception's type in 2.5 wasn't a good idea after all (but hey, that's what alphas are for ;) Oh, goodie, a segmentation fault. Let's see if I can reproduce it ;P -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060404/fc10075b/attachment.html From thomas at python.org Tue Apr 4 19:45:17 2006 From: thomas at python.org (Thomas Wouters) Date: Tue, 4 Apr 2006 19:45:17 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> Message-ID: <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> On 4/4/06, Thomas Wouters <thomas at python.org> wrote: > > > Oh, goodie, a segmentation fault. Let's see if I can reproduce it ;P > And so I could. test_banana.CananaTestCase.testCrashNegativeLong crashes, because it calls PyString_AsStringAndSize() with an int-ptr as second argument (an adjacent ptr variable becomes garbage.) That's certainly a problem with the Py_ssize_t change. Martin, aren't all output variables (or ptr-variables, rather) supposed to be controlled by the 'PY_SSIZE_T_CLEAN' #define? People aren't going to notice their compiler warnings; I know I didn't :) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060404/5aa8df81/attachment.htm From nas at arctrix.com Tue Apr 4 19:50:22 2006 From: nas at arctrix.com (Neil Schemenauer) Date: Tue, 4 Apr 2006 17:50:22 +0000 (UTC) Subject: [Python-Dev] reference leaks, __del__, and annotations References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> <4431EBF6.9000706@canterbury.ac.nz> Message-ID: <e0ubku$m0n$1@sea.gmane.org> Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > Okay, so would it be possible for a generator that > needs finalisation to set up a weakref callback, suitably > rooted somewhere so that the callback is reachable, > that references enough stuff to clean up after the > generator, without referencing the generator itself? I think so but it depends on what the finalizer needs to reference. If it references into the cycle then we are back to the same problem. I think you could also do the same thing with a sub-object that has the __del__ method. Maybe the generator would only create the sub-object if it needed to perform finalization. Neil From nnorwitz at gmail.com Tue Apr 4 19:51:55 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Apr 2006 09:51:55 -0800 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> Message-ID: <ee2a432c0604041051s7c3ebbbcga180060ef75dd6c7@mail.gmail.com> On 4/4/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/4/06, Thomas Wouters <thomas at python.org> wrote: > > > > > > Oh, goodie, a segmentation fault. Let's see if I can reproduce it ;P > > > And so I could. > test_banana.CananaTestCase.testCrashNegativeLong crashes, > because it calls PyString_AsStringAndSize() with an int-ptr as second > argument (an adjacent ptr variable becomes garbage.) That's certainly a > problem with the Py_ssize_t change. Martin, aren't all output variables (or > ptr-variables, rather) supposed to be controlled by the 'PY_SSIZE_T_CLEAN' > #define? People aren't going to notice their compiler warnings; I know I > didn't :) I don't think so. I think that macro controls Py_BuildValue etc (the APIs in modsupport.h) I don't think it controls anything else. So what that means is that Py_BuildValue("l") means something different depending on ssize_t or not. n From martin at v.loewis.de Tue Apr 4 20:28:06 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 20:28:06 +0200 Subject: [Python-Dev] Renaming sqlite3 In-Reply-To: <e0tau3$l02$1@sea.gmane.org> References: <4430C86F.8090507@v.loewis.de> <443125EB.7020403@python.net> <443198C4.6050301@v.loewis.de> <e0tau3$l02$1@sea.gmane.org> Message-ID: <4432BAB6.6070807@v.loewis.de> Thomas Heller wrote: > Since on Windows binary extensions have to be compiled for the exact > major version of Python, it seems logical (to me, at least), to include > that version number into the filename. Say, _socket25.pyd. 
This would be a major change; it might break the build process of many existing extensions. Also, I fail to see the point: since _socket.pyd for Python 2.5 is in a directory that is only on the Python 2.5 sys.path, it is unlikely that the files are ever confused for belonging to a different version. >> And why is that related to not supporting extensions with .DLL names >> anymore? > > IMO changing that would require a PEP (but I may be wrong), and this is only > another idea which should be considered when writing or discussing that. > If you don't like the idea, or don't see any advantages in this, I retract the > request. Ok - it's not that I dislike it. It would be technically feasible. It would be some effort to implement. However, I would expect that other people object more strongly. Regards, Martin From python at rcn.com Tue Apr 4 20:26:17 2006 From: python at rcn.com (Raymond Hettinger) Date: Tue, 4 Apr 2006 11:26:17 -0700 Subject: [Python-Dev] tally (and other accumulators) References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> Message-ID: <013301c65815$44682030$6c3c0a0a@RaymondLaptop1> [Alex] > This is quite general and simple at the same time: for example, it > was proposed originally to answer some complaint about any and all > giving no indication of the count of true/false items: > > tally(bool(x) for x in seq) > > would give a dict with two entries, counts of true and false items. FWIW, sum() works nicely for counting true entries: >>> sum(x%3==0 for x in range(100)) 34 Raymond From martin at v.loewis.de Tue Apr 4 20:37:55 2006 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 04 Apr 2006 20:37:55 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> Message-ID: <4432BD03.7020305@v.loewis.de> Thomas Wouters wrote: > And so I could. test_banana.CananaTestCase.testCrashNegativeLong > crashes, because it calls PyString_AsStringAndSize() with an int-ptr as > second argument (an adjacent ptr variable becomes garbage.) That's > certainly a problem with the Py_ssize_t change. Martin, aren't all > output variables (or ptr-variables, rather) supposed to be controlled by > the 'PY_SSIZE_T_CLEAN' #define? People aren't going to notice their > compiler warnings; I know I didn't :) No: this is the discussion I had with MAL. You have to watch for compiler warnings talking about incorrect pointer types. These days, you can apply Fredrik's checker to find out that you are using functions that output Py_ssize_t. Ignoring the warning might cause crashes on 64-bit machines. On 32-bit machines, there should be any negative consequence. Regards, Martin From martin at v.loewis.de Tue Apr 4 20:39:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 20:39:35 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <ee2a432c0604041051s7c3ebbbcga180060ef75dd6c7@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> <ee2a432c0604041051s7c3ebbbcga180060ef75dd6c7@mail.gmail.com> Message-ID: <4432BD67.20000@v.loewis.de> Neal Norwitz wrote: > I don't think so. 
I think that macro controls Py_BuildValue etc (the > APIs in modsupport.h) I don't think it controls anything else. So > what that means is that Py_BuildValue("l") means something different > depending on ssize_t or not. Actually, it *only* affects s#, z#, t#. "l" will always go along with a "long int" argument. Regards, Martin From brett at python.org Tue Apr 4 21:43:10 2006 From: brett at python.org (Brett Cannon) Date: Tue, 4 Apr 2006 12:43:10 -0700 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> Message-ID: <bbaeab100604041243r45b8acb9ka9ce19f57fcf8238@mail.gmail.com> On 4/4/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/4/06, Jean-Paul Calderone <exarkun at divmod.com> wrote: > > > > Of course anyone who is interested can run the Twisted test suite very > easily and take a look at the failures themselves (if you have Twisted > installed, "trial twisted" will do it). > > > ... and can guess which errors/failures are specific to Python 2.5 (for > instance, the lack of PyCrypto in my 2.5-alpha install generates a lot of > failures.) My AMD64 machine was giving a _lot_ of errors on > zip(xrange(sys.maxint), iterable), which I now fixed in trunk. I'm > re-running the tests to find out which ones are left over, but I'm having a > hard time figuring out what to look at in trial's rather verbose logfile. > There seem to be quite a few tracebacks involving Exception subclasses, in > any case. Perhaps changing Exception's type in 2.5 wasn't a good idea after > all (but hey, that's what alphas are for ;) No, the idea was a wonderful idea! =) Are the errors because of something in Python, or because Twisted has not been changed to handle the new semantics? I know so far test failures have come from people making overly strict assumptions about the output (e.g., I think it was Thomas or Greg who was getting failures because ``type(Exception)`` outputted a different string for doctest stuff). Is that the case here? -Brett > > Oh, goodie, a segmentation fault. Let's see if I can reproduce it ;P > > > -- > Thomas Wouters < thomas at python.org> > > Hi! I'm a .signature virus! copy me into your .signature file to help me > spread! > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > > > From jeremy at alum.mit.edu Tue Apr 4 21:45:16 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 4 Apr 2006 15:45:16 -0400 Subject: [Python-Dev] line numbers, pass statements, implicit returns In-Reply-To: <ca471dc20604012218n6c34b2fau7dd070a60481bc1a@mail.gmail.com> References: <e8bf7a530604010917gf0200aen98a26daa1257622e@mail.gmail.com> <ca471dc20604012218n6c34b2fau7dd070a60481bc1a@mail.gmail.com> Message-ID: <e8bf7a530604041245j6fef007bjf2253d5c814cb5d6@mail.gmail.com> On 4/2/06, Guido van Rossum <guido at python.org> wrote: > On 4/1/06, Jeremy Hylton <jeremy at alum.mit.edu> wrote: > > There are several test cases in test_trace that are commented out. We > > did this when we merged the ast-branch and promised to come back to > > them. I'm coming back to them now, but the test aren't documented > > well and the feature they test isn't specified well. 
> > > > The failing tests I've looked at so far involving pass statements and > > implicit "return None" statements generated by the compiler. The > > tests seem to verify that > > > > 1) if you have a function that ends with an if/else where the body > > of the else is pass, > > there is no line number associated with the return > > 2) if you have a function that ends with a try/except where the > > body of the except is pass, > > there is a line number associated with the return. > > > > Here's a failing example > > > > def ireturn_example(): > > a = 5 > > b = 5 > > if a == b: > > b = a+1 > > else: > > pass > > > > The code is traced and test_trace verifies that the return is > > associated with line 4! > > > > In these cases, the ast compiler will always associate a line number > > with the return. (Technically with the LOAD_CONST preceding the > > RETURN_VALUE.) This happens pretty much be accident. It always > > associates a line number with the first opcode generated after a new > > statement is visited. Since a Pass statement node has no opcode, the > > return generates the first opcode. > > > > Now I could add some special cases to the compiler to preserve the old > > behavior, the question is: Why bother? It's such an unlikely case (an > > else that has no effect). Does anyone depend on the current behavior > > for the ireturn_example()? It seems sensible to me to always generate > > a line number for the pass + return case, just so you see the control > > flow as you step through the debugger. > > Makes sense to me. I can't imagine what this was testing except > perhaps a corner case in the algorithm for generating the (insanely > complicated) linenumber mapping table. > > > The other case that has changed is that the new compiler does not > > generate code for "while 0:" I don't remember why <0.5 wink>. There > > are several test cases that verify line numbers for code using this > > kind of bogus construct. There are no lines anymore, so I would > > change the tests so that they don't expect the lines in question. But > > I have no idea what they are trying to test. Does anyone know? > > Not me. This is definitely not part of the language spec! :-) It needs to be part of some spec, I think. It's probably part of the implementation rather than the language, but we need some description of what the trace hooks are supposed to do. The unittests are some kind of spec, but not a great one (because they don't seem exhaustive and don't explain why they work the way they do). Should we write an informational PEP, then, that explains the conditions under which you get a line event in trace mode? And how much do we care about compatibility from release to release? Here's another case that came up while pursuing these tests. It looks like the old compiler generated lnotab entries with a 0 change in line number, merely so that the trace code would generate an event on that line. The problem case is a for loop, where there is a line number set for the SETUP_LOOP instruction. You'd like a line event for the last time through the for loop, when the iterator is consumed, and to get that it appears that the lnotab has an entry with a 0 increment in line number and a non-zero increment in bytecode offset. There's no comment in the lnotab code that mentions that as a possibility, so it came as a surprise to me :-). The question, then, is how many cases are there like this. It may only be possible to answer this by walking through the old compiler code and noting all the uncommented special cases. 
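[As an aside for readers following the lnotab discussion: co_lnotab is a string of (bytecode-offset increment, line-number increment) byte pairs, applied cumulatively starting from co_firstlineno. A rough decoder -- illustrative only; the real algorithm has extra corner cases such as increments larger than 255 -- might look like:

    def lnotab_pairs(code):
        # yield cumulative (bytecode offset, line number) pairs from co_lnotab
        addr, line = 0, code.co_firstlineno
        yield addr, line
        tab = code.co_lnotab
        for i in range(0, len(tab), 2):
            addr += ord(tab[i])        # bytecode offset delta
            line += ord(tab[i + 1])    # line delta -- may be 0, as noted above
            yield addr, line

An entry with a zero line increment but a non-zero offset increment is exactly the "trace this line again" marker described above.]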
Looks like a project for alpha 2. Jeremy From nnorwitz at gmail.com Tue Apr 4 21:56:29 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Apr 2006 11:56:29 -0800 Subject: [Python-Dev] chilling svn for 2.5a1 Message-ID: <ee2a432c0604041256o229d417bnfd2384d2bcbbff89@mail.gmail.com> The freeze is still a few hours away, but please refrain from modifications unless they are really, really important for the release. The tests are just starting to turn green again. I'm not as worried about doc changes. Andrew feel free to keep updating whatsnew. Cheers, n From foom at fuhm.net Tue Apr 4 21:58:38 2006 From: foom at fuhm.net (James Y Knight) Date: Tue, 4 Apr 2006 15:58:38 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> Message-ID: <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> On Apr 3, 2006, at 4:02 PM, Tim Peters wrote: > Using which compiler? This varies across boxes. Most obviously, on a > 64-bit box all these members are 8 bytes (note that ob_refcnt is > Py_ssize_t in 2.5, not int anymore), but even on some 32-bit boxes the > "long double" trick only forces 4-byte alignment. Hm, yes, my mistake. I see that on linux/x86, long double only takes 12 bytes and is 4-byte aligned. Even though the actual CPU really wants it 8-byte aligned, the ABI has not been changed to allow that. On OSX/ppc32, OSX/ppc64 and linux/x86-64, doubles are 16 bytes, and 8- byte aligned. So the struct uses 16 bytes or 32 bytes, and has the extra word of unused space. All right, then, you could use the top bit of the ob_refcnt field. There's no way to possibly have 2**32 objects on a 32bit system anyhow. James From thomas at python.org Tue Apr 4 22:01:26 2006 From: thomas at python.org (Thomas Wouters) Date: Tue, 4 Apr 2006 22:01:26 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <4432BD03.7020305@v.loewis.de> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> <4432BD03.7020305@v.loewis.de> Message-ID: <9e804ac0604041301i3498b10al7d2fcd15effbb588@mail.gmail.com> On 4/4/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > Thomas Wouters wrote: > > And so I could. test_banana.CananaTestCase.testCrashNegativeLong > > crashes, because it calls PyString_AsStringAndSize() with an int-ptr as > > second argument (an adjacent ptr variable becomes garbage.) That's > > certainly a problem with the Py_ssize_t change. Martin, aren't all > > output variables (or ptr-variables, rather) supposed to be controlled by > > the 'PY_SSIZE_T_CLEAN' #define? People aren't going to notice their > > compiler warnings; I know I didn't :) > > No: this is the discussion I had with MAL. You have to watch for > compiler warnings talking about incorrect pointer types. These > days, you can apply Fredrik's checker to find out that you are > using functions that output Py_ssize_t. 
> > Ignoring the warning might cause crashes on 64-bit machines. On > 32-bit machines, there should be any negative consequence. I assume you meant "shouldn't be any negative consequences" there ;) I thought this over during dinner (mmmm, curry) and I agree that we shouldn't make PY_SSIZE_T_CLEAN change output variables that the compiler can catch. Too much effort, and in the end it'll just cause extensions compiled for 64-bit python to not use 64-bit values 'silently' (although eventually something'll get truncated and people will get confused.) This at least gives a compile-time hint that, on 64-bit platforms, things aren't quite right. It's just too bad that it's an easily overlooked hint, and that you can't get that hint when compiling for 32-bit platforms. Extension writers with 64-bit hardware access and the desire to compile, let alone test, on said hardware is still rare. If we did make PY_SSIZE_T_CLEAN adjust all Py_ssize_t*-using functions (which would mean a *lot* of API duplication), we could add a #warning for every use of the old API calls, so everyone sees it, even on 32-bit platforms. The warning should contain a pointer to a document describing how to conveniently support both Python 2.5+ and 2.4-, though. Too much work for alpha1, in any case, and probably too much work period. The webpage should be made, though, if just to refer to in the release notes. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060404/80112663/attachment-0001.htm From aleaxit at gmail.com Tue Apr 4 22:12:51 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Tue, 4 Apr 2006 13:12:51 -0700 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <013301c65815$44682030$6c3c0a0a@RaymondLaptop1> References: <7C31DFD6-8D63-4158-B32A-D374E8C72723@gmail.com> <013301c65815$44682030$6c3c0a0a@RaymondLaptop1> Message-ID: <e8a0972d0604041312y5fa47844p4599810c6a4d6336@mail.gmail.com> On 4/4/06, Raymond Hettinger <python at rcn.com> wrote: > [Alex] > > This is quite general and simple at the same time: for example, it > > was proposed originally to answer some complaint about any and all > > giving no indication of the count of true/false items: > > > > tally(bool(x) for x in seq) > > > > would give a dict with two entries, counts of true and false items. > > FWIW, sum() works nicely for counting true entries: > > >>> sum(x%3==0 for x in range(100)) > 34 Sure, and also works fine for counting false ones, thanks to 'not', but if you need both counts sum doesn't work (not without dirty tricks that can't be recommended;-). Alex From fredrik at pythonware.com Tue Apr 4 22:22:15 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 4 Apr 2006 22:22:15 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm><9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com><9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com><4432BD03.7020305@v.loewis.de> <9e804ac0604041301i3498b10al7d2fcd15effbb588@mail.gmail.com> Message-ID: <e0ukhm$phr$1@sea.gmane.org> Thomas Wouters wrote: > The webpage should be made, though, if just to refer to in the > release notes. 
the web page does exist: http://www.python.org/dev/peps/pep-0353/#conversion-guidelines </F> From aahz at pythoncraft.com Tue Apr 4 22:34:38 2006 From: aahz at pythoncraft.com (Aahz) Date: Tue, 4 Apr 2006 13:34:38 -0700 Subject: [Python-Dev] outstanding items for 2.5 In-Reply-To: <ca471dc20604031242t1657b92ct195a325acd9fc8fc@mail.gmail.com> References: <ee2a432c0604030034l5b339821t58581e14a19c01f8@mail.gmail.com> <20060403193941.GA17036@panix.com> <ca471dc20604031242t1657b92ct195a325acd9fc8fc@mail.gmail.com> Message-ID: <20060404203438.GA16643@panix.com> On Mon, Apr 03, 2006, Guido van Rossum wrote: > > Done. What exactly do you plan to do apart from editing the docs to > steer people away from file()? For the initial checkin, the dirt-simple: def open(filename, *args, **kwargs): return file(filename, *args, **kwargs) At this point, the sole purpose is to kill open() as a simple alias for file() and turn it into a factory function that can be documented separately (and have its own help() info). Further extensions can be done later. Because this is file I/O, there is no reason to even think of performance improvements in the call. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From martin at v.loewis.de Tue Apr 4 22:36:01 2006 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 04 Apr 2006 22:36:01 +0200 Subject: [Python-Dev] Twisted and Python 2.5a0r43587 In-Reply-To: <9e804ac0604041301i3498b10al7d2fcd15effbb588@mail.gmail.com> References: <20060404041424.22481.49118229.divmod.quotient.9813@ohm> <9e804ac0604041038u3d59b661t33b3a18b8ed3ee69@mail.gmail.com> <9e804ac0604041045p4a7a925by4777508343563335@mail.gmail.com> <4432BD03.7020305@v.loewis.de> <9e804ac0604041301i3498b10al7d2fcd15effbb588@mail.gmail.com> Message-ID: <4432D8B1.5010906@v.loewis.de> Thomas Wouters wrote: > It's just too bad that it's an easily overlooked hint, and that you > can't get that hint when compiling for 32-bit platforms. That all depends on the compiler, of course. Both MSVC and the Intel compiler support a /Wp64 flag, which tells you about "questionable" conversions between types even if the types currently happen to be the same. Python's PC/pyconfig.h defines Py_ssize_t (indirectly) as typedef _W64 int ssize_t; where _W64 expands to __w64, which clues the compiler that ssize_t (and thus Py_ssize_t) would be 64 bits on Win64, and that truncation and pointer type conflicts can arise when this is mixed with int (resp. int*). This is the second part (you can get the hint on a 32-bit system if you use the right compiler). On the first issue, I wish it was possible to tell the compiler that this kind of pointer conversion is really an error. For gcc, this is possible to do: -pedantic-errors changes all warnings that are considered "pedantic" to errors. A pedantic warning is a diagnostic that ISO C requires from a conforming implementation. The specific problem (converting type pointers in parameter passing without explicit cast) is such a required diagnostic, and thus causes compilation to abort. Unfortunately, this option is *really* pedantic. It first complains about "long long", which can be overridden with -std=gnu99; it then aborts at Objects/methodobject.c: In function 'meth_hash': Objects/methodobject.c:232: error: ISO C forbids conversion of function pointer to object pointer type This kind of conversion is frequent in Python (e.g. 
in typeobject.c), so for Python itself, this is no easy option. To fix that, Python would have to replace void* with, say, void(*)(void) in all places where it passes function pointers around. > Extension > writers with 64-bit hardware access and the desire to compile, let alone > test, on said hardware is still rare. If we did make PY_SSIZE_T_CLEAN > adjust all Py_ssize_t*-using functions (which would mean a *lot* of API > duplication), we could add a #warning for every use of the old API > calls, so everyone sees it, even on 32-bit platforms. How would you do that? You could trigger a linker warning on platforms that support that, but that would be different from #warning - I doubt you can cause a compiler warning if a certain function is called. Regards, Martin From martin at v.loewis.de Tue Apr 4 22:46:02 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 04 Apr 2006 22:46:02 +0200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> Message-ID: <4432DB0A.5010903@v.loewis.de> James Y Knight wrote: > All right, then, you could use the top bit of the ob_refcnt field. > There's no way to possibly have 2**32 objects on a 32bit system anyhow. Of this kind, we have several spare bits: there are atleast two bits in the ob_type field, since the type pointer is atleast 4-aligned on all platforms we support. OTOH, using these bits would break existing code. Regards, Martin From nnorwitz at gmail.com Tue Apr 4 23:31:22 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 4 Apr 2006 13:31:22 -0800 Subject: [Python-Dev] current 2.5 status Message-ID: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> Debian (ia64, ppc) is broken for several reasons, including the following test failures: test_ctypes test_curses test_sqlite. Some of the reasons vary depending on architecture. Ubuntu on hppa seems to have problems with at least: test_wait[34] seems to crash. cygwin has major problems (ie, it crashes) with test_bsddb3. This may actually be exposing a bug in bsddb. None of these are show stoppers IMO. Coverity didn't run last night. The only outstanding issues were due to sqlite. Many, perhaps all, of these issues have been addressed. The following systems are green: Sparc Solaris 10 amd64 & x86 gentoo PPC OS X.4 Alpha Tru64 5.1 x86 Windows 2k & XP (though there were some problems due to previous runs) x86 OpenBSD n From tim.peters at gmail.com Tue Apr 4 23:42:16 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 4 Apr 2006 17:42:16 -0400 Subject: [Python-Dev] current 2.5 status In-Reply-To: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> References: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> Message-ID: <1f7befae0604041442m770d1c35q440dc266995483dc@mail.gmail.com> [Neal Norwitz] > ... > The following systems are green: > ... 
> x86 Windows 2k & XP (though there were some problems due to previous runs) There are no problems on one of the XP slaves. test_sqlite never ran on Win2K, although it's unlikely it will fail there (given that test_sqlite has run successfully on WinXP). For all platforms, note that the buildbot currently doesn't test release builds, or -O, at all. From tim.peters at gmail.com Tue Apr 4 23:48:30 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 4 Apr 2006 17:48:30 -0400 Subject: [Python-Dev] current 2.5 status In-Reply-To: <1f7befae0604041442m770d1c35q440dc266995483dc@mail.gmail.com> References: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> <1f7befae0604041442m770d1c35q440dc266995483dc@mail.gmail.com> Message-ID: <1f7befae0604041448g2e6b2c72rdfc17becacf8913a@mail.gmail.com> Trent, FYI, on my box the invariable cause for LINK : fatal error LNK1104: cannot open file './python25_d.dll' in a failed buildbot Windows compile step is that some previous run left behind a python_d.exe process that won't go away. I don't know why it won't go away on its own, or how it gets into that state, but I do know that once it happens it never fixes itself. You have to find the python_d process(es) and kill it yourself. Then the buildbot dance works fine again. From trentm at ActiveState.com Wed Apr 5 00:04:47 2006 From: trentm at ActiveState.com (Trent Mick) Date: Tue, 4 Apr 2006 15:04:47 -0700 Subject: [Python-Dev] current 2.5 status In-Reply-To: <1f7befae0604041448g2e6b2c72rdfc17becacf8913a@mail.gmail.com> References: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> <1f7befae0604041442m770d1c35q440dc266995483dc@mail.gmail.com> <1f7befae0604041448g2e6b2c72rdfc17becacf8913a@mail.gmail.com> Message-ID: <20060404220447.GD11723@activestate.com> [Tim Peters wrote] > Trent, FYI, on my box the invariable cause for > > LINK : fatal error LNK1104: cannot open file './python25_d.dll' > > in a failed buildbot Windows compile step is that some previous run > left behind a python_d.exe process that won't go away. I don't know > why it won't go away on its own, or how it gets into that state, but I > do know that once it happens it never fixes itself. You have to find > the python_d process(es) and kill it yourself. Then the buildbot > dance works fine again. In this case it was because python_d.exe crashed and Windows then popped up a system modal dialog to tell me: hanging the process. I wonder if it would be possible to write a "reaper" script that used some combination of EnumWindows, SendKeys, the Performance Monitoring APIs (Pdh* function) and some elbow grease to click these crash dialogs away. Trent -- Trent Mick TrentM at ActiveState.com From martin at v.loewis.de Wed Apr 5 00:10:09 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 05 Apr 2006 00:10:09 +0200 Subject: [Python-Dev] current 2.5 status In-Reply-To: <20060404220447.GD11723@activestate.com> References: <ee2a432c0604041431j69d44845g662f9772eaf9f274@mail.gmail.com> <1f7befae0604041442m770d1c35q440dc266995483dc@mail.gmail.com> <1f7befae0604041448g2e6b2c72rdfc17becacf8913a@mail.gmail.com> <20060404220447.GD11723@activestate.com> Message-ID: <4432EEC1.3060500@v.loewis.de> Trent Mick wrote: > I wonder if it would be possible to write a "reaper" script that used > some combination of EnumWindows, SendKeys, the Performance Monitoring > APIs (Pdh* function) and some elbow grease to click these crash dialogs > away. 
The buildbot itself invokes a "reaper" procedure if the current build step doesn't produce any output for some period of time (I think 2000s). This would be the place where actions should be added, IMO. For clicking crash dialogs, I wonder whether running the processes in their own window station (CreateWindowStation) would help: I would hope that the dialogs get forcefully closed when the window station gets closed. Regards, Martin From tdelaney at avaya.com Wed Apr 5 00:41:33 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Wed, 5 Apr 2006 08:41:33 +1000 Subject: [Python-Dev] current 2.5 status Message-ID: <2773CAC687FD5F4689F526998C7E4E5F074367@au3010avexu1.global.avaya.com> Trent Mick wrote: > I wonder if it would be possible to write a "reaper" script that used > some combination of EnumWindows, SendKeys, the Performance Monitoring > APIs (Pdh* function) and some elbow grease to click these crash > dialogs away. I've found for this type of thing AutoIt <http://www.autoitscript.com/autoit3/> works well. An example that might be applicable: I've got an HTML bandwidth monitor I wrote that retrieves data from my router and my ISP using XMLHTTP. Sometimes it takes a long time to come back and the script pops up a dialog with "A script is making IE run slow" (or something to that effect - I forget the exact error message). I've got an AutoIt script which waits until that window appears, closes it, closes the bandwidth monitor, starts it back up, then goes back to waiting. The script is about 10 lines long. AutoIt scripts can also be saved (and redistributed) as a standalone app (approx 120kB in size). So part of setting up a Windows buildbot slave could be to add such a script to run at startup. However, if there's some way of doing it without installing extra stuff on the slave, that would be preferable. Tim Delaney From greg.ewing at canterbury.ac.nz Wed Apr 5 00:50:52 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 10:50:52 +1200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <44322DF7.1040603@livinglogic.de> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> <44312B58.3090900@gmail.com> <44313E5E.6040003@livinglogic.de> <4431D08F.8070603@canterbury.ac.nz> <44322DF7.1040603@livinglogic.de> Message-ID: <4432F84C.5080404@canterbury.ac.nz> Walter Dörwald wrote: > Greg Ewing wrote: > >> Wouldn't it be better for the setter to raise an exception >> if it's out of range? It probably indicates a bug in the >> caller's code. > > The day before Monday is -1, so it adds a little convenience. In that case, why doesn't the global function allow the same convenience? > I think we've spent more time discussing the calendar module, than the > Python community has spent using it! ;) It makes a nice change from adaptation and generic functions. 
:-) -- Greg From greg.ewing at canterbury.ac.nz Wed Apr 5 01:18:25 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 11:18:25 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0604040901v34f52664w118f6e506a9dd979@mail.gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net> <1f7befae0604040901v34f52664w118f6e506a9dd979@mail.gmail.com> Message-ID: <4432FEC1.3020800@canterbury.ac.nz> Tim Peters wrote: > A problem is that the variant semantics also seem pretty arbitrary ;-), > and there's a dearth of compelling use cases to guide a decision. At the time I think I suggested that it would be reasonable if weakref callbacks were guaranteed to be called as long as they weren't trash themselves and they didn't reference any trash. From what you said in a recent message, it sounds like that's the way it currently works. These semantics would be sufficient to be able to use weakrefs to register finalisers, I think. You keep a global list of weakrefs with callbacks that call the finalizer and then remove the weakref from the global list. I'll put together a prototype some time and see if it works. I actually have a use case at the moment -- a couple of types that need to release external resources, and since they're part of a library I'm distributing, I can't be sure people won't put them in cycles. -- Greg From greg.ewing at canterbury.ac.nz Wed Apr 5 01:29:07 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 11:29:07 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <e0ubku$m0n$1@sea.gmane.org> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <fb6fbf560603301716x13c4cda7x7fd5e462850b5a03@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> <4431EBF6.9000706@canterbury.ac.nz> <e0ubku$m0n$1@sea.gmane.org> Message-ID: <44330143.4030401@canterbury.ac.nz> Neil Schemenauer wrote: > I think so but it depends on what the finalizer needs to reference. That's really what I'm asking -- what *does* a generator finaliser need to reference? Or does it depend on what the generator's code does? 
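[The weakref-based registration described a couple of messages up -- a global collection of weakrefs whose callbacks run the finalizer and then drop the weakref -- might look roughly like this; the names are made up for illustration and this is a sketch, not the promised prototype:

    import weakref

    _finalizer_refs = set()   # keeps the weakrefs (and so their callbacks) alive

    def register_finalizer(obj, finalize, *args):
        # 'finalize' and 'args' must not reference 'obj' itself, or anything
        # else in obj's cycle, or the original problem comes straight back.
        def callback(ref):
            _finalizer_refs.discard(ref)
            finalize(*args)
        _finalizer_refs.add(weakref.ref(obj, callback))

Whether this helps for a generator depends on the question just asked: if the callback (or the state handed to it) has to reference something inside the cycle, we are back to the same problem.]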
-- Greg From greg.ewing at canterbury.ac.nz Wed Apr 5 02:11:14 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 12:11:14 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> Message-ID: <44330B22.60402@canterbury.ac.nz> James Y Knight wrote: > All right, then, you could use the top bit of the ob_refcnt field. > There's no way to possibly have 2**32 objects on a 32bit system anyhow. That would slow down every Py_INCREF and Py_DECREF, which would need to be careful to exclude the top bit from their operations. -- Greg From greg.ewing at canterbury.ac.nz Wed Apr 5 02:17:59 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 12:17:59 +1200 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <4432DB0A.5010903@v.loewis.de> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442D2E47.8070809@gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> <4432DB0A.5010903@v.loewis.de> Message-ID: <44330CB7.5010208@canterbury.ac.nz> Martin v. L?wis wrote: > Of this kind, we have several spare bits: there are atleast two bits > in the ob_type field, And that would require changing all code that dereferenced the ob_type field! This would be much worse -- at least Py_INCREF and Py_DECREF are macros... -- Greg From crutcher at gmail.com Wed Apr 5 02:27:03 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Tue, 4 Apr 2006 17:27:03 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? Message-ID: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> While nocking together a framework today, I ran into the amazing limitations of issubclass(). A) issubclass() throws a TypeError if the object being checked is not a class, which seems very strange. It is a predicate, and lacking a isclass() method, it should just return False. B) issubclass() won't work on a list of classs, the way isinstance() does. Is there any support for patches which fix A and/or B? -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From greg.ewing at canterbury.ac.nz Wed Apr 5 02:46:53 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 05 Apr 2006 12:46:53 +1200 Subject: [Python-Dev] Should issubclass() be more like isinstance()? 
In-Reply-To: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> Message-ID: <4433137D.5040901@canterbury.ac.nz> Crutcher Dunnavant wrote: > A) issubclass() throws a TypeError if the object being checked is not > a class, which seems very strange. If I ever pass a non-class to issubclass() it's almost certainly a bug in my code, and I'd want to know about it. On the rare occasions when I don't want this, I'm happy to write isinstance(c, type) and issubclass(c, d) > B) issubclass() won't work on a list of classs, > the way isinstance() does. That sounds more reasonable. I can't think of any reason why it shouldn't work. -- Greg From crutcher at gmail.com Wed Apr 5 02:55:32 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Tue, 4 Apr 2006 17:55:32 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <4433137D.5040901@canterbury.ac.nz> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> Message-ID: <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> On 4/4/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > Crutcher Dunnavant wrote: > > > A) issubclass() throws a TypeError if the object being checked is not > > a class, which seems very strange. > > If I ever pass a non-class to issubclass() it's almost > certainly a bug in my code, and I'd want to know about > it. Perhaps, but is it worth distorting a predicate? > On the rare occasions when I don't want this, I'm > happy to write > > isinstance(c, type) and issubclass(c, d) This doesn't work, did you mean? isinstance(c, types.ClassType) and issubclass(c, d) > > B) issubclass() won't work on a list of classs, > > the way isinstance() does. > > That sounds more reasonable. I can't think of any > reason why it shouldn't work. > > -- > Greg > -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From tim.peters at gmail.com Wed Apr 5 04:32:48 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 4 Apr 2006 22:32:48 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <4432FEC1.3020800@canterbury.ac.nz> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net> <1f7befae0604040901v34f52664w118f6e506a9dd979@mail.gmail.com> <4432FEC1.3020800@canterbury.ac.nz> Message-ID: <1f7befae0604041932v3af905ack84752bdbcb79ed27@mail.gmail.com> [Tim Peters] >> A problem is that the variant semantics also seem pretty arbitrary ;-), >> and there's a dearth of compelling use cases to guide a decision. [Greg Ewing] |> At the time I think I suggested that it would be > reasonable if weakref callbacks were guaranteed to > be called as long as they weren't trash themselves > and they didn't reference any trash. From what you > said in a recent message, it sounds like that's > the way it currently works. Nope, and that was the point of the first message in the thread I referenced. The issues were explained well in that thread, so I don't want to repeat it all here again. > These semantics would be sufficient to be able to > use weakrefs to register finalisers, I think. 
You > keep a global list of weakrefs with callbacks that > call the finalizer and then remove the weakref > from the global list. > > I'll put together a prototype some time and see if > it works. I actually have a use case at the moment -- > a couple of types that need to release external > resources, and since they're part of a library > I'm distributing, I can't be sure people won't put > them in cycles. Note that it's very easy to do this with __del__. The trick is for your type not to have a __del__ method itself, but to point to a simple "cleanup object" with a __del__ method. Give that "contained" object references to the resources you want to close, and you're done. Because your "real" objects don't have __del__ methods, cycles involving them can't inhibit gc. The cleanup object's only purpose in life is to close resources. Like: class _RealTypeResourceCleaner: def __init__(self, *resources): self.resources = resources def __del__(self): if self.resources is not None: for r in self.resources: r.close() self.resources = None # and typically no other methods are needed, or desirable, in # this helper class class RealType: def __init__(*args): ... # and then, e.g., self.cleaner = _ResourceCleaner(resource1, resource2) ... tons of stuff, but no __del__ .... That's the simple, general way to mix cycles and __del__ without problems. From anthony at interlink.com.au Wed Apr 5 04:34:53 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 5 Apr 2006 12:34:53 +1000 Subject: [Python-Dev] Reminder: TRUNK FREEZE. 2.5a1, 00:00 UTC, Wednesday 5th of April. Message-ID: <200604051234.55884.anthony@interlink.com.au> Just a reminder - the trunk is currently frozen for 2.5a1. Please don't check anything into it until the release is done. I'll send an update when this is good. -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. -------------- next part -------------- An embedded message was scrubbed... From: Anthony Baxter <anthony at interlink.com.au> Subject: [Python-Dev] TRUNK FREEZE. 2.5a1, 00:00 UTC, Wednesday 5th of April. Date: Mon, 3 Apr 2006 12:47:25 +1100 Size: 3658 Url: http://mail.python.org/pipermail/python-dev/attachments/20060405/c247afdd/attachment-0001.mht From tim.peters at gmail.com Wed Apr 5 04:40:17 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 4 Apr 2006 22:40:17 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <44330143.4030401@canterbury.ac.nz> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <n2m-g.Xns97975DB5C8B83duncanrcpcouk@127.0.0.1> <442D2E47.8070809@gmail.com> <5.1.1.6.0.20060331115351.039bc6b8@mail.telecommunity.com> <1f7befae0603310914q644aafb4t2ffc4efaa0718415@mail.gmail.com> <7.0.1.0.0.20060403135726.021448a8@telecommunity.com> <1f7befae0604031905j7994da1eraacd25aa05dcc09d@mail.gmail.com> <4431EBF6.9000706@canterbury.ac.nz> <e0ubku$m0n$1@sea.gmane.org> <44330143.4030401@canterbury.ac.nz> Message-ID: <1f7befae0604041940r7b5aa5eaj87adf57a3fe448b1@mail.gmail.com> [Greg Ewing] > That's really what I'm asking -- what *does* a > generator finaliser need to reference? Or does > it depend on what the generator's code does? A generator finalizer can execute arbitrary Python code. Note that even if that was limited (which it isn't -- there are no limitations) to executing "pass", it wouldn't be exploitable, because the GIL can be released during any Python code, allowing other threads to run anything before the finalizer returns. 
From steve at holdenweb.com Wed Apr 5 04:50:35 2006 From: steve at holdenweb.com (Steve Holden) Date: Tue, 04 Apr 2006 22:50:35 -0400 Subject: [Python-Dev] The "Need for Speed" Sprint, Reykjavik, Iceland, May 21-28, 2006 Message-ID: <e0vb9i$of9$1@sea.gmane.org> EWT LLC and CCP h.f., having specific commercial interests in doing so, have decided to sponsor a sprint on the Python programming language with specific short- and medium-term acceleration goals. The sprint is also intended to stimulate broad consideration of the future of Python, and specifically to try and identify the most promising routes to improved performance. Selected members of the Python developer community will receive direct invitations, and their attendance will be sponsored, to try to ensure the most effective possible result. In the spirit of open source events the sprint is open to all. The venue is the Grand Hotel, Reykjavik, http://www.grand.is/. The following objectives are of interest to the sponsors. * Improving the decimal module by implementing portions in C * Investigate whether RPython offers sufficient speedup over the regular CPython interpreter to replace tailored C and C++ code in MMP gaming applications * Implement an ordered dictionary in both C and Python * Implement data-structure-specific algorithms, which rely heavily on certain data structures, as RPython * Adding an iterator interface (similar to re.finditer) to the struct module * Further refine the PyPy LLVM back end to improve general execution speeds * Offer the PyPy team a sprint venue to continue their development work on Python implementations written in Python * Create a string subtype that provides lazy slicing without copying Sprinters are, as always, free to choose the aspects of Python they choose to work on. Flights should be booked to and from Keflavik airport (KEF). There will be a mid-week 'let your hair down' social event to which all sprint participants are cordially invited. In order to ensure that sufficient meeting space is available, please notify Steve Holden of your intention to attend by emailing steve at holdenweb.com. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC/Ltd www.holdenweb.com Love me, love my blog holdenweb.blogspot.com From tim.peters at gmail.com Wed Apr 5 04:52:19 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 4 Apr 2006 22:52:19 -0400 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <44330B22.60402@canterbury.ac.nz> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <ca471dc20604031152x3095d278laf15e57ef9d07bf8@mail.gmail.com> <e0rs3f$tb4$1@sea.gmane.org> <B79D0A6D-2273-4BBA-81EB-373D9CC26453@fuhm.net> <1f7befae0604031302n6c3f543ax26971fb8aaaa4e39@mail.gmail.com> <DD7D5826-8388-4230-A69B-F8EA729D75B8@fuhm.net> <44330B22.60402@canterbury.ac.nz> Message-ID: <1f7befae0604041952o176502c0y5ac3736797210007@mail.gmail.com> [various people debating how to steal a bit from an existing PyObject member] Let's stop this. It's a "bike shed" argument: even if we had 1000 spare bits, it wouldn't do any good. Having a bit to record whether an object has been finalized doesn't solve any problem that's been raised recently (it doesn't cure any problems with resurrection, for example). 
Even if we wanted it just to support a slow, dubious way to collect trash cycles containing objects with __del__ methods, gcmodule.c would need major rewriting to make that happen. If someone is interested enough to do that, fine, but it sure ain't me ;-) From guido at python.org Wed Apr 5 05:01:35 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 4 Apr 2006 20:01:35 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> Message-ID: <ca471dc20604042001m474d4ad0gad945a1760a4eab2@mail.gmail.com> On 4/4/06, Crutcher Dunnavant <crutcher at gmail.com> wrote: > On 4/4/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > > Crutcher Dunnavant wrote: > > > > > A) issubclass() throws a TypeError if the object being checked is not > > > a class, which seems very strange. > > > > If I ever pass a non-class to issubclass() it's almost > > certainly a bug in my code, and I'd want to know about > > it. > > Perhaps, but is it worth distorting a predicate? Certainly. In other languages this would be a compile-time error. There's no rule that predicate cannot raise an exception. If you're not sure whether something is a class or not, you should first sniff it out for its class-ness before checking whether it's a subclass of something. I recommend hasattr(x, "__bases__") which is more likely to recognize classes than isinstance(x, type) -- the latter only works for standard new-style classes. > > On the rare occasions when I don't want this, I'm > > happy to write > > > > isinstance(c, type) and issubclass(c, d) > > This doesn't work, did you mean? > isinstance(c, types.ClassType) and issubclass(c, d) > > > > > B) issubclass() won't work on a list of classs, > > > the way isinstance() does. > > > > That sounds more reasonable. I can't think of any > > reason why it shouldn't work. Agreed. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From crutcher at gmail.com Wed Apr 5 05:18:39 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Tue, 4 Apr 2006 20:18:39 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <ca471dc20604042001m474d4ad0gad945a1760a4eab2@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> <ca471dc20604042001m474d4ad0gad945a1760a4eab2@mail.gmail.com> Message-ID: <d49fe110604042018u39ef286cu5967606e4f231ce6@mail.gmail.com> On 4/4/06, Guido van Rossum <guido at python.org> wrote: > On 4/4/06, Crutcher Dunnavant <crutcher at gmail.com> wrote: > > On 4/4/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > > > Crutcher Dunnavant wrote: > > > > > > > A) issubclass() throws a TypeError if the object being checked is not > > > > a class, which seems very strange. > > > > > > If I ever pass a non-class to issubclass() it's almost > > > certainly a bug in my code, and I'd want to know about > > > it. > > > > Perhaps, but is it worth distorting a predicate? > > Certainly. In other languages this would be a compile-time error. In other, statically typed languges. Someone hands me X, I want to know if X is a subclass of Y. Yes or No. If X is not a class, then it is not a subclass of Y, hence issubclass(X, Y) should return False. 
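[A tolerant wrapper along the lines argued for here, using the hasattr(x, "__bases__") sniffing Guido recommends above, is only a couple of lines; treat it as a sketch, not a proposal for the builtin:

    def is_subclass(x, y):
        # returns False for non-classes instead of raising TypeError;
        # __bases__ sniffing is a heuristic, not a guarantee
        return hasattr(x, '__bases__') and issubclass(x, y)

    subclasses = [x for x in candidates if is_subclass(x, int)]

Here 'candidates' stands in for whatever mixed sequence is being filtered.]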
> There's no rule that predicate cannot raise an exception. No, but it makes many applications (such as using it as a test in list comprehensions) difficult enough to not be worth it. > If you're not sure whether something is a class or not, you should > first sniff it out for its class-ness before checking whether it's a > subclass of something. I recommend hasattr(x, "__bases__") which is > more likely to recognize classes than isinstance(x, type) -- the > latter only works for standard new-style classes. > > > > On the rare occasions when I don't want this, I'm > > > happy to write > > > > > > isinstance(c, type) and issubclass(c, d) > > > > This doesn't work, did you mean? > > isinstance(c, types.ClassType) and issubclass(c, d) > > > > > > > > B) issubclass() won't work on a list of classs, > > > > the way isinstance() does. > > > > > > That sounds more reasonable. I can't think of any > > > reason why it shouldn't work. > > Agreed. > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From aleaxit at gmail.com Wed Apr 5 05:41:32 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Tue, 4 Apr 2006 20:41:32 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <d49fe110604042018u39ef286cu5967606e4f231ce6@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> <ca471dc20604042001m474d4ad0gad945a1760a4eab2@mail.gmail.com> <d49fe110604042018u39ef286cu5967606e4f231ce6@mail.gmail.com> Message-ID: <10337509-A6D1-473B-8DE2-18249CBC77B8@gmail.com> On Apr 4, 2006, at 8:18 PM, Crutcher Dunnavant wrote: ... >> There's no rule that predicate cannot raise an exception. > > No, but it makes many applications (such as using it as a test in list > comprehensions) difficult enough to not be worth it. IMHO, the solution to THAT very real problem is to supply a built-in function that catches some exceptions and maps them into suitable return values. Simplest would be something like: def catch(excepts, default, f, *a, **k): try: return f(*a, **k) except excepts: return default and then, the LC you're after is easy: subints = [x for x in y if catch(TypeError, False, issubclass, x, int)] though I'm sure we can get better syntax if we turn 'catch' into some kind of syntactic special form. My point is that there are umpteen predicates one can write which would have to be distorted to ensure they can't raise -- better to let them raise if they must, and allow catching the expected exception(s), somewhat like this example. Alex From foom at fuhm.net Wed Apr 5 06:59:41 2006 From: foom at fuhm.net (James Y Knight) Date: Wed, 5 Apr 2006 00:59:41 -0400 Subject: [Python-Dev] Should issubclass() be more like isinstance()? 
In-Reply-To: <10337509-A6D1-473B-8DE2-18249CBC77B8@gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> <ca471dc20604042001m474d4ad0gad945a1760a4eab2@mail.gmail.com> <d49fe110604042018u39ef286cu5967606e4f231ce6@mail.gmail.com> <10337509-A6D1-473B-8DE2-18249CBC77B8@gmail.com> Message-ID: <FC183F10-DB09-4C97-8950-A0D2656A83E4@fuhm.net> On Apr 4, 2006, at 11:41 PM, Alex Martelli wrote: > IMHO, the solution to THAT very real problem is to supply a built-in > function that catches some exceptions and maps them into suitable > return values. Simplest would be something like: [...] > though I'm sure we can get better syntax if we turn 'catch' into some > kind of syntactic special form. My point is that there are umpteen > predicates one can write which would have to be distorted to ensure > they can't raise -- better to let them raise if they must, and allow > catching the expected exception(s), somewhat like this example. And while you're at it, how about some syntax to use try/finally and while inside expressions too. That will complete the duplication of flow control syntax between statement and expression form. James From fdrake at acm.org Wed Apr 5 07:26:08 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 5 Apr 2006 01:26:08 -0400 Subject: [Python-Dev] Reminder: TRUNK FREEZE. 2.5a1, 00:00 UTC, Wednesday 5th of April. In-Reply-To: <200604051234.55884.anthony@interlink.com.au> References: <200604051234.55884.anthony@interlink.com.au> Message-ID: <200604050126.08686.fdrake@acm.org> On Tuesday 04 April 2006 22:34, Anthony Baxter wrote: > Just a reminder - the trunk is currently frozen for 2.5a1. Please > don't check anything into it until the release is done. I'll send an > update when this is good. Documentation packages have been uploaded to the download area. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From jaustin at post.harvard.edu Wed Apr 5 07:53:38 2006 From: jaustin at post.harvard.edu (Jess Austin) Date: Tue, 4 Apr 2006 22:53:38 -0700 Subject: [Python-Dev] tally (and other accumulators) Message-ID: <28119.1144216418@post.harvard.edu> Alex wrote: > import collections > def tally(seq): > d = collections.defaultdict(int) > for item in seq: > d[item] += 1 > return dict(d) I'll stop lurking and submit the following: def tally(seq): return dict((group[0], len(tuple(group[1]))) for group in itertools.groupby(sorted(seq))) In general itertools.groupby() seems like a very clean way to do this sort of thing, whether you want to end up with a dict or not. I'll go so far as to suggest that the existence of groupby() obviates the proposed tally(). Maybe I've just coded too much SQL and it has warped my brain... OTOH the latter definition of tally won't cope with iterables, and it seems like O(nlogn) rather than O(n). 
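[For concreteness, an illustrative session comparing the two definitions in this thread (Alex's defaultdict version quoted above and the groupby version just proposed, lightly adapted):

    >>> import collections, itertools
    >>> def tally_dd(seq):
    ...     d = collections.defaultdict(int)
    ...     for item in seq:
    ...         d[item] += 1
    ...     return dict(d)
    ...
    >>> def tally_gb(seq):
    ...     return dict((k, len(list(g)))
    ...                 for k, g in itertools.groupby(sorted(seq)))
    ...
    >>> sorted(tally_dd('abracadabra').items())
    [('a', 5), ('b', 2), ('c', 1), ('d', 1), ('r', 2)]
    >>> tally_dd('abracadabra') == tally_gb('abracadabra')
    True

collections.defaultdict is new in 2.5, so the first version assumes a current build.]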
cheers, Jess From aleaxit at gmail.com Wed Apr 5 08:08:05 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Tue, 4 Apr 2006 23:08:05 -0700 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <28119.1144216418@post.harvard.edu> References: <28119.1144216418@post.harvard.edu> Message-ID: <CA8882ED-9B5C-4ADF-9C15-B3082096FF27@gmail.com> On Apr 4, 2006, at 10:53 PM, Jess Austin wrote: > Alex wrote: >> import collections >> def tally(seq): >> d = collections.defaultdict(int) >> for item in seq: >> d[item] += 1 >> return dict(d) > > I'll stop lurking and submit the following: > > def tally(seq): > return dict((group[0], len(tuple(group[1]))) > for group in itertools.groupby(sorted(seq))) > > In general itertools.groupby() seems like a very clean way to do this > sort of thing, whether you want to end up with a dict or not. I'll go > so far as to suggest that the existence of groupby() obviates the > proposed tally(). Maybe I've just coded too much SQL and it has > warped > my brain... > > OTOH the latter definition of tally won't cope with iterables, and it > seems like O(nlogn) rather than O(n). It will cope with any iterable just fine (not sure why you think otherwise), but the major big-O impact seems to me to rule it out as a general solution. Alex From jaustin at post.harvard.edu Wed Apr 5 08:47:07 2006 From: jaustin at post.harvard.edu (Jess Austin) Date: Tue, 4 Apr 2006 23:47:07 -0700 Subject: [Python-Dev] tally (and other accumulators) Message-ID: <29419.1144219627@post.harvard.edu> Alex wrote: > On Apr 4, 2006, at 10:53 PM, Jess Austin wrote: > > Alex wrote: > >> import collections > >> def tally(seq): > >> d = collections.defaultdict(int) > >> for item in seq: > >> d[item] += 1 > >> return dict(d) > > > > I'll stop lurking and submit the following: > > > > def tally(seq): > > return dict((group[0], len(tuple(group[1]))) > > for group in itertools.groupby(sorted(seq))) > > > > In general itertools.groupby() seems like a very clean way to do this > > sort of thing, whether you want to end up with a dict or not. I'll go > > so far as to suggest that the existence of groupby() obviates the > > proposed tally(). Maybe I've just coded too much SQL and it has > > warped my brain... > > > > OTOH the latter definition of tally won't cope with iterables, and it > > seems like O(nlogn) rather than O(n). > > It will cope with any iterable just fine (not sure why you think > otherwise), but the major big-O impact seems to me to rule it out as > a general solution. You're right in that it won't raise an exception on an iterator, but the sorted() means that it's much less memory efficient than your version for iterators. Another reason to avoid sorted() for this application, besides the big-O. Anyway, I still like groupby() for this sort of thing, with the aforementioned caveats. Functional code seems a little clearer to me, although I realize that preference is not held universally. cheers, Jess From g.brandl at gmx.net Wed Apr 5 08:49:59 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 05 Apr 2006 08:49:59 +0200 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> Message-ID: <e0vpan$lq$1@sea.gmane.org> Crutcher Dunnavant wrote: > While nocking together a framework today, I ran into the amazing > limitations of issubclass(). 
> > A) issubclass() throws a TypeError if the object being checked is not > a class, which seems very strange. It is a predicate, and lacking a > isclass() method, it should just return False. > B) issubclass() won't work on a list of classs, the way isinstance() does. > > Is there any support for patches which fix A and/or B? I don't think B is broken: Python 2.4.2 [...] >>> issubclass(int, 0) Traceback (most recent call last): File "<stdin>", line 1, in ? TypeError: issubclass() arg 2 must be a class or tuple of classes >>> issubclass(int, (int, str)) True >>> issubclass(str, (int, str)) True >>> issubclass(dict, (int, str)) False >>> (both isinstance() and issubclass() don't work on a _list_ as second argument) Georg From walter at livinglogic.de Wed Apr 5 10:09:24 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Wed, 05 Apr 2006 10:09:24 +0200 Subject: [Python-Dev] [Python-checkins] r43545 - in python/trunk: Doc/lib/libcalendar.tex Lib/calendar.py In-Reply-To: <4432F84C.5080404@canterbury.ac.nz> References: <20060401204023.804801E4006@bag.python.org> <1f7befae0604011443v52d051cfvcd0b01773ece3ec0@mail.gmail.com> <61055.89.54.32.106.1143966244.squirrel@isar.livinglogic.de> <1f7befae0604021405g16193eadicfef4bf7e4d280e6@mail.gmail.com> <44304C4E.8000707@livinglogic.de> <44312B58.3090900@gmail.com> <44313E5E.6040003@livinglogic.de> <4431D08F.8070603@canterbury.ac.nz> <44322DF7.1040603@livinglogic.de> <4432F84C.5080404@canterbury.ac.nz> Message-ID: <44337B34.4060903@livinglogic.de> Greg Ewing wrote: > Walter D?rwald wrote: >> Greg Ewing wrote: >> >>> Wouldn't it be better for the setter to raise an exception >>> if it's out of range? It probably indicates a bug in the >>> caller's code. >> >> The day before Monday is -1, so it adds a little convenience. > > In that case, why doesn't the global function > allow the same convenience? For backwards compatibility reasons. >> I think we've spent more time discussing the calendar module, than the >> Python community has spent using it! ;) > > It makes a nice change from adaptation and generic > functions. :-) ;) Bye, Walter D?rwald From ncoghlan at gmail.com Wed Apr 5 13:58:25 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 05 Apr 2006 21:58:25 +1000 Subject: [Python-Dev] reference leaks, __del__, and annotations In-Reply-To: <1f7befae0604041932v3af905ack84752bdbcb79ed27@mail.gmail.com> References: <fb6fbf560603301707t553d82c3k1681318b64aba15e@mail.gmail.com> <442DCEC7.9090906@canterbury.ac.nz> <442E0F51.7030608@gmail.com> <442E229A.8030608@canterbury.ac.nz> <9e804ac0604010219x2c4f5306w5053570eae1de04d@mail.gmail.com> <2mlkunt7a8.fsf@starship.python.net> <4431C71C.2080703@canterbury.ac.nz> <2mwte5sunt.fsf@starship.python.net> <1f7befae0604040901v34f52664w118f6e506a9dd979@mail.gmail.com> <4432FEC1.3020800@canterbury.ac.nz> <1f7befae0604041932v3af905ack84752bdbcb79ed27@mail.gmail.com> Message-ID: <4433B0E1.2030208@gmail.com> Tim Peters wrote: > Note that it's very easy to do this with __del__. The trick is for > your type not to have a __del__ method itself, but to point to a > simple "cleanup object" with a __del__ method. Give that "contained" > object references to the resources you want to close, and you're done. > Because your "real" objects don't have __del__ methods, cycles > involving them can't inhibit gc. The cleanup object's only purpose in > life is to close resources. 
Like: > > class _RealTypeResourceCleaner: > def __init__(self, *resources): > self.resources = resources > > def __del__(self): > if self.resources is not None: > for r in self.resources: > r.close() > self.resources = None > > # and typically no other methods are needed, or desirable, in > # this helper class > > class RealType: > def __init__(*args): > ... > # and then, e.g., > self.cleaner = _ResourceCleaner(resource1, resource2) > > ... tons of stuff, but no __del__ .... > > That's the simple, general way to mix cycles and __del__ without problems. So, stealing this trick for generators would involve a "helper" object with a close() method, a __del__ method that invoked it, and access to the generator's frame stack (rather than to the generator itself). class _GeneratorCleaner: __slots__ = ["_gen_frame"] def __init__(self, gen_frame): self._gen_frame = gen_frame def close(self): # Do whatever gen.close() currently does to the # raise GeneratorExit in the frame stack # and catch it again def __del__(self): self.close() The generator's close() method would then change to be: def close(self): self._cleaner.close() Would something like that eliminate the current cycle problem? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From anthony at python.org Wed Apr 5 14:48:00 2006 From: anthony at python.org (Anthony Baxter) Date: Wed, 5 Apr 2006 22:48:00 +1000 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 1) Message-ID: <200604052248.19094.anthony@python.org> On behalf of the Python development team and the Python community, I'm happy to announce the first alpha release of Python 2.5. This is an *alpha* release of Python 2.5, and is the *first* alpha release. As such, it is not suitable for a production environment. It is being released to solicit feedback and hopefully discover bugs, as well as allowing you to determine how changes in 2.5 might impact you. If you find things broken or incorrect, please log a bug on Sourceforge. In particular, note that changes to improve Python's support of 64 bit systems might require authors of C extensions to change their code. More information (as well as source distributions and Windows installers) are available from the 2.5 website: http://www.python.org/2.5/ The plan from here is for a number of additional alpha releases, followed by one or more beta releases and moving to a 2.5 final release around August. PEP 356 includes the schedule and will be updated as the schedule evolves. The new features in Python 2.5 are described in Andrew Kuchling's What's New In Python 2.5. It's available from the 2.5 web page. Amongst the language features added include conditional expressions, the with statement, the merge of try/except and try/finally into try/except/finally, enhancements to generators to produce a coroutine kind of functionality, and a brand new AST-based compiler implementation. New major modules added include hashlib, ElementTree, sqlite3 and ctypes. In addition, a new profiling module cProfile was added. A large number of bugs, regressions and reference leaks have been fixed since Python 2.4. See the release notes for more. Enjoy this new (alpha!) release, Anthony Anthony Baxter anthony at python.org Python Release Manager (on behalf of the entire python-dev team) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20060405/62a7b8ae/attachment.pgp From p.f.moore at gmail.com Wed Apr 5 15:16:09 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 5 Apr 2006 14:16:09 +0100 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 1) In-Reply-To: <200604052248.19094.anthony@python.org> References: <200604052248.19094.anthony@python.org> Message-ID: <79990c6b0604050616y41de7f3cn41d2b6f3f0c31e09@mail.gmail.com> On 4/5/06, Anthony Baxter <anthony at python.org> wrote: > On behalf of the Python development team and the Python > community, I'm happy to announce the first alpha release > of Python 2.5. Excellent! Downloading it now for a test run... One (possibly very minor) point - the web page offers Windows 64-bit MSI installers for Itanium and AMD64, but these seem to point to the same file! I'm not a Win64 user, so I don't know if this is actually wrong, but it's confusing at least... Paul. From anthony at interlink.com.au Wed Apr 5 15:20:50 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 5 Apr 2006 23:20:50 +1000 Subject: [Python-Dev] TRUNK is UNFROZEN Message-ID: <200604052320.51869.anthony@interlink.com.au> Python 2.5a1 is done. Please feel free to checkin to the trunk again. I should note here - the ubuntu dapper x86 buildbot is now running with a compiler of "icc -Wp64". This is Intel's C compiler, with warnings about potential 64 bit issues turned on. I tried with -Wall, but the icc compiler's -Wall mode is unbelievably picky and whiny, and the output was less than useful. www.python.org/dev/buildbot/all, and select one of the compile logs from the "ubuntu dapper x86 (icc)" column. Be aware that this is an old 500MHz box I had lying around that's found a new use as a buildslave, so it's not the fastest box in the world... Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at python.org Wed Apr 5 15:22:11 2006 From: anthony at python.org (Anthony Baxter) Date: Wed, 5 Apr 2006 23:22:11 +1000 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 1) In-Reply-To: <79990c6b0604050616y41de7f3cn41d2b6f3f0c31e09@mail.gmail.com> References: <200604052248.19094.anthony@python.org> <79990c6b0604050616y41de7f3cn41d2b6f3f0c31e09@mail.gmail.com> Message-ID: <200604052322.13551.anthony@python.org> On Wednesday 05 April 2006 23:16, Paul Moore wrote: > One (possibly very minor) point - the web page offers Windows > 64-bit MSI installers for Itanium and AMD64, but these seem to > point to the same file! I'm not a Win64 user, so I don't know if > this is actually wrong, but it's confusing at least... Damn. Missed that one. Fixed, will be visible again when the website auto-rebuilds (5-10 minutes). Anthony -- Anthony Baxter <anthony at python.org> It's never too late to have a happy childhood. From ndbecker2 at gmail.com Wed Apr 5 17:11:18 2006 From: ndbecker2 at gmail.com (Neal Becker) Date: Wed, 05 Apr 2006 11:11:18 -0400 Subject: [Python-Dev] suggest: nowait option in subprocess.communicate Message-ID: <e10mmb$cgl$1@sea.gmane.org> I'd like to start several processes, each a pipe reading from my python main process. It looks like I want to write all my data to each process, then use communicate(), but I don't want to wait for each process yet, since then they would block each other. Why not add a nowait option to communicate? 
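For what it's worth, one way to get that effect with the module as it stands
is to run each child's communicate() in its own thread, so the writes and
waits overlap instead of serialising. A sketch only: 'cat' is just a stand-in
child process and feed() is a made-up helper, not anything in subprocess.

import subprocess
import threading

def feed(proc, data, results, key):
    # communicate() writes the data, closes stdin, then collects the output
    results[key] = proc.communicate(data)

procs = [subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)
         for i in range(3)]
results = {}
threads = [threading.Thread(target=feed, args=(p, 'some data\n', results, i))
           for i, p in enumerate(procs)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print results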
From anthony at interlink.com.au Wed Apr 5 17:20:10 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 6 Apr 2006 01:20:10 +1000 Subject: [Python-Dev] TRUNK is UNFROZEN In-Reply-To: <200604052320.51869.anthony@interlink.com.au> References: <200604052320.51869.anthony@interlink.com.au> Message-ID: <200604060120.14058.anthony@interlink.com.au> On Wednesday 05 April 2006 23:20, Anthony Baxter wrote: > www.python.org/dev/buildbot/all, That should be www.python.org/dev/buildbot/all/ (needs the trailing /) Anthony From skip at pobox.com Wed Apr 5 18:12:07 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 5 Apr 2006 11:12:07 -0500 Subject: [Python-Dev] Suggestion: Please login to wiki when you make changes Message-ID: <17459.60503.215178.405841@montanaro.dyndns.org> I know it's a minor point, but for those of us who monitor changes to the wiki I think it would make our task a bit easier if people within the Python developer community were logged in when they made changes. That way, instead of seeing a mail subject like [PythonInfo Wiki] Update of "BuildBot" by 24.6.150.200 I would see a person or login I recognize, like [PythonInfo Wiki] Update of "BuildBot" by SkipMontanaro Seeing a recognized name, I could just delete the mail without considering if it was wiki spam. It's a small point, but when you get hundreds of emails a day, speed is everything... Thx, Skip From abo at minkirri.apana.org.au Wed Apr 5 18:47:50 2006 From: abo at minkirri.apana.org.au (Donovan Baarda) Date: Wed, 05 Apr 2006 17:47:50 +0100 Subject: [Python-Dev] strftime/strptime locale funnies... Message-ID: <1144255670.3991.62.camel@warna.dub.corp.google.com> G'day, Just noticed on Debian (testing), Ubuntu (warty?), and RedHat (old) based systems Python's time.strptime() seems to ignore the environment's Locale and just uses "C". Last time I looked at this, time.strptime() leveraged off the platform's strptime(), which meant it had all the extra features, bugs and missingness of the platform's implementation. We now seem to be using a Python implementation in _strptime.py. This implementation does Locale's by feeding a magic date to time.strftime() and figuring out how it formats it. This revealed that time.strftime() is not honouring the Locale settings, which is causing the new Python strptime() to also get it wrong. $ set | grep "^LC\|LANG" GDM_LANG=en_AU.UTF-8 LANG=en_AU.UTF-8 LANGUAGE=en_AU.UTF-8 LC_COLLATE=C $ date -d "1999-02-22" +%x 22/02/99 $ python ... >>> import time >>> time.strftime("%x", time.strptime("1999-02-22","%Y-%m-%d")) '02/22/99' This is consistent across all three platforms for multiple Python versions, including 2.1 and 1.5 (where they were available) which BTW don't use the Python implementation of strptime(). This suggests that all three of these platforms have a broken libc strftime() implementation... but all three? And why does date work? Can others reproduce this? Have I done something stupid? Is this a bug, and in what, libc or Python? Slightly OT, is it wise to use a Python strptime() on platforms that have a perfectly good one in libc? The Python reverse-engineering of libc's strftime() output to figure out locale formatting is clever, but... I see there have already been bugs submitted about strftime/strptime non-symmetry for things like support of extensions. There has also been a bug against strptime() Locale switching not working because of caching Locale formatting info from the strftime() analysis, but I can't seem to get non-C Locale's working at all... 
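A quick check of the usual suspect: a C program (and hence the interpreter)
starts out in the "C" locale until something calls setlocale(LC_ALL, ""), and
Python leaves that call to the application. Something like the following
shows strftime() switching formats as soon as the locale is taken from the
environment (output obviously depends on your LANG/LC_* settings):

import locale
import time

t = time.strptime("1999-02-22", "%Y-%m-%d")
print locale.setlocale(locale.LC_ALL)     # typically 'C' at startup
print time.strftime("%x", t)              # '02/22/99' in the C locale
locale.setlocale(locale.LC_ALL, "")       # adopt the environment's locale
print time.strftime("%x", t)              # e.g. '22/02/99' under en_AU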
-- Donovan Baarda <abo at minkirri.apana.org.au> http://minkirri.apana.org.au/~abo/ From rwgk at yahoo.com Wed Apr 5 19:54:13 2006 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Wed, 5 Apr 2006 10:54:13 -0700 (PDT) Subject: [Python-Dev] PY_SSIZE_T_MIN? Message-ID: <20060405175413.56695.qmail@web31501.mail.mud.yahoo.com> Congratulations to the Python 2.5a1 release! I started adjusting Boost.Python to work with this new release and it is going very well. I noticed this #define in pyport.h: #define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1)) However, I couldn't find a corresponding PY_SSIZE_T_MIN which would come in handy to adjust old code using INT_MIN (from limits.h). Are there arguments against defining PY_SSIZE_T_MIN? Or is this just an oversight? Objects/longobject.c uses: return -PY_SSIZE_T_MAX-1; To maximize consistency this would seem ideal to me: pyport.h: #define PY_SSIZE_T_MIN (-PY_SSIZE_T_MAX-1) longobject.c: return PY_SSIZE_T_MIN; Cheers, Ralf __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From benji at benjiyork.com Wed Apr 5 20:10:38 2006 From: benji at benjiyork.com (Benji York) Date: Wed, 05 Apr 2006 14:10:38 -0400 Subject: [Python-Dev] 2.5a1 Performance Message-ID: <4434081E.1000806@benjiyork.com> Realizing that early releases don't normally perform as well as final releases, I ran pystone for 2.5a1 and compared with 2.4.2 (what I had handy). 2.5a1 got slightly more than 30k, while 2.4.2 gets slightly more than 35k (1.4 GHz, Pentium M, 1 Meg L2 cache). I also ran a large test suite for a project with both 2.5a1 and 2.4.2 and got nearly identical times. I haven't seen general performance mentioned here lately, so I thought I'd share those numbers. On a related note: it might be nice to put a pystone run in the buildbot so it'd be easier to compare pystones across different releases, different architectures, and between particular changes to the code. (That's assuming that the machines are otherwise idle, though.) -- Benji York From jeremy at alum.mit.edu Wed Apr 5 20:19:16 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 5 Apr 2006 14:19:16 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4434081E.1000806@benjiyork.com> References: <4434081E.1000806@benjiyork.com> Message-ID: <e8bf7a530604051119r636ec38bod2f8692557ce701c@mail.gmail.com> On 4/5/06, Benji York <benji at benjiyork.com> wrote: > Realizing that early releases don't normally perform as well as final > releases, I ran pystone for 2.5a1 and compared with 2.4.2 (what I had > handy). 2.5a1 got slightly more than 30k, while 2.4.2 gets slightly > more than 35k (1.4 GHz, Pentium M, 1 Meg L2 cache). We should verify that all the compiler optimizations that existed in Python 2.4 still exist in Python 2.5. We didn't expend much effort in that area during the compiler development. Not sure if there are any that would affect pystone or not. Jeremy > > I also ran a large test suite for a project with both 2.5a1 and 2.4.2 > and got nearly identical times. > > I haven't seen general performance mentioned here lately, so I thought > I'd share those numbers. > > On a related note: it might be nice to put a pystone run in the buildbot > so it'd be easier to compare pystones across different releases, > different architectures, and between particular changes to the code. > (That's assuming that the machines are otherwise idle, though.) 
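One low-tech way to spot-check that is to disassemble a few small functions
under both interpreters and diff the output by eye; missing peephole work
(jump shortcuts, constant folding and the like) shows up as extra opcodes.
For example:

import dis

def sample():
    x = None
    if not x:
        return (1, 2, 3)
    return None

dis.dis(sample)    # run under 2.4 and 2.5 and compare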
> -- > Benji York > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu > From martin at v.loewis.de Wed Apr 5 20:22:27 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 05 Apr 2006 20:22:27 +0200 Subject: [Python-Dev] PY_SSIZE_T_MIN? In-Reply-To: <20060405175413.56695.qmail@web31501.mail.mud.yahoo.com> References: <20060405175413.56695.qmail@web31501.mail.mud.yahoo.com> Message-ID: <44340AE3.7080907@v.loewis.de> Ralf W. Grosse-Kunstleve wrote: > #define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1)) > > However, I couldn't find a corresponding PY_SSIZE_T_MIN which would come in > handy to adjust old code using INT_MIN (from limits.h). Are there arguments > against defining PY_SSIZE_T_MIN? Or is this just an oversight? That's just an oversight; I just added it. Thanks for pointing that out, Martin From martin at v.loewis.de Wed Apr 5 20:27:14 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 05 Apr 2006 20:27:14 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4434081E.1000806@benjiyork.com> References: <4434081E.1000806@benjiyork.com> Message-ID: <44340C02.9050207@v.loewis.de> Benji York wrote: > Realizing that early releases don't normally perform as well as final > releases, I ran pystone for 2.5a1 and compared with 2.4.2 (what I had > handy). 2.5a1 got slightly more than 30k, while 2.4.2 gets slightly > more than 35k (1.4 GHz, Pentium M, 1 Meg L2 cache). What operating system and compiler? > On a related note: it might be nice to put a pystone run in the buildbot > so it'd be easier to compare pystones across different releases, > different architectures, and between particular changes to the code. > (That's assuming that the machines are otherwise idle, though.) That would do it across different architectures and particular changes to the code (although then something should keep a record of what pystone had been observed at what time). It won't do it across different releases, because we don't have Python binaries for each release available on each slave. Unfortunately, there can't be a guarantee that the slaves are idle. Several of them fulfill some primary function for whoever contributed the slave installation (I know *my* Solaris machine has to do other things, as well). Regards, Martin From jcarlson at uci.edu Wed Apr 5 21:39:15 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 05 Apr 2006 11:39:15 -0800 Subject: [Python-Dev] suggest: nowait option in subprocess.communicate In-Reply-To: <e10mmb$cgl$1@sea.gmane.org> References: <e10mmb$cgl$1@sea.gmane.org> Message-ID: <20060405104809.0659.JCARLSON@uci.edu> Neal Becker <ndbecker2 at gmail.com> wrote: > > I'd like to start several processes, each a pipe reading from my python main > process. It looks like I want to write all my data to each process, then > use communicate(), but I don't want to wait for each process yet, since > then they would block each other. Why not add a nowait option to > communicate? Alternatively, someone could commit some rough equivalent of this to the subprocess module: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/440554 Yes, I wrote it, and previously suggested it within the previous thread "Add timeout to subprocess.py?". 
Guido had previously privately asked if it could be phased as a patch to subprocess.py in such a way to be 2.2 compatible. Aside from fcntl module semantics (I don't know if they changed, whether it should be using FCNTL instead, ...), and/or the 3 additional functions from pywin32 that probably should be included in the _subprocess module on Windows, the core functionality of a pollable subprocess is available in the above recipe. - Josiah From benji at benjiyork.com Wed Apr 5 20:55:37 2006 From: benji at benjiyork.com (Benji York) Date: Wed, 05 Apr 2006 14:55:37 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <44340C02.9050207@v.loewis.de> References: <4434081E.1000806@benjiyork.com> <44340C02.9050207@v.loewis.de> Message-ID: <443412A9.1020201@benjiyork.com> Martin v. L?wis wrote: > What operating system and compiler? Oops, should have included that: Ubuntu Breezy, Kernel 2.6.12-10-686 GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9) > It won't do it across different > releases, because we don't have Python binaries for each release > available on each slave. I was thinking of "active" branches that there are buildbot slaves dedicated to (2.4 at the moment) and the trunk, but (as I mentioned) non-idleness pretty much kills that idea. I wonder if the slaves that are known to be dedicated to running buildbot and nothing else could be flagged as such. -- Benji York From theller at python.net Wed Apr 5 20:57:23 2006 From: theller at python.net (Thomas Heller) Date: Wed, 05 Apr 2006 20:57:23 +0200 Subject: [Python-Dev] How to determine if char is signed or unsigned? Message-ID: <e113un$18t$1@sea.gmane.org> Is there are #define symbol which allows to determine if 'char' is signed or unsigned? __CHAR_UNSIGNED__, maybe? I guess the buildbot failures on the ppc debian box are caused by ctypes using signed chars always. Thanks, Thomas From martin at v.loewis.de Wed Apr 5 21:00:10 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 05 Apr 2006 21:00:10 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <443412A9.1020201@benjiyork.com> References: <4434081E.1000806@benjiyork.com> <44340C02.9050207@v.loewis.de> <443412A9.1020201@benjiyork.com> Message-ID: <443413BA.3080804@v.loewis.de> Benji York wrote: > I was thinking of "active" branches that there are buildbot slaves > dedicated to (2.4 at the moment) and the trunk, but (as I mentioned) > non-idleness pretty much kills that idea. I wonder if the slaves that > are known to be dedicated to running buildbot and nothing else could be > flagged as such. The problem is that even these could do different things simultaneously: I still haven't figured out how to mutually lock out builders that are on the same slave. This is a frequent thing to happen, as people often check-in trunk and backported branch patches nearly simultaneously (which is fine, of course - the machines just have to cater with that). Regards, Martin From brett at python.org Wed Apr 5 21:13:35 2006 From: brett at python.org (Brett Cannon) Date: Wed, 5 Apr 2006 12:13:35 -0700 Subject: [Python-Dev] strftime/strptime locale funnies... 
In-Reply-To: <1144255670.3991.62.camel@warna.dub.corp.google.com> References: <1144255670.3991.62.camel@warna.dub.corp.google.com> Message-ID: <bbaeab100604051213m7f103b19ufd819ca8b95674fe@mail.gmail.com> On 4/5/06, Donovan Baarda <abo at minkirri.apana.org.au> wrote: > G'day, > > Just noticed on Debian (testing), Ubuntu (warty?), and RedHat (old) > based systems Python's time.strptime() seems to ignore the environment's > Locale and just uses "C". > > Last time I looked at this, time.strptime() leveraged off the platform's > strptime(), which meant it had all the extra features, bugs and > missingness of the platform's implementation. > > We now seem to be using a Python implementation in _strptime.py. This > implementation does Locale's by feeding a magic date to time.strftime() > and figuring out how it formats it. > The Python implementation of time.strptime() has been in Python since summer 2002 so it was first introduced in 2.3 (I can't believe I have been on python-dev that long!). > This revealed that time.strftime() is not honouring the Locale settings, > which is causing the new Python strptime() to also get it wrong. > This isn't time.strftime() . If you look at Modules/timemodule.c:450 you will find ``buflen = strftime(outbuf, i, fmt, &buf);`` for the actual strftime() call. Before that the only things that could possibly change the locale are localtime() or gettmarg(). Everything else is error-checking of arguments. > $ set | grep "^LC\|LANG" > GDM_LANG=en_AU.UTF-8 > LANG=en_AU.UTF-8 > LANGUAGE=en_AU.UTF-8 > LC_COLLATE=C > > $ date -d "1999-02-22" +%x > 22/02/99 > > $ python > ... > >>> import time > >>> time.strftime("%x", time.strptime("1999-02-22","%Y-%m-%d")) > '02/22/99' > > This is consistent across all three platforms for multiple Python > versions, including 2.1 and 1.5 (where they were available) which BTW > don't use the Python implementation of strptime(). > > This suggests that all three of these platforms have a broken libc > strftime() implementation... but all three? And why does date work? > Beats me. This could be a locale thing. If I remember correctly Python assumes the C locale on some things. I suspect the reason for this is in the locale module or libc. But you can't even find the word 'locale' or 'Locale' in timemodule.c nor do I know of any calls that mess with the locale, so I doubt 'time' is at fault for this. > Can others reproduce this? Have I done something stupid? Is this a bug, > and in what, libc or Python? > > Slightly OT, is it wise to use a Python strptime() on platforms that > have a perfectly good one in libc? The Python reverse-engineering of > libc's strftime() output to figure out locale formatting is clever, > but... > The reason it was made the default implementation is for consistency across platforms. Since the trouble to implement a pure Python version was done and it is not a performance critical operation, consistency across platforms was deemed more important than allowing various platforms to support whatever directives they chose and having people writing code the relied upon it. Plus it has been in there for so long there would be backwards-compatibility issues if we switched this now. > I see there have already been bugs submitted about strftime/strptime > non-symmetry for things like support of extensions. There has also been > a bug against strptime() Locale switching not working because of caching > Locale formatting info from the strftime() analysis, None open, right? Those should all be closed and fixed. 
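As a pointer for anyone digging into this: time.strptime() has delegated to
the pure-Python Lib/_strptime.py since 2.3, so its directive support really
is identical everywhere, while time.strftime() still goes straight to the
platform's C strftime(). A two-line check:

import time, _strptime

print time.strptime("5 Apr 2006", "%d %b %Y")[:3]       # (2006, 4, 5)
print _strptime.strptime("5 Apr 2006", "%d %b %Y")[:3]  # same code path underneath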
-Brett From zpincus at stanford.edu Wed Apr 5 21:28:11 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Wed, 5 Apr 2006 14:28:11 -0500 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? In-Reply-To: <18AED6F0-32EB-4743-9AE9-1E060BA35C93@redivi.com> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <ee2a432c0604032101o7a8d0819m9761f3636daf454f@mail.gmail.com> <18AED6F0-32EB-4743-9AE9-1E060BA35C93@redivi.com> Message-ID: <695AC05F-E36B-4A21-9DD1-D4572C4846BE@stanford.edu> Hello folks, I just ran all the test iterations Martin suggested on Py2.5a1. That is, I did a normal build and ran 'make test', then installed and ran 'import test.regrtest; test.regrtest.main()', and then I did the whole thing over again with a framework build and install. All four test runs worked out fine. So at least on 10.4.5, this patch seems to be as tested as it's going to be. I've had some reports that it works fine on 10.3, and the patch doesn't change anything on 10.2. So given Bob's previous OK, this is probably ready to be applied -- unless anyone else has further concerns? Zach PS. I should mention as an aside that test_startfile.py is reported as 'failing unexpectedly on darwin', but since startfile is a windows thing, it really should be added to the expected tests in Lib/test/ regrtest.py. My patch didn't mess this up, though -- the startfile test is absent from the 'exclude' list in the SVN repository. On Apr 4, 2006, at 11:57 AM, Bob Ippolito wrote: >> No, it's fine. Thanks for reminding us about this issue. >> Unfortunately, without an explicit ok from one of the Mac >> maintainers, >> I don't want to add this myself. If you can get Bob, Ronald, or Jack >> to say ok, I will apply the patch ASAP. I have a Mac OS X.4 box and >> can test it, but don't know the suitability of the patch. > > The patch has my OK (I gave it a while ago on pythonmac-sig). > > -bob > From martin at v.loewis.de Wed Apr 5 21:30:48 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 05 Apr 2006 21:30:48 +0200 Subject: [Python-Dev] How to determine if char is signed or unsigned? In-Reply-To: <e113un$18t$1@sea.gmane.org> References: <e113un$18t$1@sea.gmane.org> Message-ID: <44341AE8.9050400@v.loewis.de> Thomas Heller wrote: > Is there are #define symbol which allows to determine if > 'char' is signed or unsigned? __CHAR_UNSIGNED__, maybe? You could define an autoconf test for that. There is no predefined symbol. > I guess the buildbot failures on the ppc debian box are caused by > ctypes using signed chars always. So you think char is unsigned on ppc debian? I would find that very surprising. Regards, Martin From tim.peters at gmail.com Wed Apr 5 21:54:16 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 5 Apr 2006 15:54:16 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4434081E.1000806@benjiyork.com> References: <4434081E.1000806@benjiyork.com> Message-ID: <1f7befae0604051254g7b27f84fsf3b7001d0893fb7@mail.gmail.com> FYI, on my WinXP box, there appears to be about a 1% pystone difference: best seen for 2.4.3: 48118.9 best seen for trunk: 47629.8 While tiny, the difference "looked real", as many runs on 2.4.3 broke 48000 but none did on the trunk. Note that pystone uses wall-clock time on Windows (with sub-microsecond resolution), but CPU time on Linux (with resolution that varies by Linux flavor, but maybe no better than 0.01 second) -- that's all inherited from time.clock. 
As a result, I never see the same pystone result twice on Windows :-) When thinking about fiddling the buildbots, remember that we only do debug builds now, and pystone is much slower then. For example, on this box, the best trunk debug-build pystone result I got was 17484.3 (a factor of 2.7x smaller). From Scott.Daniels at Acm.Org Wed Apr 5 21:56:14 2006 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Wed, 05 Apr 2006 12:56:14 -0700 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <29419.1144219627@post.harvard.edu> References: <29419.1144219627@post.harvard.edu> Message-ID: <e117c3$e6d$1@sea.gmane.org> Jess Austin wrote: > Alex wrote: >> On Apr 4, 2006, at 10:53 PM, Jess Austin wrote: >>> Alex wrote: >>>> import collections >>>> def tally(seq): >>>> d = collections.defaultdict(int) >>>> for item in seq: >>>> d[item] += 1 >>>> return dict(d) [Jess again] >>> def tally(seq): >>> return dict((group[0], len(tuple(group[1]))) >>> for group in itertools.groupby(sorted(seq))) >>> In general itertools.groupby() seems like a very clean way to do this >>> sort of thing, whether you want to end up with a dict or not. I'll go >>> so far as to suggest that the existence of groupby() obviates the >>> proposed tally(). Maybe I've just coded too much SQL and it has >>> warped my brain... > You're right in that it won't raise an exception on an iterator, but the > sorted() means that it's much less memory efficient than your version > for iterators. Another reason to avoid sorted() for this application, > besides the big-O. Anyway, I still like groupby() for this sort of > thing, with the aforementioned caveats. Functional code seems a little > clearer to me, although I realize that preference is not held > universally. However, sorted requires ordering. Try seq = [1, 1j, -1, -1j] * 5 Alex's tally works, but yours does not. -- -- Scott David Daniels Scott.Daniels at Acm.Org From nnorwitz at gmail.com Wed Apr 5 22:05:24 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 5 Apr 2006 13:05:24 -0700 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <443412A9.1020201@benjiyork.com> References: <4434081E.1000806@benjiyork.com> <44340C02.9050207@v.loewis.de> <443412A9.1020201@benjiyork.com> Message-ID: <ee2a432c0604051305g5b9092edw9f9b10967e8de7be@mail.gmail.com> On 4/5/06, Benji York <benji at benjiyork.com> wrote: > Martin v. L?wis wrote: > > What operating system and compiler? > > Oops, should have included that: > Ubuntu Breezy, Kernel 2.6.12-10-686 > GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9) 32-bit or 64-bit? I would expect a modest diff on 64-bit between 2.4 and 2.5. You built both HEAD and 2.4 from scratch, right? To get more consistent results, you need to pass an option to gcc to tell it to use consistent branch prediction (or something like that). Otherwise, you will not get consistent results with the same source code. This option is not in the build system, you'll need to research it. I don't expect that to make more than a couple tenths of a percent diff though. I haven't done any profiling and I'm not likely to before 2.5 is released. It would be good if someone took a look at this though. 
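In case someone does want to take a quick look, the new cProfile module that
ships with 2.5 (mentioned in the release announcement above) makes a first
pass cheap; something along these lines, with the loop count chosen to taste:

import cProfile
import pstats
from test import pystone

cProfile.run('pystone.main(loops=50000)', 'pystone.prof')
pstats.Stats('pystone.prof').sort_stats('time').print_stats(10)

(Under 2.4 the plain profile module works the same way, just more slowly.)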
n From benji at benjiyork.com Wed Apr 5 22:12:38 2006 From: benji at benjiyork.com (Benji York) Date: Wed, 05 Apr 2006 16:12:38 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <ee2a432c0604051305g5b9092edw9f9b10967e8de7be@mail.gmail.com> References: <4434081E.1000806@benjiyork.com> <44340C02.9050207@v.loewis.de> <443412A9.1020201@benjiyork.com> <ee2a432c0604051305g5b9092edw9f9b10967e8de7be@mail.gmail.com> Message-ID: <443424B6.4030301@benjiyork.com> Neal Norwitz wrote: > 32-bit or 64-bit? I would expect a modest diff on 64-bit between 2.4 and 2.5. 32-bit; don't know of any 64-bit Pentium Ms :) > You built both HEAD and 2.4 from scratch, right? Right. -- Benji York From Scott.Daniels at Acm.Org Wed Apr 5 22:31:10 2006 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Wed, 05 Apr 2006 13:31:10 -0700 Subject: [Python-Dev] Should issubclass() be more like isinstance()? In-Reply-To: <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> References: <d49fe110604041727l6ecb9776kbb65f01b6e0335f9@mail.gmail.com> <4433137D.5040901@canterbury.ac.nz> <d49fe110604041755w5a4c8ed2q94038398dd6c5b2d@mail.gmail.com> Message-ID: <e119di$mgc$1@sea.gmane.org> Crutcher Dunnavant wrote: > On 4/4/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: >> Crutcher Dunnavant wrote: >>> B) issubclass() won't work on a list of classs, >> > the way isinstance() does. >> >> That sounds more reasonable. I can't think of any >> reason why it shouldn't work. There is an issue of confusion: class CheeseShop(Sketch, ByCleese, ByGraham): pass You need to be careful that the user understands issubclass(SillyWalks, (Sketch, ByCleese, ByGraham)) is not simply testing that SillyWalks has the same ancestors as CheeseShop, but is True simply because issubclass(SillyWalks, Sketch) is True. More a document issue than anything, but to be considered. --Scott David Daniels Scott.Daniels at Acm.Org From warner at lothar.com Wed Apr 5 23:01:26 2006 From: warner at lothar.com (Brian Warner) Date: Wed, 05 Apr 2006 14:01:26 -0700 (PDT) Subject: [Python-Dev] 2.5a1 Performance Message-ID: <20060405.140126.73022967.warner@lothar.com> > I still haven't figured out how to mutually lock out builders that are > on the same slave. This is a frequent thing to happen, as people often > check-in trunk and backported branch patches nearly simultaneously > (which is fine, of course - the machines just have to cater with that). You can tell the Builder to claim a slave-wide "Lock" before starting the build. Only one build can claim this Lock at a time, which will probably accomplish what you want for performance testing (assuming the host isn't doing anything other than buildbot work, of course). I don't know what the python buildbot's master.cfg looks like, but you'll probably want to add something like this (taken from the buildbot.texinfo user's manual) from buildbot import locks cpulock = locks.SlaveLock("cpu") b1 = {'name': 'buildername', 'slavename': 'bot-1', 'factory': f, 'locks': [cpulock]} c['builders'].append(b1) The name of the lock is meant for debugging, so you can tell why a given builder is stalled. Each buildslave gets its own lock, and the builder claims every lock listed in the 'locks' key before starting. The one big problem with this approach is that the build looks like it's stalled while it waits for the lock: you get a big yellow box between the time it is supposed to start and the time it finally claims the lock. 
Oh, and obviously you need to list the lock in all builders that are supposed to compete for it. cheers, -Brian From jepler at unpythonic.net Wed Apr 5 23:16:19 2006 From: jepler at unpythonic.net (Jeff Epler) Date: Wed, 5 Apr 2006 16:16:19 -0500 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4434081E.1000806@benjiyork.com> References: <4434081E.1000806@benjiyork.com> Message-ID: <20060405211619.GA16703@unpythonic.net> I compiled 2.4 and 2.5 from svn. The machine is Fedora Core 4, AMD64. I built both with ./configure && make (which gives a "64-bit" binary) and then ran pystone with 200000 iterations 10 times: for i in `seq 1 10`; do ./python Lib/test/pystone.py 200000 ; done The machine was "near idle" at the time. The best result for 2.4 (svn revision 43663) was 4.49 seconds (44343.4 pystones/second), which occurred on 4 of the runs. The best result for 2.5 (svn revision 43675) was 4.27 seconds (46620 pystones/second), which occurred on only one run. 4.29 seconds, the next best time, occurred on 3 runs. I'm not trivially able to try a 32-bit build, but for my system it appears that 2.5 is moderately faster than 2.4 when built with all the defaults. Jeff From benji at benjiyork.com Thu Apr 6 00:04:00 2006 From: benji at benjiyork.com (Benji York) Date: Wed, 05 Apr 2006 18:04:00 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <20060405211619.GA16703@unpythonic.net> References: <4434081E.1000806@benjiyork.com> <20060405211619.GA16703@unpythonic.net> Message-ID: <44343ED0.1060208@benjiyork.com> Jeff Epler wrote: > I'm not trivially able to try a 32-bit build, but for my system it > appears that 2.5 is moderately faster than 2.4 when built with all the > defaults. OK, this prompted me to question my sanity. Being on a laptop the default is to do frequency scaling (different speeds depending on CPU load). When running pystone I've always seen an initial run that was much slower than subsequent runs, so I'd run several and throw out the first few. This gives a max of 35k pystones with 2.4.2 and 30k with 2.5a1. I decided to force the CPU freq. to the maximum (1.4 GHz) and remeasure. 2.5a1 reported the same pystones (30k) and, to my surprise, 2.4 reported 31k (about 5k *less* than with freq. scaling on). So there seems to be some interaction between frequency scaling and pystones. Benchmarking is hard, let's go shopping! -- Benji York From p.f.moore at gmail.com Thu Apr 6 00:17:15 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 5 Apr 2006 23:17:15 +0100 Subject: [Python-Dev] Possible issue with 2.5a1 Win32 binary Message-ID: <79990c6b0604051517w118041dbsb822ea4fae8cd23d@mail.gmail.com> Can someone check http://www.python.org/sf/1465093 for me? It looks like a fairly serious issue with the Windows binaries - pywin32 is a pretty important package on Windows. I've verified it on 2 machines, but can't work out what the issue might be. I've assigned it to Martin, as the owner of the MSI files, but if someone else is better placed to check it out, please do. I'm happy to run tests, or whatever to help locate the issue, but I don't have a Windows build environment to build Python myself. Thanks, Paul. 
From python at rcn.com Thu Apr 6 00:21:55 2006 From: python at rcn.com (Raymond Hettinger) Date: Wed, 5 Apr 2006 15:21:55 -0700 Subject: [Python-Dev] 2.5a1 Performance References: <4434081E.1000806@benjiyork.com><20060405211619.GA16703@unpythonic.net> <44343ED0.1060208@benjiyork.com> Message-ID: <00fe01c658ff$59b97930$6c3c0a0a@RaymondLaptop1> > Benchmarking is hard, let's go shopping! Quick reminder: pystone is mostly useful for predicting Python's relative performance across various machines and operating systems. For benchmarking Python itself, pystone is a seriously impaired tool. For one, it exercises only a tiny subset of the language. For another, it times an empty loop and subtracts that from the result of loops with bodies -- that means that improvements/impairments to the eval-loop get netted-out of the result. Let's stop talking about pystone in this thread and focus on meaningful metrics instead. If you want some good measurements of the eval-loop speed and a few simple instructions, use timeit.py. The results should be directly comparable between Py2.4 and Py2.5a. If you want good measurements that specifically exercise a wide gamut of commonly used functions, then use pybench.py. If you want to thoroughly exercise the language, use the parrot benchmark in the sandbox. Of course, the only truly useful benchmark is how Python performs on your own apps. Raymond From tdelaney at avaya.com Thu Apr 6 00:29:57 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Thu, 6 Apr 2006 08:29:57 +1000 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 1) Message-ID: <2773CAC687FD5F4689F526998C7E4E5F07436A@au3010avexu1.global.avaya.com> Anthony Baxter wrote: > On behalf of the Python development team and the Python > community, I'm happy to announce the first alpha release > of Python 2.5. I noticed in PEP 356 Open Issues "StopIteration should propagate from context managers" that there's a still a question (from Jim Jewett) about whether the fix is correct. We'll need to get confirmation from PJE, but I'm pretty sure this can be removed from open issues - returning False from __exit__ will result in the original exception being re-raised. Tim Delaney From pje at telecommunity.com Thu Apr 6 01:01:13 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 05 Apr 2006 16:01:13 -0700 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 1) In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5F07436A@au3010avexu1.global .avaya.com> References: <2773CAC687FD5F4689F526998C7E4E5F07436A@au3010avexu1.global.avaya.com> Message-ID: <7.0.1.0.0.20060405160009.021273e0@telecommunity.com> At 03:29 PM 4/5/2006, Delaney, Timothy (Tim) wrote: >Anthony Baxter wrote: > > > On behalf of the Python development team and the Python > > community, I'm happy to announce the first alpha release > > of Python 2.5. > >I noticed in PEP 356 Open Issues "StopIteration should propagate from >context managers" that there's a still a question (from Jim Jewett) >about whether the fix is correct. > >We'll need to get confirmation from PJE, but I'm pretty sure this can be >removed from open issues - returning False from __exit__ will result in >the original exception being re-raised. It is correct, and for the reason you describe. __exit__ *must* return True or False unless the generator raises a new, non-StopIteration exception. All other exceptions must be swallowed in __exit__. 
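The rule is easy to see with a toy context manager; flip the return value and
the ValueError below is swallowed instead of propagating (illustrative code
only, not taken from the PEP):

from __future__ import with_statement   # needed in 2.5

class Demo(object):
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc_value, tb):
        print 'exiting:', exc_type
        return False      # False -> the original exception propagates

with Demo():
    raise ValueError('boom')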
From greg.ewing at canterbury.ac.nz Thu Apr 6 02:21:06 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 06 Apr 2006 12:21:06 +1200 Subject: [Python-Dev] tally (and other accumulators) In-Reply-To: <28119.1144216418@post.harvard.edu> References: <28119.1144216418@post.harvard.edu> Message-ID: <44345EF2.7010304@canterbury.ac.nz> Jess Austin wrote: > I'll go > so far as to suggest that the existence of groupby() obviates the > proposed tally(). Except that it requires building a list of values in each group when all you want at the end is the length of the list. -- Greg From martin at v.loewis.de Thu Apr 6 02:36:43 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 06 Apr 2006 02:36:43 +0200 Subject: [Python-Dev] Possible issue with 2.5a1 Win32 binary In-Reply-To: <79990c6b0604051517w118041dbsb822ea4fae8cd23d@mail.gmail.com> References: <79990c6b0604051517w118041dbsb822ea4fae8cd23d@mail.gmail.com> Message-ID: <4434629B.7020701@v.loewis.de> Paul Moore wrote: > Can someone check http://www.python.org/sf/1465093 for me? It looks > like a fairly serious issue with the Windows binaries - pywin32 is a > pretty important package on Windows. My feeling is that this is a very shallow, easily fixed problem. > I've verified it on 2 machines, but can't work out what the issue > might be. I've assigned it to Martin, as the owner of the MSI files, > but if someone else is better placed to check it out, please do. I'm > happy to run tests, or whatever to help locate the issue, but I don't > have a Windows build environment to build Python myself. What happens when you run D:\Apps\Python25\python.exe -Wi D:\Apps\Python25\Lib\compileall.py -f -x badsyntax D:\Apps\Python25\Lib and look at the status of the program? I think also excluding bad_coding might already help. Regards, Martin From anthony at interlink.com.au Thu Apr 6 03:08:26 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 6 Apr 2006 11:08:26 +1000 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4434081E.1000806@benjiyork.com> References: <4434081E.1000806@benjiyork.com> Message-ID: <200604061108.30416.anthony@interlink.com.au> On Thursday 06 April 2006 04:10, Benji York wrote: > On a related note: it might be nice to put a pystone run in the > buildbot so it'd be easier to compare pystones across different > releases, different architectures, and between particular changes > to the code. (That's assuming that the machines are otherwise idle, > though.) -- -1. A bad benchmark (which pystone is) is much worse than no benchmark. -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Thu Apr 6 03:11:49 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 6 Apr 2006 11:11:49 +1000 Subject: [Python-Dev] Use dlopen() on Darwin/OS X to load extensions? In-Reply-To: <695AC05F-E36B-4A21-9DD1-D4572C4846BE@stanford.edu> References: <6C165D9B-CD78-4CC5-BB35-1A3FB6C4FFE3@stanford.edu> <18AED6F0-32EB-4743-9AE9-1E060BA35C93@redivi.com> <695AC05F-E36B-4A21-9DD1-D4572C4846BE@stanford.edu> Message-ID: <200604061111.50789.anthony@interlink.com.au> On Thursday 06 April 2006 05:28, Zachary Pincus wrote: > PS. I should mention as an aside that test_startfile.py is reported > as 'failing unexpectedly on darwin', but since startfile is a > windows thing, it really should be added to the expected tests in > Lib/test/ regrtest.py. 
My patch didn't mess this up, though -- the > startfile test is absent from the 'exclude' list in the SVN > repository. I fixed this shortly after Py2.5a1. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From greg.ewing at canterbury.ac.nz Thu Apr 6 05:30:06 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 06 Apr 2006 15:30:06 +1200 Subject: [Python-Dev] elementtree in stdlib Message-ID: <44348B3E.2000607@canterbury.ac.nz> A while ago there was some discussion about including elementtree in the std lib. I can't remember what the conclusion about that was, but if it does go ahead, I'd like to suggest that it be reorganised a bit. I've just started playing with it, and having a package called elementtree containing a module called ElementTree containing a class called ElementTree is just too confusing for words! -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From aleaxit at gmail.com Thu Apr 6 06:02:40 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Wed, 5 Apr 2006 21:02:40 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <44348B3E.2000607@canterbury.ac.nz> References: <44348B3E.2000607@canterbury.ac.nz> Message-ID: <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> On Apr 5, 2006, at 8:30 PM, Greg Ewing wrote: > A while ago there was some discussion about including > elementtree in the std lib. I can't remember what the > conclusion about that was, but if it does go ahead, > I'd like to suggest that it be reorganised a bit. > > I've just started playing with it, and having a > package called elementtree containing a module > called ElementTree containing a class called > ElementTree is just too confusing for words! Try the 2.5 alpha 1 just released, and you'll see that the toplevel package is now xml.etree. The module and class are still called ElementTree, though. Alex From bob at redivi.com Thu Apr 6 06:49:28 2006 From: bob at redivi.com (Bob Ippolito) Date: Wed, 5 Apr 2006 21:49:28 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> Message-ID: <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> On Apr 5, 2006, at 9:02 PM, Alex Martelli wrote: > > On Apr 5, 2006, at 8:30 PM, Greg Ewing wrote: > >> A while ago there was some discussion about including >> elementtree in the std lib. I can't remember what the >> conclusion about that was, but if it does go ahead, >> I'd like to suggest that it be reorganised a bit. >> >> I've just started playing with it, and having a >> package called elementtree containing a module >> called ElementTree containing a class called >> ElementTree is just too confusing for words! > > Try the 2.5 alpha 1 just released, and you'll see that the toplevel > package is now xml.etree. The module and class are still called > ElementTree, though. It would be nice to have new code be PEP 8 compliant.. Specifically: Modules should have short, lowercase names, without underscores. 
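In practice, code that has to span 2.5 and older installs just hides the
difference at import time, since only the package prefix changed (a sketch):

try:
    from xml.etree import ElementTree as ET      # Python 2.5 standard library
except ImportError:
    from elementtree import ElementTree as ET    # the standalone package

root = ET.fromstring('<root><leaf>x</leaf></root>')
print root.findtext('leaf')                       # prints 'x'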
-bob From fredrik at pythonware.com Thu Apr 6 07:53:05 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 6 Apr 2006 07:53:05 +0200 Subject: [Python-Dev] elementtree in stdlib References: <44348B3E.2000607@canterbury.ac.nz><9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> Message-ID: <e12ac2$hjf$1@sea.gmane.org> Bob Ippolito wrote: > > Try the 2.5 alpha 1 just released, and you'll see that the toplevel > > package is now xml.etree. The module and class are still called > > ElementTree, though. > > It would be nice to have new code be PEP 8 compliant.. it's not new code, and having *different* module names for the same well-established library isn't very nice to anyone. > Modules should have short, lowercase names, without underscores. the PEP changes over time. the ElementTree module was perfectly PEP-compatible when it was written. </F> From martin at v.loewis.de Thu Apr 6 08:09:32 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 06 Apr 2006 08:09:32 +0200 Subject: [Python-Dev] Buildbot slave locks (Was: 2.5a1 Performance) In-Reply-To: <20060405.140126.73022967.warner@lothar.com> References: <20060405.140126.73022967.warner@lothar.com> Message-ID: <4434B09C.9070700@v.loewis.de> Brian Warner wrote: > I don't know what the python buildbot's master.cfg looks like, but you'll > probably want to add something like this (taken from the buildbot.texinfo > user's manual) Thanks, I have now done that, and it seems to work. It would be nice if the builder status would indicate that it is waiting for a lock. > The name of the lock is meant for debugging, so you can tell why a given > builder is stalled. Each buildslave gets its own lock, and the builder claims > every lock listed in the 'locks' key before starting. It wasn't first clear to me what "each buildslave gets its own lock"; I assumed it was my job to create a lock for each buildslave. I only then found out that a SlaveLock is always per slave (unlike a master lock, which is for the entire installation). Regards, Martin From p.f.moore at gmail.com Thu Apr 6 10:18:59 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 6 Apr 2006 09:18:59 +0100 Subject: [Python-Dev] Possible issue with 2.5a1 Win32 binary In-Reply-To: <4434629B.7020701@v.loewis.de> References: <79990c6b0604051517w118041dbsb822ea4fae8cd23d@mail.gmail.com> <4434629B.7020701@v.loewis.de> Message-ID: <79990c6b0604060118u4bf010a5v5484112828ca783f@mail.gmail.com> On 4/6/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > What happens when you run > > D:\Apps\Python25\python.exe -Wi D:\Apps\Python25\Lib\compileall.py -f -x > badsyntax D:\Apps\Python25\Lib > > and look at the status of the program? I think also excluding bad_coding > might already help. Status was 1. With -x "bad.*" it's zero, so it looks like that's the issue. I guess that means the pywin32 problem is unrelated - I'll raise that as a separate bug. Thanks for the help with this! Paul. 
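For anyone reproducing the diagnosis by hand, essentially the same check the
installer performs can be run from Python; compile_dir() returns a false
value if any file in the tree fails to compile, which is why the deliberately
broken bad_coding/badsyntax test files have to be excluded (the path below is
illustrative, adjust to your Lib directory):

import compileall
import re

ok = compileall.compile_dir('/usr/local/lib/python2.5',
                            rx=re.compile(r'bad'), force=True, quiet=True)
print 'everything compiled:', bool(ok)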
From greg.ewing at canterbury.ac.nz Thu Apr 6 10:23:26 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 06 Apr 2006 20:23:26 +1200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <e12ac2$hjf$1@sea.gmane.org> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> Message-ID: <4434CFFE.1080607@canterbury.ac.nz> Fredrik Lundh wrote: > it's not new code, and having *different* module names for the same > well-established library isn't very nice to anyone. > > > Modules should have short, lowercase names, without underscores. But if we don't start becoming stricter about the naming of things added to the stdlib, consistency of naming is never going to improve. Or should this wait for Py3k? -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From ncoghlan at gmail.com Thu Apr 6 13:12:08 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 06 Apr 2006 21:12:08 +1000 Subject: [Python-Dev] Possible issue with 2.5a1 Win32 binary In-Reply-To: <79990c6b0604060118u4bf010a5v5484112828ca783f@mail.gmail.com> References: <79990c6b0604051517w118041dbsb822ea4fae8cd23d@mail.gmail.com> <4434629B.7020701@v.loewis.de> <79990c6b0604060118u4bf010a5v5484112828ca783f@mail.gmail.com> Message-ID: <4434F788.5030408@gmail.com> Paul Moore wrote: > On 4/6/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: >> What happens when you run >> >> D:\Apps\Python25\python.exe -Wi D:\Apps\Python25\Lib\compileall.py -f -x >> badsyntax D:\Apps\Python25\Lib >> >> and look at the status of the program? I think also excluding bad_coding >> might already help. > > Status was 1. With -x "bad.*" it's zero, so it looks like that's the issue. FWIW, this breaks for me too and the output of: C:\Python25\python.exe -Wi -m compileall -f C:\Python25\Lib includes the expected SyntaxErrors from the bad_coding and badsyntax test modules. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From steve at holdenweb.com Thu Apr 6 13:44:27 2006 From: steve at holdenweb.com (Steve Holden) Date: Thu, 06 Apr 2006 07:44:27 -0400 Subject: [Python-Dev] Don Beaudry Message-ID: <e12uug$ge7$1@sea.gmane.org> Does anyone have a current email address for Don? I've had a bounce from dvcorp.com and I need to get in touch with him. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC/Ltd www.holdenweb.com Love me, love my blog holdenweb.blogspot.com From aahz at pythoncraft.com Thu Apr 6 16:22:48 2006 From: aahz at pythoncraft.com (Aahz) Date: Thu, 6 Apr 2006 07:22:48 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <4434CFFE.1080607@canterbury.ac.nz> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> <4434CFFE.1080607@canterbury.ac.nz> Message-ID: <20060406142248.GB23101@panix.com> On Thu, Apr 06, 2006, Greg Ewing wrote: > Fredrik Lundh wrote: >> >> it's not new code, and having *different* module names for the same >> well-established library isn't very nice to anyone. 
>> >>> Modules should have short, lowercase names, without underscores. > > But if we don't start becoming stricter about the naming of things > added to the stdlib, consistency of naming is never going to improve. > > Or should this wait for Py3k? For contributions that are also maintained separately from Python, I think compatibility with the external code has to have some importance. I vote we wait for Py3k. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From abo at minkirri.apana.org.au Thu Apr 6 16:33:06 2006 From: abo at minkirri.apana.org.au (Donovan Baarda) Date: Thu, 06 Apr 2006 15:33:06 +0100 Subject: [Python-Dev] Default Locale, was; Re: strftime/strptime locale funnies... In-Reply-To: <bbaeab100604051213m7f103b19ufd819ca8b95674fe@mail.gmail.com> References: <1144255670.3991.62.camel@warna.dub.corp.google.com> <bbaeab100604051213m7f103b19ufd819ca8b95674fe@mail.gmail.com> Message-ID: <1144333986.21683.109.camel@warna.dub.corp.google.com> On Wed, 2006-04-05 at 12:13 -0700, Brett Cannon wrote: > On 4/5/06, Donovan Baarda <abo at minkirri.apana.org.au> wrote: > > G'day, > > > > Just noticed on Debian (testing), Ubuntu (warty?), and RedHat (old) > > based systems Python's time.strptime() seems to ignore the environment's > > Locale and just uses "C". [...] > Beats me. This could be a locale thing. If I remember correctly > Python assumes the C locale on some things. I suspect the reason for > this is in the locale module or libc. But you can't even find the > word 'locale' or 'Locale' in timemodule.c nor do I know of any calls > that mess with the locale, so I doubt 'time' is at fault for this. OK, I've found and confirmed what it is with a quick C program. The default Locale for lib C is 'C'. It is up the program to set its locale to match the environment using; setlocale(LC_ALL,""); The Python locale module documents this, and recommends putting; import locale locale.setlocale(locale.LC_ALL, '') At the top of programs to make them use your locale as specified in your environment. Note that locale.resetlocale() is documented as "resets the locale to the default settings", where the default is determined by locale.getdefaultlocale(), which uses the environment. So the "default" is determined from your environment, but "C" is used by default... nice and confusing :-) Should Python do setlocale(LC_ALL,"") on startup so that the "default" locale is used by default? -- Donovan Baarda <abo at minkirri.apana.org.au> http://minkirri.apana.org.au/~abo/ From blais at furius.ca Thu Apr 6 17:48:13 2006 From: blais at furius.ca (Martin Blais) Date: Thu, 6 Apr 2006 11:48:13 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings Message-ID: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> Hi all I got an evil idea for Python this morning -- Guido: no, it's not about linked lists :-) -- , and I'd like to bounce it here. But first, a bit of context. In the context of writing i18n apps, programmers have to "mark" strings that may be internationalized in a way that - a special hook gets called at runtime to perform the lookup in a catalog of translations setup for a specific language; - they can be extracted by an external tool to produce the keys of all the catalogs, so that translators can update the list of keys to translate and produce the values in the target languages. 
Usually, you bind a function to a short name, like _() and N_(), and it looks kind-of like this:: _("My string to translate.") or N_("This is marked for translation") # N_() is a noop. pygettext does the work of extracting those patterns from the files, doing all the parsing manually, i..e it does not use runtime Python introspection to do this at all, it is simply a simple text parsing algorithm (which works pretty well). I'm simplifying things a bit, but that is the jist of how it works, for those not familiar with i18n. This morning I woke up staring at the ceiling and the only thing in my mind was "my web app code is ugly". I had visions of LISP parentheses with constructs like ... A(P(_("Click here to forget"), href="... ... (In my example, I built a library not unlike stan for creating HTML, which is where classes A and P come from.) I find the i18n markup a bit annoying, especially when there are many i18n strings close together. My point is: adding parentheses around almost all strings gets tiresome and "charges" the otherwise divine esthetics of Python source code. (Okie, that's enough for context.) So I had the following idea: would it not be nice if there existed a string-prefix 'i' -- a string prefix like for the raw (r'...') and unicode (u'...') strings -- that would mark the string as being for i18n? Something like this (reusing my example above):: A(P(i"Click here to forget", href="... Notes: - We could then use the spiffy new AST to build a better parser to extract those strings from the source code as well. - We could also have a prefix "I" for strings to be marked but not runtime-translated, to replace the N_() strings. - This implies that we would have to introduce some way for these strings to call a custom function at runtime. - My impression is that this process of i18n is common enough that it does not "move" very much, and that there aren't 18 ways to go about it either, so that it would be reasonable to consider adding it to the language. This may be completely wrong, I am by no means an i18n expert, please show if this is not the case. - Potential issue: do we still need other prefixes when 'i' is used, and if so, how do we combine them... Okay, let's push it further a bit: how about if there was some kind of generic mechanism built-in in Python for adding new string-prefixes which invoke callbacks when the string with the prefix is evaluated? This could be used to implement what I'm suggesting above, and beyond. Something like this:: import i18n i18n.register_string_prefix('i', _) i18n.register_string_prefix('I', N_) I'm not sure what else we might be able to do with this, you may have other useful ideas. Any comments welcome. cheers, From g.brandl at gmx.net Thu Apr 6 18:44:25 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 06 Apr 2006 18:44:25 +0200 Subject: [Python-Dev] dis module and new-style classes Message-ID: <e13gh9$knj$1@sea.gmane.org> Hi, dis.dis currently handles new-style classes stepmotherly: given class C(object): def Cm(): pass class D(object): def Dm(): pass dis.dis(C) doesn't touch D, and dis.dis(C()) doesn't touch anything. Should it be fixed? It may need some reworking in dis.dis. 
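One possible shape for that reworking, given here only as a rough sketch rather than a patch against dis.py (the helper name is invented): recognise new-style classes explicitly and fall back from an instance to its class before recursing.

    import dis
    import types

    def dis_recursive(obj):
        """Disassemble obj, descending into methods and nested classes."""
        if isinstance(obj, (types.MethodType, types.FunctionType,
                            types.CodeType)):
            dis.dis(obj)
            return
        if not isinstance(obj, (type, types.ClassType)):
            # an instance: disassemble its class instead
            obj = obj.__class__
        for name, member in sorted(vars(obj).items()):
            if isinstance(member, (types.FunctionType, types.MethodType,
                                   types.ClassType, type)):
                print 'Disassembly of %s:' % name
                dis_recursive(member)
                print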
Georg From aleaxit at gmail.com Thu Apr 6 19:15:55 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Thu, 6 Apr 2006 10:15:55 -0700 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> Message-ID: <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> On 4/6/06, Martin Blais <blais at furius.ca> wrote: ... > So I had the following idea: would it not be nice if there existed a > string-prefix 'i' -- a string prefix like for the raw (r'...') and > unicode (u'...') strings -- that would mark the string as being for > i18n? Something like this (reusing my example above):: +1: having helped out with substantial amounts of i18n work over the years, I agree with you 100% that this addition would give substantial benefits. > - We could also have a prefix "I" for strings to be marked but not > runtime-translated, to replace the N_() strings. I'm more dubious about this one, because I don't really see the point. Expand pls? With a couple of use cases, maybe? > - My impression is that this process of i18n is common enough that it > does not "move" very much, and that there aren't 18 ways to go about > it either, so that it would be reasonable to consider adding it to the > language. This may be completely wrong, I am by no means an i18n > expert, please show if this is not the case. My experience agrees with your assessment regarding the first half of the proposal (but I don't get the second half, I think). > Okay, let's push it further a bit: how about if there was some kind > of generic mechanism built-in in Python for adding new string-prefixes > which invoke callbacks when the string with the prefix is evaluated? I think this one is an idea for Python 3000: you should probably post it to that mailing list. > This could be used to implement what I'm suggesting above, and beyond. > Something like this:: > > import i18n > i18n.register_string_prefix('i', _) > i18n.register_string_prefix('I', N_) > > I'm not sure what else we might be able to do with this, you may have > other useful ideas. Oh, plenty of things, such as d'123.45' as a syntax for "decimal literals" (or viceversa for binary floats, if decimals become the Py3k default), q'123/54' for rationals (today's gmpy.mpq('123/54') or other implementations), etc. Ideas which have little to do with i18n, mostly. It's exactly because of the broad impact of such a mechanism (and, inevitably, the possibility of abuse and overuse) that I think it's Py3k material, not 2.* stuff. Alex From skip at pobox.com Thu Apr 6 19:16:14 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 6 Apr 2006 12:16:14 -0500 Subject: [Python-Dev] module aliasing In-Reply-To: <20060406142248.GB23101@panix.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> <4434CFFE.1080607@canterbury.ac.nz> <20060406142248.GB23101@panix.com> Message-ID: <17461.19678.173387.338631@montanaro.dyndns.org> >>>> Modules should have short, lowercase names, without underscores. >> >> But if we don't start becoming stricter about the naming of things >> added to the stdlib, consistency of naming is never going to improve. >> >> Or should this wait for Py3k? 
aahz> For contributions that are also maintained separately from Python, aahz> I think compatibility with the external code has to have some aahz> importance. I vote we wait for Py3k. Why not implement some sort of user-controlled module aliasing? That way in 2.x you might have StringIO and stringio available at the same time. The user could enable or disable one or both names for testing and backward compatibility. This of course presumes that the api of the module doesn't change, just its name. Skip From guido at python.org Thu Apr 6 19:47:44 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 6 Apr 2006 10:47:44 -0700 Subject: [Python-Dev] dis module and new-style classes In-Reply-To: <e13gh9$knj$1@sea.gmane.org> References: <e13gh9$knj$1@sea.gmane.org> Message-ID: <ca471dc20604061047m6e3e193dh845a253e9cef6766@mail.gmail.com> I think it's fine as it is. I don't think making it walk the inheritance tree is helpful; the output would be too large. Also, an instance doesn't have any code and that's fine too. (Didn't you mean "dis.dis(D) doesn't touch C"?) --Guido On 4/6/06, Georg Brandl <g.brandl at gmx.net> wrote: > Hi, > > dis.dis currently handles new-style classes stepmotherly: given > > class C(object): > def Cm(): pass > class D(object): > def Dm(): pass > > dis.dis(C) doesn't touch D, and > dis.dis(C()) doesn't touch anything. > > Should it be fixed? It may need some reworking in dis.dis. > > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From faassen at infrae.com Thu Apr 6 19:21:52 2006 From: faassen at infrae.com (Martijn Faassen) Date: Thu, 06 Apr 2006 19:21:52 +0200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> Message-ID: <44354E30.7080307@infrae.com> Alex Martelli wrote: > On Apr 5, 2006, at 8:30 PM, Greg Ewing wrote: > > >>A while ago there was some discussion about including >>elementtree in the std lib. I can't remember what the >>conclusion about that was, but if it does go ahead, >>I'd like to suggest that it be reorganised a bit. >> >>I've just started playing with it, and having a >>package called elementtree containing a module >>called ElementTree containing a class called >>ElementTree is just too confusing for words! > > > Try the 2.5 alpha 1 just released, and you'll see that the toplevel > package is now xml.etree. The module and class are still called > ElementTree, though. Note that lxml (which implements an ElementTree compatible API on top of libxml2) was using the 'etree' as a *module* (not a package name) before this move of ElementTree in the core. I had some discussions with Fredrik about making ElementTree in the Python core consistent with lxml, but no luck there. I.e., this in ElementTree: from elementtree.ElementTree import Element is this in lxml: from lxml.etree import Element and I believe in python 2.5 it's now: from xml.etree.ElementTree import Element which is not good in my opinion... 
(though also not a disaster) Regards, Martijn From trentm at ActiveState.com Thu Apr 6 21:18:40 2006 From: trentm at ActiveState.com (Trent Mick) Date: Thu, 6 Apr 2006 12:18:40 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <44354E30.7080307@infrae.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> Message-ID: <20060406191840.GA12465@activestate.com> [Martijn Faassen wrote] > I.e., this in ElementTree: > ... http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/475126 import ElementTree from everywhere try: import xml.etree.ElementTree as ET # in python >=2.5 except ImportError: try: import cElementTree as ET # effbot's C module except ImportError: try: import elementtree.ElementTree as ET # effbot's pure Python module except ImportError: try: import lxml.etree as ET # ElementTree API using libxml2 except ImportError: import warnings warning.warn("could not import ElementTree " "(http://effbot.org/zone/element-index.htm)") # Or you might just want to raise an ImportError here. # Use ET.Element, ET.ElementTree, etc... That is the current state. Trent -- Trent Mick TrentM at ActiveState.com From g.brandl at gmx.net Thu Apr 6 21:21:09 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 06 Apr 2006 21:21:09 +0200 Subject: [Python-Dev] dis module and new-style classes In-Reply-To: <ca471dc20604061047m6e3e193dh845a253e9cef6766@mail.gmail.com> References: <e13gh9$knj$1@sea.gmane.org> <ca471dc20604061047m6e3e193dh845a253e9cef6766@mail.gmail.com> Message-ID: <e13pn5$mid$1@sea.gmane.org> Guido van Rossum wrote: > I think it's fine as it is. I don't think making it walk the > inheritance tree is helpful; the output would be too large. Also, an > instance doesn't have any code and that's fine too. Inheritance has nothing to do with that. > (Didn't you mean "dis.dis(D) doesn't touch C"?) No. > On 4/6/06, Georg Brandl <g.brandl at gmx.net> wrote: >> Hi, >> >> dis.dis currently handles new-style classes stepmotherly: given >> >> class C(object): >> def Cm(): pass >> class D(object): >> def Dm(): pass >> >> dis.dis(C) doesn't touch D, and >> dis.dis(C()) doesn't touch anything. >> >> Should it be fixed? It may need some reworking in dis.dis. Here is an example transcript to make clearer what I mean: Python 2.4.2 (#1, Mar 12 2006, 00:14:41) >>> import dis >>> class C: ... def Cm(): pass ... class D: ... def Dm(): pass ... >>> dis.dis(C) Disassembly of Cm: 2 0 LOAD_CONST 0 (None) 3 RETURN_VALUE Disassembly of D: Disassembly of Dm: 4 0 LOAD_CONST 0 (None) 3 RETURN_VALUE >>> dis.dis(C()) Disassembly of Cm: 2 0 LOAD_CONST 0 (None) 3 RETURN_VALUE Disassembly of D: Disassembly of Dm: 4 0 LOAD_CONST 0 (None) 3 RETURN_VALUE >>> class Co(object): ... def Cm(): pass ... class Do(object): ... def Dm(): pass ... >>> dis.dis(Co) Disassembly of Cm: 2 0 LOAD_CONST 0 (None) 3 RETURN_VALUE >>> dis.dis(Co()) >>> From fredrik at pythonware.com Thu Apr 6 21:35:21 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 6 Apr 2006 21:35:21 +0200 Subject: [Python-Dev] elementtree in stdlib References: <44348B3E.2000607@canterbury.ac.nz><9E095442-9AA2-4148-9238-F1527E511760@gmail.com><44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> Message-ID: <e13qhq$q86$1@sea.gmane.org> Trent Mick wrote: > That is the current state. 
which reminds that maybe it's time to add an import helper to the standard library, so you can do stringio = import_search("cStringIO", "StringIO") ET = import_search("lxml.etree", "cElementTree", "xml.etree.cElementTree") db = import_search("superdb", "sqlite3", "fancydb", "dumbdb") etc. without having to type in for mod in ("cStringIO", "StringIO"): try: m = __import__(mod) for p in mod.split(".")[1:]: m = getattr(m, p, None) if m is None: raise ImportError return m except ImportError: pass else: raise ImportError(mod) all the time (or create those horridly nested try-except constructs). or perhaps try: import cStringIO as stringio retry ImportError: import StringIO as stringio except ImportError: print "didn't work!" would be a solution (sorry, wrong list). </F> From trentm at ActiveState.com Thu Apr 6 21:52:16 2006 From: trentm at ActiveState.com (Trent Mick) Date: Thu, 6 Apr 2006 12:52:16 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <e13qhq$q86$1@sea.gmane.org> References: <20060406191840.GA12465@activestate.com> <e13qhq$q86$1@sea.gmane.org> Message-ID: <20060406195216.GA16894@activestate.com> [Fredrik Lundh wrote] > Trent Mick wrote: > > > That is the current state. > > which reminds that maybe it's time to add an import helper to > the standard library, so you can do > > stringio = import_search("cStringIO", "StringIO") > ET = import_search("lxml.etree", "cElementTree", "xml.etree.cElementTree") > db = import_search("superdb", "sqlite3", "fancydb", "dumbdb") To the 'imp' module? Hrm, would then maybe want to change the docs from: 3.21 imp -- Access the import internals to 3.21 imp -- Access the import internals and some other useful importing stuff :) Trent -- Trent Mick TrentM at ActiveState.com From guido at python.org Thu Apr 6 22:36:37 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 6 Apr 2006 13:36:37 -0700 Subject: [Python-Dev] dis module and new-style classes In-Reply-To: <e13pn5$mid$1@sea.gmane.org> References: <e13gh9$knj$1@sea.gmane.org> <ca471dc20604061047m6e3e193dh845a253e9cef6766@mail.gmail.com> <e13pn5$mid$1@sea.gmane.org> Message-ID: <ca471dc20604061336r6d072742wd936decf466a1962@mail.gmail.com> Sorry, I missed the fact that this was about nested classes. Still, I don't think it's worth fixing. --Guido On 4/6/06, Georg Brandl <g.brandl at gmx.net> wrote: > Guido van Rossum wrote: > > I think it's fine as it is. I don't think making it walk the > > inheritance tree is helpful; the output would be too large. Also, an > > instance doesn't have any code and that's fine too. > > Inheritance has nothing to do with that. > > > (Didn't you mean "dis.dis(D) doesn't touch C"?) > > No. > > > On 4/6/06, Georg Brandl <g.brandl at gmx.net> wrote: > >> Hi, > >> > >> dis.dis currently handles new-style classes stepmotherly: given > >> > >> class C(object): > >> def Cm(): pass > >> class D(object): > >> def Dm(): pass > >> > >> dis.dis(C) doesn't touch D, and > >> dis.dis(C()) doesn't touch anything. > >> > >> Should it be fixed? It may need some reworking in dis.dis. > > Here is an example transcript to make clearer what I mean: > > Python 2.4.2 (#1, Mar 12 2006, 00:14:41) > >>> import dis > >>> class C: > ... def Cm(): pass > ... class D: > ... def Dm(): pass > ... 
> >>> dis.dis(C) > Disassembly of Cm: > 2 0 LOAD_CONST 0 (None) > 3 RETURN_VALUE > > Disassembly of D: > Disassembly of Dm: > 4 0 LOAD_CONST 0 (None) > 3 RETURN_VALUE > > > >>> dis.dis(C()) > Disassembly of Cm: > 2 0 LOAD_CONST 0 (None) > 3 RETURN_VALUE > > Disassembly of D: > Disassembly of Dm: > 4 0 LOAD_CONST 0 (None) > 3 RETURN_VALUE > > > >>> class Co(object): > ... def Cm(): pass > ... class Do(object): > ... def Dm(): pass > ... > >>> dis.dis(Co) > Disassembly of Cm: > 2 0 LOAD_CONST 0 (None) > 3 RETURN_VALUE > > >>> dis.dis(Co()) > >>> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From g.brandl at gmx.net Fri Apr 7 00:09:13 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 07 Apr 2006 00:09:13 +0200 Subject: [Python-Dev] str.partition? Message-ID: <e143i9$n5o$1@sea.gmane.org> Hi, a while ago, Raymond proposed str.partition, and I guess the reaction was positive. So what about including it now? Georg From martin at v.loewis.de Fri Apr 7 00:16:49 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 07 Apr 2006 00:16:49 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> Message-ID: <44359351.2090403@v.loewis.de> Martin Blais wrote: > ... > A(P(_("Click here to forget"), href="... > ... I assume that this should be P(A(_("Click here to forget"), href="... instead (i.e. href is a parameter to A, not to P) > > (In my example, I built a library not unlike stan for creating HTML, > which is where classes A and P come from.) I find the i18n markup a > bit annoying, especially when there are many i18n strings close > together. My point is: adding parentheses around almost all strings > gets tiresome and "charges" the otherwise divine esthetics of Python > source code. There is a simple solution to that: write it as P(a("Click here to forget", href="... and define def a(content, **kw): return A(_(content), **kw) You could it also write as P(A_("Click here to forget", href="... to make it a little more obvious to the reader that there is a gettext lookup here. Regards, Martin From raymond.hettinger at verizon.net Fri Apr 7 00:22:15 2006 From: raymond.hettinger at verizon.net (Raymond Hettinger) Date: Thu, 06 Apr 2006 15:22:15 -0700 Subject: [Python-Dev] str.partition? References: <e143i9$n5o$1@sea.gmane.org> Message-ID: <015801c659c8$8f680cd0$6c3c0a0a@RaymondLaptop1> > a while ago, Raymond proposed str.partition, and I guess the reaction > was positive. So what about including it now? Neal approved this for going into the second alpha. Will do it this month. Raymond From astrand at cendio.se Thu Apr 6 22:39:08 2006 From: astrand at cendio.se (=?iso-8859-1?Q?Peter_=C5strand?=) Date: Thu, 6 Apr 2006 22:39:08 +0200 (CEST) Subject: [Python-Dev] subprocess maintenance - SVN write access Message-ID: <Pine.LNX.4.64.0604062218010.15857@maggie.lkpg.cendio.se> Hi everyone. I've been away from Python dev for a while, but I've noticed that I'm assigned to quite many subprocess bugs (14 or so) that needs some care. The first question is: Am I the right person to take care of these? I do have some ideas for some of the bugs and. 
OTOH, I don't have time to read python-dev anymore. In case I should do some subprocess work, I need svn write access. I've read section 1.2.8 in the FAQ, but to who should I send my SSH key? Regards, -- Peter ?strand ThinLinc Chief Developer Cendio http://www.cendio.se Teknikringen 3 583 30 Link?ping Phone: +46-13-21 46 00 From dinov at exchange.microsoft.com Thu Apr 6 19:01:05 2006 From: dinov at exchange.microsoft.com (Dino Viehland) Date: Thu, 6 Apr 2006 10:01:05 -0700 Subject: [Python-Dev] [IronPython] base64 module In-Reply-To: <5b0248170604052245m75def21et8dc410151082f732@mail.gmail.com> Message-ID: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> Well, CPython at least still enforces the padding, even if it's ignoring the invalid characters. Here's Seo's repro 'simplified' to go straight to binascii (just to get to the root API): >>> import binascii >>> binascii.a2b_base64('%') '' And then sending a valid character, invalid padding: >>> binascii.a2b_base64('A') Traceback (most recent call last): File "<stdin>", line 1, in ? binascii.Error: Incorrect padding and then throwing in random invalid characters, and CPython ignores the invalid characters: >>> binascii.a2b_base64('ABC=') '\x00\x10' >>> binascii.a2b_base64('%%ABC=') '\x00\x10' >>> binascii.a2b_base64('%%ABC=%!@##') '\x00\x10' >>> binascii.a2b_base64('%%ABC=%!@##*#*()') '\x00\x10' The documentation for binascii.a2b_base64 doesn't specify if it throws for anything either. I would suspect that there's a reason why CPython is ignoring the invalid characters here. If this is the expected behavior then I'm happy to make IronPython match this. And at the very least we HAVE to fix the exception that gets thrown - I'm with Seo that it should be a ValueError but line between ValueError and TypeError is blurry at times anyway, and TypeError is what's documented. Do you want to help develop Dynamic languages on CLR? (http://members.microsoft.com/careers/search/details.aspx?JobID=6D4754DE-11F0-45DF-8B78-DC1B43134038) -----Original Message----- From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Sanghyeon Seo Sent: Wednesday, April 05, 2006 10:45 PM To: python-dev at python.org; users at lists.ironpython.com Subject: [IronPython] base64 module Hello, base64 module documentation for b64decode function says, "TypeError is raised if s were incorrectly padded or if there are non-alphabet characters present in the string." But this doesn't seem to be the case. Testcase: import base64 base64.b64decode('%') Since % is a non-alphabet character, this should raise TypeError (btw, shouldn't this be ValueError instead?), but Python 2.4.3 silently ignores. I found this while experimenting with IronPython. IronPython 1.0 Beta 5 gives: Traceback (most recent call last): File base64, line unknown, in b64decode SystemError: cannot decode byte: '%' It's not TypeError, but it doesn't silently ignore either. 
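For comparison, the stricter behaviour being asked for can be sketched in a few lines; this is not what base64.py or binascii currently do, and the helper name plus the choice of ValueError simply follow the suggestion in the report:

    import re
    import binascii

    _B64_RE = re.compile(r'^[A-Za-z0-9+/]*={0,2}$')

    def strict_b64decode(s):
        # reject non-alphabet characters and obviously bad padding up front,
        # then let binascii do the actual decoding
        if len(s) % 4 != 0 or not _B64_RE.match(s):
            raise ValueError('non-alphabet character or bad padding: %r' % s)
        return binascii.a2b_base64(s)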
Seo Sanghyeon _______________________________________________ users mailing list users at lists.ironpython.com http://lists.ironpython.com/listinfo.cgi/users-ironpython.com From sanxiyn at gmail.com Thu Apr 6 07:45:17 2006 From: sanxiyn at gmail.com (Sanghyeon Seo) Date: Thu, 6 Apr 2006 14:45:17 +0900 Subject: [Python-Dev] base64 module Message-ID: <5b0248170604052245m75def21et8dc410151082f732@mail.gmail.com> Hello, base64 module documentation for b64decode function says, "TypeError is raised if s were incorrectly padded or if there are non-alphabet characters present in the string." But this doesn't seem to be the case. Testcase: import base64 base64.b64decode('%') Since % is a non-alphabet character, this should raise TypeError (btw, shouldn't this be ValueError instead?), but Python 2.4.3 silently ignores. I found this while experimenting with IronPython. IronPython 1.0 Beta 5 gives: Traceback (most recent call last): File base64, line unknown, in b64decode SystemError: cannot decode byte: '%' It's not TypeError, but it doesn't silently ignore either. Seo Sanghyeon From blais at furius.ca Fri Apr 7 02:14:23 2006 From: blais at furius.ca (Martin Blais) Date: Thu, 6 Apr 2006 20:14:23 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <44359351.2090403@v.loewis.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <44359351.2090403@v.loewis.de> Message-ID: <8393fff0604061714m3ac1e6cbn5d46a1317927079b@mail.gmail.com> On 4/6/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Martin Blais wrote: > > ... > > A(P(_("Click here to forget"), href="... > > ... > > I assume that this should be > > P(A(_("Click here to forget"), href="... > > instead (i.e. href is a parameter to A, not to P) Yeah, that's right, sorry. (You know, you get up with an idea, you're all excited, whip out the laptop, scramble to spit your message... Thanks for the correction) > > (In my example, I built a library not unlike stan for creating HTML, > > which is where classes A and P come from.) I find the i18n markup a > > bit annoying, especially when there are many i18n strings close > > together. My point is: adding parentheses around almost all strings > > gets tiresome and "charges" the otherwise divine esthetics of Python > > source code. > > There is a simple solution to that: write it as > > P(a("Click here to forget", href="... > > and define > > def a(content, **kw): > return A(_(content), **kw) No. That's not going to work: pygettext needs to be able to extract the string for the catalogs. No markup, no extraction. (This is how you enter strings that are not meant to be translated.) > > You could it also write as > > P(A_("Click here to forget", href="... > > to make it a little more obvious to the reader that there is a > gettext lookup here. This is not generic enough, HTML is too flexible to hard-code all cases. This only would help slightly. From brett at python.org Fri Apr 7 02:15:16 2006 From: brett at python.org (Brett Cannon) Date: Thu, 6 Apr 2006 17:15:16 -0700 Subject: [Python-Dev] subprocess maintenance - SVN write access In-Reply-To: <Pine.LNX.4.64.0604062218010.15857@maggie.lkpg.cendio.se> References: <Pine.LNX.4.64.0604062218010.15857@maggie.lkpg.cendio.se> Message-ID: <bbaeab100604061715m73618ea0vbcd27b557032f10a@mail.gmail.com> On 4/6/06, Peter ?strand <astrand at cendio.se> wrote: > > Hi everyone. I've been away from Python dev for a while, but I've noticed > that I'm assigned to quite many subprocess bugs (14 or so) that needs some > care. 
> > The first question is: Am I the right person to take care of these? I do > have some ideas for some of the bugs and. OTOH, I don't have time to read > python-dev anymore. > I say if you are up for fixing them, then yes you are the right person. > In case I should do some subprocess work, I need svn write access. I've > read section 1.2.8 in the FAQ, but to who should I send my SSH key? > I think Martin usually does it, but you can just send it as a reply to your own email. -Brett From blais at furius.ca Fri Apr 7 02:35:51 2006 From: blais at furius.ca (Martin Blais) Date: Thu, 6 Apr 2006 20:35:51 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> Message-ID: <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> On 4/6/06, Alex Martelli <aleaxit at gmail.com> wrote: > On 4/6/06, Martin Blais <blais at furius.ca> wrote: > > > - We could also have a prefix "I" for strings to be marked but not > > runtime-translated, to replace the N_() strings. > > I'm more dubious about this one, because I don't really see the point. > Expand pls? With a couple of use cases, maybe? N_() is used for marking up strings to be extracted for the catalogs, but that will get expanded programmatically later on, explicitly using _(variable) syntax or with a call to gettext(). For example, using my web forms library (wink, wink, open source at http://furius.ca/atocha/), I declare forms at module-level, and so this gets evaluated only once at import time, when the i18n environment is not determined yet:: form1 = Form( 'test-form', StringField('name', N_("Person's Name")), ... The point of declaring these at module or class level is that once the apache child is loaded, none of the forms ever need be rebuilt, i.e. it's fast. N_() is a no-op (i.e. lambda x: x). Later on, when a request is processed (when we've setup the i18n target environment), the label gets translated when used (i.e. from the Atocha code):: def _get_label( self, field ): """ Returns a printable label for the given field. """ return (field.label and _(field.label) # <------ translate the label here or field.name.capitalize().decode('ascii')) In summary: sometimes you need to mark-for-translate AND translate a piece of string, two separate tasks handled by the _() function, and sometimes you just need to mark-for-translate, handled by the N_() function, under the assumption that you will be a good boy and not forget to take care of it yourself later :-) This is pretty standard getttext stuff, if you used _() a lot I'm surprised you don't have a need for N_(), I always needed it when I used i18n (or maybe I misunderstood your question?). It wouldn't make sense to me to add i'' and not I'' (or equivalents). 
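Spelled out as a small self-contained example, the split looks roughly like this; the domain name and labels are invented, and only the mark-now-with-N_, translate-later-with-_ shape is the point:

    import gettext

    def N_(message):
        # marker only: extraction tools pick up the literal, nothing
        # happens at runtime
        return message

    # module-level definitions: evaluated once, before any locale is known
    LABELS = {'name': N_("Person's Name"),
              'email': N_("Email Address")}

    def render_label(key, language):
        # per-request: the target language is known now, so translate
        t = gettext.translation('myapp', languages=[language], fallback=True)
        _ = t.ugettext
        return _(LABELS[key])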
There is another example in gloriously verbose C code version from the GNU manual: http://www.gnu.org/software/gettext/manual/html_node/gettext_19.html#SEC19 cheers, From greg.ewing at canterbury.ac.nz Fri Apr 7 02:47:53 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 07 Apr 2006 12:47:53 +1200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <20060406191840.GA12465@activestate.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> Message-ID: <4435B6B9.2070309@canterbury.ac.nz> Trent Mick wrote: > try: > import xml.etree.ElementTree as ET # in python >=2.5 > except ImportError: > ... etc ad nauseam For situations like this I've thought it might be handy to be able to say import xml.etree.ElementTree or cElementTree or \ elementtree.ElementTree or lxml.etree as ET -- Greg From aahz at pythoncraft.com Fri Apr 7 03:03:05 2006 From: aahz at pythoncraft.com (Aahz) Date: Thu, 6 Apr 2006 18:03:05 -0700 Subject: [Python-Dev] base64 module In-Reply-To: <5b0248170604052245m75def21et8dc410151082f732@mail.gmail.com> References: <5b0248170604052245m75def21et8dc410151082f732@mail.gmail.com> Message-ID: <20060407010305.GB28977@panix.com> On Thu, Apr 06, 2006, Sanghyeon Seo wrote: > > base64 module documentation for b64decode function says, "TypeError is > raised if s were incorrectly padded or if there are non-alphabet > characters present in the string." But this doesn't seem to be the > case. Testcase: > > import base64 > base64.b64decode('%') > > Since % is a non-alphabet character, this should raise TypeError (btw, > shouldn't this be ValueError instead?), but Python 2.4.3 silently > ignores. Please submit a bug report on SourceForge and report back the ID. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Look, it's your affair if you want to play with five people, but don't go calling it doubles." --John Cleese anticipates Usenet From fdrake at acm.org Fri Apr 7 04:22:14 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 6 Apr 2006 22:22:14 -0400 Subject: [Python-Dev] str.partition? In-Reply-To: <e143i9$n5o$1@sea.gmane.org> References: <e143i9$n5o$1@sea.gmane.org> Message-ID: <200604062222.14429.fdrake@acm.org> On Thursday 06 April 2006 18:09, Georg Brandl wrote: > a while ago, Raymond proposed str.partition, and I guess the reaction > was positive. So what about including it now? +1 -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From anthony at interlink.com.au Fri Apr 7 04:38:59 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Fri, 7 Apr 2006 12:38:59 +1000 Subject: [Python-Dev] packaging/bootstrap issue Message-ID: <200604071239.03277.anthony@interlink.com.au> This is from bug www.python.org/sf/1465408 Because the Python.asdl and the generated Python-ast.[ch] get checked into svn in the same revision, the svn export I use to build the tarballs sets them all to the same timestamp on disk (the timestamp of the checkin). "make" then attempts to rebuild the ast files - this requires a python executable. Can you see the bootstrap problem? To "fix" this, I'm going to make the "welease" script that does the releases touch the ast files to set their timestamps newer than that of Python.asdl. It's not an ideal solution, but it should fix the problem. The other option would be some special Makefile magic that detects this case and doesn't rebuild the files if no "python" binary can be found. 
I have no idea how you'd do this in a portable way. Anyone got other options? Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From sanxiyn at gmail.com Fri Apr 7 03:21:26 2006 From: sanxiyn at gmail.com (Sanghyeon Seo) Date: Fri, 7 Apr 2006 10:21:26 +0900 Subject: [Python-Dev] base64 module In-Reply-To: <20060407010305.GB28977@panix.com> References: <5b0248170604052245m75def21et8dc410151082f732@mail.gmail.com> <20060407010305.GB28977@panix.com> Message-ID: <5b0248170604061821se51c54ex37e6de0c74f1cd51@mail.gmail.com> 2006/4/7, Aahz <aahz at pythoncraft.com>: > > Please submit a bug report on SourceForge and report back the ID. http://python.org/sf/1466065 Seo Sanghyeon From martin at v.loewis.de Fri Apr 7 08:10:38 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 07 Apr 2006 08:10:38 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604061713u4f4873cdq2eb0ff4d70756ce5@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <44359351.2090403@v.loewis.de> <8393fff0604061713u4f4873cdq2eb0ff4d70756ce5@mail.gmail.com> Message-ID: <4436025E.6070002@v.loewis.de> Martin Blais wrote: >> P(a("Click here to forget", href="... > > No. That's not going to work: pygettext needs to be able to extract > the string for the catalogs. No markup, no extraction. (This is how > you enter strings that are not meant to be translated.) I know; I wrote pygettext. You can pass the set of functions that count as marker with -k/--keyword arguments. >> You could it also write as >> >> P(A_("Click here to forget", href="... >> >> to make it a little more obvious to the reader that there is a >> gettext lookup here. > > This is not generic enough, HTML is too flexible to hard-code all > cases. This only would help slightly. I don't understand. What case that you would like to express couldn't be expressed? Regards, Martin From martin at v.loewis.de Fri Apr 7 08:23:08 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 07 Apr 2006 08:23:08 +0200 Subject: [Python-Dev] subprocess maintenance - SVN write access In-Reply-To: <Pine.LNX.4.64.0604062218010.15857@maggie.lkpg.cendio.se> References: <Pine.LNX.4.64.0604062218010.15857@maggie.lkpg.cendio.se> Message-ID: <4436054C.8000904@v.loewis.de> Peter ?strand wrote: > In case I should do some subprocess work, I need svn write access. I've > read section 1.2.8 in the FAQ, but to who should I send my SSH key? Yes, please send it to me, along with the preferred spelling of your name (I'd assume peter.astrand). Regards, Martin From martin at v.loewis.de Fri Apr 7 08:37:28 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 07 Apr 2006 08:37:28 +0200 Subject: [Python-Dev] packaging/bootstrap issue In-Reply-To: <200604071239.03277.anthony@interlink.com.au> References: <200604071239.03277.anthony@interlink.com.au> Message-ID: <443608A8.4010500@v.loewis.de> Anthony Baxter wrote: > Because the Python.asdl and the generated Python-ast.[ch] get checked > into svn in the same revision, the svn export I use to build the > tarballs sets them all to the same timestamp on disk (the timestamp > of the checkin). Actually, the generated c file often has a newer checkin, because it gets the version of Python.asdl embedded - but only after Python.asdl gets its $Revision$ updated (i.e. after the checkin). 
Still, the .h file will have the same revision, or even an older one if the AST change doesn't affect the header file (not sure if this can ever happen). > To "fix" this, I'm going to make the "welease" script that does the > releases touch the ast files to set their timestamps newer than that > of Python.asdl. It's not an ideal solution, but it should fix the > problem. The other option would be some special Makefile magic that > detects this case and doesn't rebuild the files if no "python" binary > can be found. I have no idea how you'd do this in a portable way. The common approach would be to use autoconf for that. Let autoconf search for a Python binary, and fall back to /bin/true if you don't find any. > Anyone got other options? This strategy (of specifically touching the files for the release) is quite common. Alternatively, you could also force a commit for Python-ast.[ch] if they have the same revision as the .asdl file. As the AST doesn't change that often, this dummy commit would only be rarely needed (and, as I suggested, only on the .h file). Regards, Martin From rasky at develer.com Fri Apr 7 08:55:28 2006 From: rasky at develer.com (Giovanni Bajo) Date: Fri, 7 Apr 2006 08:55:28 +0200 Subject: [Python-Dev] elementtree in stdlib References: <44348B3E.2000607@canterbury.ac.nz><9E095442-9AA2-4148-9238-F1527E511760@gmail.com><44354E30.7080307@infrae.com><20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> Message-ID: <006e01c65a10$413d22a0$23ba2997@bagio> Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: >> try: >> import xml.etree.ElementTree as ET # in python >=2.5 >> except ImportError: > > ... etc ad nauseam > > For situations like this I've thought it might > be handy to be able to say > > import xml.etree.ElementTree or cElementTree or \ > elementtree.ElementTree or lxml.etree as ET Astonishingly cute. +1. Giovanni Bajo From g.brandl at gmx.net Fri Apr 7 08:58:08 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 07 Apr 2006 08:58:08 +0200 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <442D356B.9080505@gmail.com> References: <e0itla$l7d$1@sea.gmane.org> <442D356B.9080505@gmail.com> Message-ID: <e152i0$79d$1@sea.gmane.org> Nick Coghlan wrote: > Georg Brandl wrote: >> Hi, >> >> some time ago, someone posted in python-list about icons using the Python >> logo from the new site design [1]. IMO they are looking great and would >> be a good replacement for the old non-scaling snakes on Windows in 2.5. > > Those are *really* pretty. And the self-referential PIL source code and > disassembly is just plain brilliant. . . > > You could even use a similar style for a Python egg icon by placing the > plus-Python logo in front of a file folder picture. > > However, the concerns raised on python-list about the similarities between the > .exe and .pyc icons are valid, IMO. I also agree with Andrew Clover's own > comment that having the Windows shortcut symbol cover the Python logo on the > .exe is a bad thing. I've now got back an email from Andrew. About licensing, he says """Whatever licence is easiest, I'm not too worried about it.""" He'll make some tweaks to the images, then I think they'll be ready. Now, are there any other opinions? 
Georg From fredrik at pythonware.com Fri Apr 7 09:00:30 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 7 Apr 2006 09:00:30 +0200 Subject: [Python-Dev] packaging/bootstrap issue References: <200604071239.03277.anthony@interlink.com.au> Message-ID: <e152md$7oi$1@sea.gmane.org> Anthony Baxter wrote: > This is from bug www.python.org/sf/1465408 > > Because the Python.asdl and the generated Python-ast.[ch] get checked > into svn in the same revision, the svn export I use to build the > tarballs sets them all to the same timestamp on disk (the timestamp > of the checkin). "make" then attempts to rebuild the ast files - this > requires a python executable. Can you see the bootstrap problem? > > To "fix" this, I'm going to make the "welease" script that does the > releases touch the ast files to set their timestamps newer than that > of Python.asdl. It's not an ideal solution, but it should fix the > problem. The other option would be some special Makefile magic that > detects this case and doesn't rebuild the files if no "python" binary > can be found. I have no idea how you'd do this in a portable way. this is closely related to www.python.org/sf/1393109 except that in the latter case, the system have a perfectly working Python 2.1 which chokes on the new-style constructs used in the generator script. fwiw, that bug report (from december) says iirc, various solutions to this were discussed on python-dev, but nobody seems to have done anything about it. so it's about time someone did something about it... explicitly touching the files should be good enough. </F> From thomas at python.org Fri Apr 7 10:54:39 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 7 Apr 2006 10:54:39 +0200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <4435B6B9.2070309@canterbury.ac.nz> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> Message-ID: <9e804ac0604070154k611a7878s98f1d13eeb9e085f@mail.gmail.com> On 4/7/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > > Trent Mick wrote: > > > try: > > import xml.etree.ElementTree as ET # in python >=2.5 > > except ImportError: > > ... etc ad nauseam > > For situations like this I've thought it might > be handy to be able to say > > import xml.etree.ElementTree or cElementTree or \ > elementtree.ElementTree or lxml.etree as ET That does look cute (note that you can use parentheses rather than newline-escaping to continue the line.) I assume it should come with: from (xml.etree.cElementTree or xml.etree.ElementTree or elementtree.cElementTree or elementtree.ElementTree or lxml.etree) import ElementTree as ET (Parentheses there are currently illegal.) But should it also come with: from xml.etree import (cElementTree or ElementTree) as ElementTree and combined: from xml.etree or elementtree import cElementTree or ElementTree as ElementTree and of course combined with explicit-relative imports: from .custometree or xml.etree or elementtree import cElementTree or ElementTree as ET or is that all going too far? :) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060407/6b293939/attachment.html From mal at egenix.com Fri Apr 7 11:37:40 2006 From: mal at egenix.com (M.-A. 
Lemburg) Date: Fri, 07 Apr 2006 11:37:40 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> Message-ID: <443632E4.1010906@egenix.com> Martin Blais wrote: > Hi all > > I got an evil idea for Python this morning -- Guido: no, it's not > about linked lists :-) -- , and I'd like to bounce it here. But > first, a bit of context. This has been discussed a few times before, see e.g. http://mail.python.org/pipermail/python-list/2000-January/020346.html In summary, the following points were made in the various discussions (this is from memory, so I may have forgotten a few points): * the string literal modifiers r"" and u"" are really only a cludge which should not be extended to other uses * being able to register such modifiers would result in unreadable and unmaintainable code, since the purpose of the used modifiers wouldn't be clear to the reader of a code snippet * writing i"" instead of _("") saves two key-strokes - not really enough to warrant the change * if you want to do it right, you'd also have to add iu"", ir"" for completeness * internationalization requires a lot more than just calling a function: context and domains are very important when it comes to translating strings in i18n efforts; these can easily be added to a function call as parameter, but not to a string modifier * there are lots of tools to do string extraction using the _("") notation (which also works in C); for i"" such tools would have to be rewritten > In the context of writing i18n apps, programmers have to "mark" > strings that may be internationalized in a way that > > - a special hook gets called at runtime to perform the lookup in a > catalog of translations setup for a specific language; > > - they can be extracted by an external tool to produce the keys of all > the catalogs, so that translators can update the list of keys to > translate and produce the values in the target languages. > > Usually, you bind a function to a short name, like _() and N_(), and > it looks kind-of like this:: > > _("My string to translate.") > > or > > N_("This is marked for translation") # N_() is a noop. > > pygettext does the work of extracting those patterns from the files, > doing all the parsing manually, i..e it does not use runtime Python > introspection to do this at all, it is simply a simple text parsing > algorithm (which works pretty well). I'm simplifying things a bit, > but that is the jist of how it works, for those not familiar with > i18n. > > > This morning I woke up staring at the ceiling and the only thing in my > mind was "my web app code is ugly". I had visions of LISP parentheses > with constructs like > > ... > A(P(_("Click here to forget"), href="... > ... > > (In my example, I built a library not unlike stan for creating HTML, > which is where classes A and P come from.) I find the i18n markup a > bit annoying, especially when there are many i18n strings close > together. My point is: adding parentheses around almost all strings > gets tiresome and "charges" the otherwise divine esthetics of Python > source code. > > (Okie, that's enough for context.) > > > So I had the following idea: would it not be nice if there existed a > string-prefix 'i' -- a string prefix like for the raw (r'...') and > unicode (u'...') strings -- that would mark the string as being for > i18n? 
Something like this (reusing my example above):: > > A(P(i"Click here to forget", href="... > > Notes: > > - We could then use the spiffy new AST to build a better parser to > extract those strings from the source code as well. > > - We could also have a prefix "I" for strings to be marked but not > runtime-translated, to replace the N_() strings. > > - This implies that we would have to introduce some way for these > strings to call a custom function at runtime. > > - My impression is that this process of i18n is common enough that it > does not "move" very much, and that there aren't 18 ways to go about > it either, so that it would be reasonable to consider adding it to the > language. This may be completely wrong, I am by no means an i18n > expert, please show if this is not the case. > > - Potential issue: do we still need other prefixes when 'i' is used, > and if so, how do we combine them... > > > Okay, let's push it further a bit: how about if there was some kind > of generic mechanism built-in in Python for adding new string-prefixes > which invoke callbacks when the string with the prefix is evaluated? > This could be used to implement what I'm suggesting above, and beyond. > Something like this:: > > import i18n > i18n.register_string_prefix('i', _) > i18n.register_string_prefix('I', N_) > > I'm not sure what else we might be able to do with this, you may have > other useful ideas. > > > Any comments welcome. > > cheers, > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mal%40egenix.com -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 07 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From 2005a at usenet.alexanderweb.de Fri Apr 7 13:29:35 2006 From: 2005a at usenet.alexanderweb.de (Alexander Schremmer) Date: Fri, 7 Apr 2006 13:29:35 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> Message-ID: <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> On Thu, 6 Apr 2006 20:35:51 -0400, Martin Blais wrote: > This is pretty standard > getttext stuff, if you used _() a lot I'm surprised you don't have a > need for N_(), I always needed it when I used i18n (or maybe I > misunderstood your question?). Have you thought about simply writing _ = lambda x:x instead of N_ ...? By doing that, you just need to care about one function (of course _ doesn't translate in that case and you might need to del _ afterwards). 
Kind regards, Alexander From ncoghlan at gmail.com Fri Apr 7 15:00:41 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 07 Apr 2006 23:00:41 +1000 Subject: [Python-Dev] module aliasing In-Reply-To: <17461.19678.173387.338631@montanaro.dyndns.org> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> <4434CFFE.1080607@canterbury.ac.nz> <20060406142248.GB23101@panix.com> <17461.19678.173387.338631@montanaro.dyndns.org> Message-ID: <44366279.8020407@gmail.com> skip at pobox.com wrote: > >>>> Modules should have short, lowercase names, without underscores. > >> > >> But if we don't start becoming stricter about the naming of things > >> added to the stdlib, consistency of naming is never going to improve. > >> > >> Or should this wait for Py3k? > > aahz> For contributions that are also maintained separately from Python, > aahz> I think compatibility with the external code has to have some > aahz> importance. I vote we wait for Py3k. > > Why not implement some sort of user-controlled module aliasing? That way in > 2.x you might have StringIO and stringio available at the same time. The > user could enable or disable one or both names for testing and backward > compatibility. > > This of course presumes that the api of the module doesn't change, just its > name. Something that has occasionally bugged me is the verbosity of trying an import, failing with an ImportError, then trying again. For instance, to be Py3k friendly, the simple: from StringIO import StringIO would have to become: try: from stringio import StringIO except ImportError: from StringIO import StringIO And if I wanted to check for cStringIO as well (assuming Py3k was clever enough to supply the C version if it was available), it would be: try: from stringio import StringIO except ImportError: try: from cStringIO import StringIO except ImportError: from StringIO import StringIO It would be nice if this chain could instead be written as: from stringio or cStringIO or StringIO import StringIO Similar to PEP 341, this could be pure syntactic sugar, with the actual try-except statements generated in the AST. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From g.brandl at gmx.net Fri Apr 7 15:00:21 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 07 Apr 2006 15:00:21 +0200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <4435B6B9.2070309@canterbury.ac.nz> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> Message-ID: <e15np5$bvk$1@sea.gmane.org> Greg Ewing wrote: > Trent Mick wrote: > >> try: >> import xml.etree.ElementTree as ET # in python >=2.5 >> except ImportError: > > ... etc ad nauseam > > For situations like this I've thought it might > be handy to be able to say > > import xml.etree.ElementTree or cElementTree or \ > elementtree.ElementTree or lxml.etree as ET Suppose I wanted to implement that, what would be the best strategy to follow: - change handling of IMPORT_NAME and IMPORT_FROM in ceval.c - emit different bytecodes in compile.c - directly create TryExcept AST nodes in ast.c ? 
Georg From thomas at python.org Fri Apr 7 15:12:59 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 7 Apr 2006 15:12:59 +0200 Subject: [Python-Dev] module aliasing In-Reply-To: <44366279.8020407@gmail.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> <4434CFFE.1080607@canterbury.ac.nz> <20060406142248.GB23101@panix.com> <17461.19678.173387.338631@montanaro.dyndns.org> <44366279.8020407@gmail.com> Message-ID: <9e804ac0604070612p4645f39ax1bda8f7f8c2ad13d@mail.gmail.com> On 4/7/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > It would be nice if this chain could instead be written as: > > from stringio or cStringIO or StringIO import StringIO > > Similar to PEP 341, this could be pure syntactic sugar, with the actual > try-except statements generated in the AST. It could, but it's probably easier to make the IMPORT_NAME (and possibly IMPORT_FROM) opcodes take more names from the stack, and do it all in the C code for the opcodes. And __import__ could be changed to accept a sequence of modules (it takes keyword arguments now, anyway. ;) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060407/2e1c58a1/attachment.html From ncoghlan at gmail.com Fri Apr 7 15:26:42 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 07 Apr 2006 23:26:42 +1000 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <e15np5$bvk$1@sea.gmane.org> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> <e15np5$bvk$1@sea.gmane.org> Message-ID: <44366892.6090508@gmail.com> Georg Brandl wrote: > Greg Ewing wrote: >> Trent Mick wrote: >> >>> try: >>> import xml.etree.ElementTree as ET # in python >=2.5 >>> except ImportError: >> > ... etc ad nauseam >> >> For situations like this I've thought it might >> be handy to be able to say >> >> import xml.etree.ElementTree or cElementTree or \ >> elementtree.ElementTree or lxml.etree as ET > > Suppose I wanted to implement that, what would be the best strategy > to follow: > - change handling of IMPORT_NAME and IMPORT_FROM in ceval.c > - emit different bytecodes in compile.c > - directly create TryExcept AST nodes in ast.c Definitely option 3, since you only have to modify the parser and the AST compiler. To change it in compile.c, you have to first modify the parser, the AST definition and the AST compiler in order to get the info to the bytecode compiler. To change it in ceval.c, you have to first modify the parser, the AST definition, the AST compiler and the bytecode compiler in order to get the info to the eval loop. Given that import statements aren't supposed to be in time critical code, go for the easy option :) Cheers, Nick. 
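In the meantime the fallback chain can be packaged as a small runtime helper along the lines of the import_search() function proposed earlier in the thread; the body below is only one possible implementation, and the module names are the ElementTree candidates already mentioned:

    def import_search(*names):
        """Return the first module in names that can be imported."""
        for name in names:
            try:
                module = __import__(name)
            except ImportError:
                continue
            # __import__('a.b.c') returns package 'a'; walk down to the leaf
            for part in name.split('.')[1:]:
                module = getattr(module, part)
            return module
        raise ImportError('no importable module among %r' % (names,))

    ET = import_search('xml.etree.cElementTree', 'cElementTree',
                       'elementtree.ElementTree')

This is essentially the try/except ladder that the proposed "import A or B as C" syntax would expand to, just hidden behind a function call.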
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From blais at furius.ca Fri Apr 7 16:07:26 2006 From: blais at furius.ca (Martin Blais) Date: Fri, 7 Apr 2006 10:07:26 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> Message-ID: <8393fff0604070707j7d533705j9bc529daa9c7d569@mail.gmail.com> On 4/7/06, Alexander Schremmer <2005a at usenet.alexanderweb.de> wrote: > On Thu, 6 Apr 2006 20:35:51 -0400, Martin Blais wrote: > > > This is pretty standard > > gettext stuff, if you used _() a lot I'm surprised you don't have a > > need for N_(), I always needed it when I used i18n (or maybe I > > misunderstood your question?). > > Have you thought about simply writing _ = lambda x:x instead of N_ ...? > By doing that, you just need to care about one function (of course _ > doesn't translate in that case and you might need to del _ afterwards). There are cases where you need N_() after initialization, so you need both, really. See the link I sent to Alex earlier (to the GNU manual example). From blais at furius.ca Fri Apr 7 16:13:14 2006 From: blais at furius.ca (Martin Blais) Date: Fri, 7 Apr 2006 10:13:14 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <443632E4.1010906@egenix.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <443632E4.1010906@egenix.com> Message-ID: <8393fff0604070713y3778b73bp25d4b2c91cb86714@mail.gmail.com> On 4/7/06, M.-A. Lemburg <mal at egenix.com> wrote: > Martin Blais wrote: > > Hi all > > > > I got an evil idea for Python this morning -- Guido: no, it's not > > about linked lists :-) -- , and I'd like to bounce it here. But > > first, a bit of context. > > This has been discussed a few times before, see e.g. > > http://mail.python.org/pipermail/python-list/2000-January/020346.html Oh, wow, thanks! > In summary, the following points were made in the various > discussions (this is from memory, so I may have forgotten > a few points): > > * the string literal modifiers r"" and u"" are really only a cludge > which should not be extended to other uses > > * being able to register such modifiers would result in unreadable > and unmaintainable code, since the purpose of the used modifiers > wouldn't be clear to the reader of a code snippet > > * writing i"" instead of _("") saves two key-strokes - not really > enough to warrant the change > > * if you want to do it right, you'd also have to add iu"", > ir"" for completeness Good points. Thanks for summarizing. It's certainly true that adding a general mechanism to hook custom calls into strings initialization may cause confusion if people define them differently. > * internationalization requires a lot more than just calling > a function: context and domains are very important when it > comes to translating strings in i18n efforts; these can > easily be added to a function call as parameter, but not > to a string modifier Sure, but the simple case covers the great majority of the uses. 
> * there are lots of tools to do string extraction using the > _("") notation (which also works in C); for i"" such tools > would have to be rewritten (I don't really see this as a problem.) From blais at furius.ca Fri Apr 7 16:23:05 2006 From: blais at furius.ca (Martin Blais) Date: Fri, 7 Apr 2006 10:23:05 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <4436025E.6070002@v.loewis.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <44359351.2090403@v.loewis.de> <8393fff0604061713u4f4873cdq2eb0ff4d70756ce5@mail.gmail.com> <4436025E.6070002@v.loewis.de> Message-ID: <8393fff0604070723h5be934ffl7b570cb4e7461bd7@mail.gmail.com> On 4/7/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Martin Blais wrote: > >> P(a("Click here to forget", href="... > > > > No. That's not going to work: pygettext needs to be able to extract > > the string for the catalogs. No markup, no extraction. (This is how > > you enter strings that are not meant to be translated.) > > I know; I wrote pygettext. You can pass the set of functions that count > as marker with -k/--keyword arguments. > > >> You could it also write as > >> > >> P(A_("Click here to forget", href="... > >> > >> to make it a little more obvious to the reader that there is a > >> gettext lookup here. > > > > This is not generic enough, HTML is too flexible to hard-code all > > cases. This only would help slightly. > > I don't understand. What case that you would like to express > couldn't be expressed? Hmmm thinking about this more, I think you're right, I do want to i18n all the strings passed in to HTML builder classes as positional arguments, and so I should make all the HTML tags keywords for pygettext, this would cover the majority of strings in my application. I'm not sure all the cases are handled, but for those which aren't I can't see why I couldn't hack the pygettext parser to make it do what I want, e.g. is the case were the function contains multiple strings handled? :: P(A_("Status: ", get_balance(), "dollars", href=".... ") (No need to answer to list, this is getting slightly OT, I'll check it out in more detail.) From 2005a at usenet.alexanderweb.de Fri Apr 7 16:23:20 2006 From: 2005a at usenet.alexanderweb.de (Alexander Schremmer) Date: Fri, 7 Apr 2006 16:23:20 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> <8393fff0604070707j7d533705j9bc529daa9c7d569@mail.gmail.com> Message-ID: <11ivir0mzk1xw.dlg@usenet.alexanderweb.de> On Fri, 7 Apr 2006 10:07:26 -0400, Martin Blais wrote: > There are cases where you need N_() after initialization, so you need > both, really. See the link I sent to Alex earlier (to the GNU manual > example). On the page you were referring to, I cannot find a particular use case that does not work with the idea sketched above. 
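(For readers skimming the thread, the two idioms being compared look roughly like this; the names follow common gettext conventions and the snippet is only an illustration, not code from anyone's actual application.)

    import gettext
    _ = gettext.gettext

    def N_(s):
        # no-op marker: pygettext still extracts the literal, but nothing
        # is translated at definition time
        return s

    LABELS = [N_('Activate'), N_('Deactivate')]   # marked now ...

    def render_button(label):
        return '<button>%s</button>' % _(label)   # ... translated at display time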
Kind regards, Alexander From blais at furius.ca Fri Apr 7 16:28:59 2006 From: blais at furius.ca (Martin Blais) Date: Fri, 7 Apr 2006 10:28:59 -0400 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <e12ac2$hjf$1@sea.gmane.org> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <53E81A66-E6F8-49CC-AD44-4C861AE2F77C@redivi.com> <e12ac2$hjf$1@sea.gmane.org> Message-ID: <8393fff0604070728l9302207k8f9ac8344d3fea75@mail.gmail.com> On 4/6/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > Bob Ippolito wrote: > > > > Try the 2.5 alpha 1 just released, and you'll see that the toplevel > > > package is now xml.etree. The module and class are still called > > > ElementTree, though. > > > > It would be nice to have new code be PEP 8 compliant.. > > it's not new code, and having *different* module names for the same > well-established library isn't very nice to anyone. (It's not new code, but it is new code to the stdlib.) How about doing a rename but creating some kind of alias for the current names? That would serve everyone. (I also find the current naming to be unfortunate and stumble over it every time.) From blais at furius.ca Fri Apr 7 16:38:33 2006 From: blais at furius.ca (Martin Blais) Date: Fri, 7 Apr 2006 10:38:33 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <11ivir0mzk1xw.dlg@usenet.alexanderweb.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> <8393fff0604070707j7d533705j9bc529daa9c7d569@mail.gmail.com> <11ivir0mzk1xw.dlg@usenet.alexanderweb.de> Message-ID: <8393fff0604070738i33175b56jc3b06d9526827cfc@mail.gmail.com> On 4/7/06, Alexander Schremmer <2005a at usenet.alexanderweb.de> wrote: > On Fri, 7 Apr 2006 10:07:26 -0400, Martin Blais wrote: > > > There are cases where you need N_() after initialization, so you need > > both, really. See the link I sent to Alex earlier (to the GNU manual > > example). > > On the page you were referring to, I cannot find a particular use case that > does not work with the idea sketched above. Okie. Here's one example from actual code:

    class EventEdit(EventEditPages):
        def handle( self, ctxt ):
            ...
            # Render special activate/deactivate button.
            if ctxt.event.state == 'a':
                future_state = u's'
                actstr = N_('Deactivate')
            else:
                future_state = u'a'
                actstr = N_('Activate')
            values = {'state': future_state}
            rdrbutton = HoutFormRenderer(form__state_set, values)
            page.append(rdrbutton.render(submit=actstr))

HoutFormRenderer.render() expects non-translated strings, and it performs the gettext lookup itself (this is a general library-wide policy for all widget labels). (This is just one example. I have many other use cases like this.) From tzot at mediconsa.com Fri Apr 7 17:14:27 2006 From: tzot at mediconsa.com (Christos Georgiou) Date: Fri, 7 Apr 2006 18:14:27 +0300 Subject: [Python-Dev] Patch or feature? Tix.Grid working for 2.5 References: <dvhg0b$1c9$1@sea.gmane.org> <441C87E1.3020709@v.loewis.de> Message-ID: <e15vks$dur$1@sea.gmane.org> ""Martin v. L?wis"" <martin at v.loewis.de> wrote in message news:441C87E1.3020709 at v.loewis.de... 
> Christos Georgiou wrote: >> I would like to know if supplying a patch for it sometime in the next >> couple >> of weeks would be considered a patch (since the widget currently is not >> working at all, its class in Tix.py contains just a pass statement) or a >> feature (ie extra functionality) for the 2.5 branch... > > I wouldn't object to including it before beta 1. Just in case the auto-assignment email still does not work, I have submitted the patch since March 31. OTOH, if I need to review 5 bugs/patches, I'll try to find some time to do that. From steven.bethard at gmail.com Fri Apr 7 18:04:09 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Fri, 7 Apr 2006 10:04:09 -0600 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <e152i0$79d$1@sea.gmane.org> References: <e0itla$l7d$1@sea.gmane.org> <442D356B.9080505@gmail.com> <e152i0$79d$1@sea.gmane.org> Message-ID: <d11dcfba0604070904n6f1fb323sf5a4b42a985c1fd2@mail.gmail.com> Georg Brandl wrote: > Nick Coghlan wrote: > > Georg Brandl wrote: > >> some time ago, someone posted in python-list about icons using the Python > >> logo from the new site design [1]. IMO they are looking great and would > >> be a good replacement for the old non-scaling snakes on Windows in 2.5. > > > > Those are *really* pretty. And the self-referential PIL source code and > > disassembly is just plain brilliant. . . [snip] > I've now got back an email from Andrew. About licensing, he says > """Whatever licence is easiest, I'm not too worried about it.""" > > He'll make some tweaks to the images, then I think they'll be ready. > > Now, are there any other opinions? Other than that I love the new icons and I can't wait until Python 2.5 has them? ;-) Steve -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From barry at python.org Fri Apr 7 18:20:22 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 07 Apr 2006 12:20:22 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> Message-ID: <1144426822.29923.79.camel@resist.wooz.org> On Thu, 2006-04-06 at 11:48 -0400, Martin Blais wrote: > - This implies that we would have to introduce some way for these > strings to call a custom function at runtime. Yes, definitely. For example, in Mailman we bind _() not to gettext's _() but to a special one that looks up the translation context, find the string's translation, then does the substitutions. So this is one difficult sticking point with the idea. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060407/73711be5/attachment.pgp From barry at python.org Fri Apr 7 18:21:41 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 07 Apr 2006 12:21:41 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <e8a0972d0604061015j55e4aab0wd18ca8ff931d4245@mail.gmail.com> <8393fff0604061735j466463fx4f24445c05c2ce33@mail.gmail.com> <nqf8ab77xpp1.dlg@usenet.alexanderweb.de> Message-ID: <1144426901.29922.81.camel@resist.wooz.org> On Fri, 2006-04-07 at 13:29 +0200, Alexander Schremmer wrote: > Have you thought about simply writing _ = lambda x:x instead of N_ ...? > By doing that, you just need to care about one function (of course _ > doesn't translate in that case and you might need to del _ afterwards). That's essentially what I do in Mailman, although I use def _(s): return s same-difference-ly y'rs, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060407/dda4f6a3/attachment.pgp From trentm at ActiveState.com Fri Apr 7 19:00:16 2006 From: trentm at ActiveState.com (Trent Mick) Date: Fri, 7 Apr 2006 10:00:16 -0700 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <9e804ac0604070154k611a7878s98f1d13eeb9e085f@mail.gmail.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> <9e804ac0604070154k611a7878s98f1d13eeb9e085f@mail.gmail.com> Message-ID: <20060407170016.GA23131@activestate.com> [Thomas Wouters suggested "import ... or" syntax] > or is that all going too far? :) Yes. It is overkill. The number of different ways to import ElementTree is perhaps unfortunate but it is a mostly isolated incident: effbot providing pure and c versions, it being popular and hence having other implementations *and* quickly getting into the core. The original issue was that the various import paths to ElementTree are a little confusing. Adding "or" syntax doesn't change that. Trent -- Trent Mick TrentM at ActiveState.com From g.brandl at gmx.net Fri Apr 7 19:48:58 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 07 Apr 2006 19:48:58 +0200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <44366892.6090508@gmail.com> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> <e15np5$bvk$1@sea.gmane.org> <44366892.6090508@gmail.com> Message-ID: <e168ma$iqv$1@sea.gmane.org> Nick Coghlan wrote: > Georg Brandl wrote: >> Greg Ewing wrote: >>> Trent Mick wrote: >>> >>>> try: >>>> import xml.etree.ElementTree as ET # in python >=2.5 >>>> except ImportError: >>> > ... 
etc ad nauseam >>> >>> For situations like this I've thought it might >>> be handy to be able to say >>> >>> import xml.etree.ElementTree or cElementTree or \ >>> elementtree.ElementTree or lxml.etree as ET >> >> Suppose I wanted to implement that, what would be the best strategy >> to follow: >> - change handling of IMPORT_NAME and IMPORT_FROM in ceval.c >> - emit different bytecodes in compile.c >> - directly create TryExcept AST nodes in ast.c > > Definitely option 3, since you only have to modify the parser and the AST > compiler. > > To change it in compile.c, you have to first modify the parser, the AST > definition and the AST compiler in order to get the info to the bytecode compiler. > > To change it in ceval.c, you have to first modify the parser, the AST > definition, the AST compiler and the bytecode compiler in order to get the > info to the eval loop. > > Given that import statements aren't supposed to be in time critical code, go > for the easy option :) Well, if there's an encouraging word from more developers, I can try it. Georg From ian.bollinger at gmail.com Fri Apr 7 21:01:32 2006 From: ian.bollinger at gmail.com (Ian D. Bollinger) Date: Fri, 7 Apr 2006 15:01:32 -0400 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <d11dcfba0604070904n6f1fb323sf5a4b42a985c1fd2@mail.gmail.com> References: <e0itla$l7d$1@sea.gmane.org> <442D356B.9080505@gmail.com> <e152i0$79d$1@sea.gmane.org> <d11dcfba0604070904n6f1fb323sf5a4b42a985c1fd2@mail.gmail.com> Message-ID: <86a6a97b0604071201h6eca0621gf8a8a4625378adeb@mail.gmail.com> Also, a while ago a Kevin T. Gadd posted some Python icons he had made. http://mail.python.org/pipermail/python-dev/2004-August/048273.html -- - Ian D. Bollinger -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060407/eab1f2bc/attachment.html From martin at v.loewis.de Sat Apr 8 00:45:19 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 08 Apr 2006 00:45:19 +0200 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <8393fff0604070723h5be934ffl7b570cb4e7461bd7@mail.gmail.com> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <44359351.2090403@v.loewis.de> <8393fff0604061713u4f4873cdq2eb0ff4d70756ce5@mail.gmail.com> <4436025E.6070002@v.loewis.de> <8393fff0604070723h5be934ffl7b570cb4e7461bd7@mail.gmail.com> Message-ID: <4436EB7F.3070008@v.loewis.de> Martin Blais wrote: > I'm not sure all the cases are handled, but for those which aren't I > can't see why I couldn't hack the pygettext parser to make it do what > I want, e.g. is the case were the function contains multiple strings > handled? :: > > P(A_("Status: ", get_balance(), "dollars", href=".... ") > > > (No need to answer to list, this is getting slightly OT, I'll check it > out in more detail.) I'll answer it anyway, because this is important enough for anybody to understand :-) *Never* try to do i18n that way. Don't combine fragments through concatenation. Instead, always use placeholders. IOW, the msgid should be "Status: %s dollars" or, if you have multiple placeholders "Status: %(balance)s dollars" If you have many fragments, the translator gets the challenge of translating "dollars". Now, this might need to be translated differently in different contexts (and perhaps even depending on the value of balance); the translator must always get the complete message as a single piece. 
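(A concrete sketch of the difference, reusing the get_balance() call from the snippet quoted above -- only an illustration, not code from the application being discussed:)

    balance = get_balance()

    # fragments: the translator only ever sees "Status: " and "dollars"
    msg = _("Status: ") + str(balance) + " " + _("dollars")

    # placeholder: the translator sees, and can reorder, the whole sentence
    msg = _("Status: %(balance)s dollars") % {'balance': balance}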
Regards, Martin From martin at v.loewis.de Sat Apr 8 00:55:20 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 08 Apr 2006 00:55:20 +0200 Subject: [Python-Dev] packaging/bootstrap issue In-Reply-To: <e152md$7oi$1@sea.gmane.org> References: <200604071239.03277.anthony@interlink.com.au> <e152md$7oi$1@sea.gmane.org> Message-ID: <4436EDD8.6040606@v.loewis.de> Fredrik Lundh wrote: > this is closely related to > > www.python.org/sf/1393109 > > except that in the latter case, the system have a perfectly working > Python 2.1 which chokes on the new-style constructs used in the > generator script. > > fwiw, that bug report (from december) says > > iirc, various solutions to this were discussed on python-dev, > but nobody seems to have done anything about it. > > so it's about time someone did something about it... explicitly touching > the files should be good enough. Maybe I'm misunderstanding, but I doubt that Anthony's touching the files on his exported tree would help with #1393109 Regards, Martin From barry at python.org Sat Apr 8 02:05:57 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 07 Apr 2006 20:05:57 -0400 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <4436EB7F.3070008@v.loewis.de> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <44359351.2090403@v.loewis.de> <8393fff0604061713u4f4873cdq2eb0ff4d70756ce5@mail.gmail.com> <4436025E.6070002@v.loewis.de> <8393fff0604070723h5be934ffl7b570cb4e7461bd7@mail.gmail.com> <4436EB7F.3070008@v.loewis.de> Message-ID: <1144454757.21556.1.camel@resist.wooz.org> On Sat, 2006-04-08 at 00:45 +0200, "Martin v. L?wis" wrote: > *Never* try to do i18n that way. Don't combine fragments through > concatenation. Instead, always use placeholders. Martin is of course absolutely right! > If you have many fragments, the translator gets the challenge of > translating "dollars". Now, this might need to be translated differently > in different contexts (and perhaps even depending on the value of > balance); the translator must always get the complete message > as a single piece. Plus, if you have multiple placeholders, the order may change in some translations. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060407/cdf97296/attachment.pgp From kbk at shore.net Sat Apr 8 05:04:00 2006 From: kbk at shore.net (Kurt B. Kaiser) Date: Fri, 7 Apr 2006 23:04:00 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200604080304.k383409n018786@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 391 open ( +2) / 3142 closed (+25) / 3533 total (+27) Bugs : 898 open ( -3) / 5731 closed (+44) / 6629 total (+41) RFE : 215 open ( +1) / 207 closed ( +1) / 422 total ( +2) New / Reopened Patches ______________________ give round() kwargs (mostly for PFAs) (2006-03-28) CLOSED http://python.org/sf/1460496 opened by Wesley J. 
Chun Improved PySet C API (2006-03-25) CLOSED http://python.org/sf/1458476 reopened by bwarsaw "const int" was truncated to "char" (2006-03-31) http://python.org/sf/1461822 opened by Hirokazu Yamamoto ssl build fails due to undefined NETLINK_ stuff (2006-03-31) CLOSED http://python.org/sf/1462080 opened by Arkadiusz Miskiewicz Patch for 1115886 - splitext incorrectly handles filenames l (2006-03-31) http://python.org/sf/1462106 opened by Mike Foord Functioning Tix.Grid (2006-03-31) http://python.org/sf/1462222 opened by Christos Georgiou fdopen(..., "a") (2006-03-31) CLOSED http://python.org/sf/1462227 opened by Ralf Schmitt custom comparison function for bisect module (2006-03-31) http://python.org/sf/1462228 opened by splitscreen Fix for #1250170 (2006-03-31) CLOSED http://python.org/sf/1462230 opened by Kuba Ko??czyk Fix of "disgusting hack" in Misc.after (2006-03-31) CLOSED http://python.org/sf/1462235 opened by Christos Georgiou fdopen(..., "a") (2006-03-31) CLOSED http://python.org/sf/1462245 opened by Ralf Schmitt Fix for bug #1445068 getpass.getpass query on stderr (2006-03-31) CLOSED http://python.org/sf/1462298 reopened by splitscreen Fix for bug #1445068 getpass.getpass query on stderr (2006-03-31) CLOSED http://python.org/sf/1462298 opened by splitscreen Pickle protocol 2 fails on private slots (2006-03-31) CLOSED http://python.org/sf/1462313 opened by ?iga Seilnacht upgrade pyexpat to expat 2.0.0 (2006-03-31) http://python.org/sf/1462338 opened by Trent Mick Possible fix to #1334662 (int() wrong answers) (2006-03-31) http://python.org/sf/1462361 opened by Ivan Vilata i Balaguer Patch for bug #1458017 Log._log needs to be more forgiving (2006-03-31) CLOSED http://python.org/sf/1462414 opened by splitscreen Patch for bug #931877 Segfault in object_reduce_ex (2006-04-01) http://python.org/sf/1462488 opened by ?iga Seilnacht Spelling correction in libsignal.tex (2006-03-31) CLOSED http://python.org/sf/1462496 opened by lmllc bug #1452246 and patch #1087808; sgmllib entities (2006-03-31) CLOSED http://python.org/sf/1462498 opened by Rares Vernica URI parsing library (2006-04-01) http://python.org/sf/1462525 opened by Paul Jimenez replace md5 impl. with one having a more free license (2005-02-07) CLOSED http://python.org/sf/1117961 reopened by doko urllib2.ProxyHandler broken recently for non-userinfo case (2006-04-01) CLOSED http://python.org/sf/1462790 opened by John J Lee Save the hash value of tuples (2006-04-01) CLOSED http://python.org/sf/1462796 opened by Noam Raphael Remove broken code from urllib2 (2006-04-02) CLOSED http://python.org/sf/1463012 opened by John J Lee Bugfix for #847665 (XMLGenerator dies in namespace mode) (2006-04-02) http://python.org/sf/1463026 opened by Nikolai Grigoriev clean up new calendar locale usage (2006-04-02) http://python.org/sf/1463288 opened by Neal Norwitz Add .format() method to str and unicode (2006-04-03) CLOSED http://python.org/sf/1463370 opened by crutcher Improved generator finalization (2006-04-03) http://python.org/sf/1463867 opened by Phillip J. 
Eby Simple test for os.startfile (2006-04-04) CLOSED http://python.org/sf/1464062 reopened by loewis Simple test for os.startfile (2006-04-04) CLOSED http://python.org/sf/1464062 opened by Thomas Heller fixed handling of nested comments in mail addresses (2006-04-05) http://python.org/sf/1464708 opened by William McVey Patches Closed ______________ give round() kwargs (mostly for PFAs) (2006-03-29) http://python.org/sf/1460496 closed by gbrandl Install PKG-INFO with packages (2006-03-27) http://python.org/sf/1459476 closed by pje Support different GPG keys in upload --sign (2006-03-23) http://python.org/sf/1457316 closed by pje Improved PySet C API (2006-03-25) http://python.org/sf/1458476 closed by bwarsaw ssl build fails due to undefined NETLINK_ stuff (2006-03-31) http://python.org/sf/1462080 closed by loewis fdopen(..., "a") (2006-03-31) http://python.org/sf/1462227 closed by gbrandl Fix for #1250170 (2006-03-31) http://python.org/sf/1462230 closed by gbrandl Fix of "disgusting hack" in Misc.after (2006-03-31) http://python.org/sf/1462235 closed by gbrandl fdopen(..., "a") (2006-03-31) http://python.org/sf/1462245 deleted by titty Fix for bug #1445068 getpass.getpass query on stderr (2006-03-31) http://python.org/sf/1462298 closed by splitscreen Fix for bug #1445068 getpass.getpass query on stderr (2006-03-31) http://python.org/sf/1462298 closed by gbrandl Pickle protocol 2 fails on private slots (2006-03-31) http://python.org/sf/1462313 closed by gbrandl Fix bug read() would hang on ssl socket if settimeout() used (2005-12-14) http://python.org/sf/1380952 closed by gbrandl fix smtplib when local host isn't resolvable in dns (2005-08-12) http://python.org/sf/1257988 closed by gbrandl Patch for bug #1458017 Log._log needs to be more forgiving (2006-03-31) http://python.org/sf/1462414 closed by gbrandl Spelling correction in libsignal.tex (2006-04-01) http://python.org/sf/1462496 closed by gbrandl bug #1452246 and patch #1087808; sgmllib entities (2006-04-01) http://python.org/sf/1462498 closed by gbrandl sgmllib.SGMLParser does not unescape attribute values; patch (2004-12-19) http://python.org/sf/1087808 closed by gbrandl Zipfile fix create_system information (06/29/05) http://python.org/sf/1229511 closed by sf-robot attributes for urlsplit, urlparse result (2002-10-16) http://python.org/sf/624325 closed by fdrake Update docs for zlib.decompressobj.flush() (2006-03-27) http://python.org/sf/1459631 closed by gbrandl Configure patch for Mac OS X 10.3 (2006-01-27) http://python.org/sf/1416559 closed by gbrandl replace md5 impl. with one having a more free license (2005-02-07) http://python.org/sf/1117961 closed by doko urllib2.ProxyHandler broken recently for non-userinfo case (2006-04-01) http://python.org/sf/1462790 closed by gbrandl Save the hash value of tuples (2006-04-01) http://python.org/sf/1462796 closed by gbrandl Add addition and moving messages to mailbox module (2003-02-14) http://python.org/sf/686545 closed by gbrandl Remove broken code from urllib2 (2006-04-02) http://python.org/sf/1463012 closed by gbrandl Add .format() method to str and unicode (2006-04-03) http://python.org/sf/1463370 closed by gbrandl Simple test for os.startfile (2006-04-04) http://python.org/sf/1464062 closed by nnorwitz Simple test for os.startfile (2006-04-04) http://python.org/sf/1464062 closed by theller const PyDict_Type ? (2006-03-13) http://python.org/sf/1448488 closed by anthonybaxter New / Reopened Bugs ___________________ Why not drop the _active list? 
(2006-03-29) http://python.org/sf/1460493 opened by HVB bei TUP ctypes extension does not compile on Mac OS 10.3.9 (2006-03-29) http://python.org/sf/1460514 opened by Andrew Dalke random.sample can raise KeyError (2006-03-28) CLOSED http://python.org/sf/1460340 reopened by tim_one Misleading documentation for socket.fromfd() (2006-03-29) CLOSED http://python.org/sf/1460564 opened by Michael Smith Python 2.4.2 does not compile on SunOS 5.10 using gcc (2006-03-29) CLOSED http://python.org/sf/1460605 opened by Jakob Schi?tz Broken __hash__ for Unicode objects (2006-03-29) CLOSED http://python.org/sf/1460886 opened by Joe Wreschnig test_winsound fails in 2.4.3 (2006-03-30) CLOSED http://python.org/sf/1461115 opened by Tim Peters xmlrpclib.binary doesn't exist (2006-03-30) CLOSED http://python.org/sf/1461610 opened by Chris AtLee Invalid modes crash open() (2006-03-31) CLOSED http://python.org/sf/1461783 opened by splitscreen fdopen() not guaranteed to have Python semantics (2006-03-31) CLOSED http://python.org/sf/1461855 opened by John Levon Python does not check for POSIX compliant open() modes (2006-03-31) http://python.org/sf/1462152 opened by splitscreen os.startfile() still doesn't work with Unicode filenames (2006-03-16) CLOSED http://python.org/sf/1451503 reopened by theller Coercion rules incomplete in Reference Manual (2006-03-31) CLOSED http://python.org/sf/1462278 opened by ?iga Seilnacht socket.ssl won't work together with socket.settimeout on Win (2006-03-31) http://python.org/sf/1462352 opened by Georg Brandl udp multicast setsockopt fails (2006-03-31) http://python.org/sf/1462440 opened by Alan G StopIteration raised in body of 'with' statement suppressed (2006-04-01) CLOSED http://python.org/sf/1462485 opened by Tim Delaney Errors in PCbuild (2006-04-01) CLOSED http://python.org/sf/1462700 opened by Tim Delaney urllib2 bails if the localhost doens't have a fqdn hostname (2006-04-01) CLOSED http://python.org/sf/1462706 opened by splitscreen test_minidom.py fails for Python-2.4.3 on SUSE 9.3 (2006-04-02) http://python.org/sf/1463043 opened by Richard Townsend problems with os.system and shell redirection on Windows XP (2006-04-02) CLOSED http://python.org/sf/1463104 opened by Manlio Perillo missing svnversion = configure error (2006-04-04) CLOSED http://python.org/sf/1463559 opened by Anthony Baxter logging.StreamHandler ignores argument if it compares False (2006-04-03) http://python.org/sf/1463840 opened by Paul Winkler can't build on cygwin - PyArg_Parse_SizeT undefined (2006-04-04) CLOSED http://python.org/sf/1463926 opened by Anthony Baxter curses library in python 2.4.3 broken (2006-04-04) http://python.org/sf/1464056 opened by nomind ctypes should be able to link with installed libffi (2006-04-04) http://python.org/sf/1464444 opened by Matthias Klose ctypes testsuite hardcodes libc soname (2006-04-04) CLOSED http://python.org/sf/1464506 opened by Matthias Klose Changes to generator object's gi_frame attr (2006-04-04) http://python.org/sf/1464571 opened by Collin Winter PyList_GetItem() clarification (2006-04-04) CLOSED http://python.org/sf/1464658 opened by Stefan Seefeld test_urllib2 and test_mimetypes (2006-04-06) CLOSED http://python.org/sf/1464978 opened by Anthony Baxter CSV regression in 2.5a1: multi-line cells (2006-04-05) http://python.org/sf/1465014 opened by David Goodger Error with 2.5a1 MSI installer (2006-04-05) CLOSED http://python.org/sf/1465093 opened by Paul Moore HP-UX11i installation failure (2006-04-05) http://python.org/sf/1465408 opened by Ralf W. 
Grosse-Kunstleve Cygwin installer should create a link to libpython2.5.dll.a (2006-04-06) http://python.org/sf/1465554 opened by Miki Tebeka zipfile - arcname must be encode in cp437 (2006-04-06) CLOSED http://python.org/sf/1465600 opened by Jens Diemer str.decode('rot13') produces Unicode (2006-04-06) CLOSED http://python.org/sf/1465619 opened by Kent Johnson test_logging hangs on cygwin (2006-04-06) http://python.org/sf/1465643 opened by Miki Tebeka test_grp & test_pwd fail (2006-04-06) http://python.org/sf/1465646 opened by Richard Townsend bdist_wininst preinstall script support is broken in 2.5a1 (2006-04-06) http://python.org/sf/1465834 opened by Paul Moore HP-UX11i: illegal combination of compilation and link flags (2006-04-06) http://python.org/sf/1465838 opened by Ralf W. Grosse-Kunstleve base64 module ignores non-alphabet characters (2006-04-07) http://python.org/sf/1466065 opened by Seo Sanghyeon ImportError: Module _subprocess not found (2006-04-07) http://python.org/sf/1466301 opened by Africanis Bogus SyntaxError in listcomp (2006-04-07) http://python.org/sf/1466641 opened by Tim Peters Bugs Closed ___________ random.sample can raise KeyError (2006-03-28) http://python.org/sf/1460340 closed by tim_one random.sample can raise KeyError (2006-03-29) http://python.org/sf/1460340 closed by gbrandl Misleading documentation for socket.fromfd() (2006-03-29) http://python.org/sf/1460564 closed by gbrandl Python 2.4.2 does not compile on SunOS 5.10 using gcc (2006-03-29) http://python.org/sf/1460605 closed by gbrandl Malloc error on MacOSX/imaplib (2006-03-24) http://python.org/sf/1457783 closed by etrepum Broken __hash__ for Unicode objects (2006-03-29) http://python.org/sf/1460886 closed by lemburg test_winsound fails in 2.4.3 (2006-03-30) http://python.org/sf/1461115 closed by loewis xmlrpclib.binary doesn't exist (2006-03-30) http://python.org/sf/1461610 closed by gbrandl Invalid modes crash open() (2006-03-31) http://python.org/sf/1461783 closed by splitscreen fdopen() not guaranteed to have Python semantics (2006-03-31) http://python.org/sf/1461855 closed by gbrandl pickle dump argument description vague (2004-02-05) http://python.org/sf/891249 closed by gbrandl Poor documentation of new-style classes (2004-05-25) http://python.org/sf/960340 closed by gbrandl __new__ not defined? 
(2004-08-28) http://python.org/sf/1018315 closed by gbrandl os.startfile() still doesn't work with Unicode filenames (2006-03-16) http://python.org/sf/1451503 closed by loewis os.startfile() still doesn't work with Unicode filenames (2006-03-16) http://python.org/sf/1451503 closed by gbrandl calendar.weekheader(n): n should mean chars not bytes (2004-05-04) http://python.org/sf/947906 closed by doerwalter makesetup fails to tolerate valid linker options (2006-03-20) http://python.org/sf/1454227 closed by gbrandl traceback.format_exception_only() and SyntaxError (2006-03-11) http://python.org/sf/1447885 closed by gbrandl gethostbyname(gethostname()) fails on misconfigured system (2005-08-02) http://python.org/sf/1250170 closed by gbrandl Coercion rules incomplete in Reference Manual (2006-03-31) http://python.org/sf/1462278 closed by gbrandl getpass.getpass queries on STDOUT not STERR (*nix) (2006-03-07) http://python.org/sf/1445068 closed by gbrandl poplib.POP3_SSL() class incompatible with socket.timeout (2005-11-10) http://python.org/sf/1353269 closed by gbrandl socket.setdefaulttimeout() breaks smtplib.starttls() (2005-01-08) http://python.org/sf/1098618 closed by gbrandl SSL_pending is not used (2005-03-18) http://python.org/sf/1166206 closed by gbrandl Pickle protocol 2 fails on private slots. (2006-03-05) http://python.org/sf/1443328 closed by gbrandl os.open() Documentation (2006-03-06) http://python.org/sf/1444104 closed by gbrandl Iterator on Fileobject gives no MemoryError (2005-04-06) http://python.org/sf/1177964 closed by gbrandl CVS migration not in www.python.org docs (2005-11-07) http://python.org/sf/1350498 closed by bcannon Log._log needs to be more forgiving... (2006-03-24) http://python.org/sf/1458017 closed by gbrandl StopIteration raised in body of 'with' statement suppressed (2006-04-01) http://python.org/sf/1462485 closed by pje htmllib doesn't properly substitute entities (2006-03-17) http://python.org/sf/1452246 closed by tim_one msgfmt.py: fuzzy messages not correctly found (2006-03-16) http://python.org/sf/1451341 closed by gbrandl Errors in PCbuild (2006-04-01) http://python.org/sf/1462700 closed by loewis urllib2 bails if the localhost doens't have a fqdn hostname (2006-04-01) http://python.org/sf/1462706 closed by gbrandl tarfile.open bug / corrupt data (02/08/06) http://python.org/sf/1427552 closed by sf-robot problems with os.system and shell redirection on Windows XP (2006-04-02) http://python.org/sf/1463104 closed by tim_one Mac Framework build fails on intel (2006-03-10) http://python.org/sf/1447587 closed by gbrandl [win32] stderr atty encoding not set (2006-02-01) http://python.org/sf/1421664 closed by loewis sliceobject ssize_t (and index) not completed (2006-03-22) http://python.org/sf/1456470 closed by loewis missing svnversion == configure error (2006-04-03) http://python.org/sf/1463559 closed by loewis can't build on cygwin - PyArg_Parse_SizeT undefined (2006-04-03) http://python.org/sf/1463926 closed by nnorwitz ctypes testsuite hardcodes libc soname (2006-04-04) http://python.org/sf/1464506 closed by theller PyList_GetItem() clarification (2006-04-05) http://python.org/sf/1464658 closed by gbrandl test_urllib2 and test_mimetypes (2006-04-06) http://python.org/sf/1464978 closed by anthonybaxter Error with 2.5a1 MSI installer (2006-04-05) http://python.org/sf/1465093 closed by loewis zipfile - arcname must be encode in cp437 (2006-04-06) http://python.org/sf/1465600 closed by gbrandl str.decode('rot13') produces Unicode (2006-04-06) 
http://python.org/sf/1465619 closed by gbrandl New / Reopened RFE __________________ Scripts invoked by -m should trim exceptions (2006-04-01) http://python.org/sf/1462486 opened by Tim Delaney Allowing the definition of constants (2006-04-05) http://python.org/sf/1465406 reopened by ciw42 Allowing the definition of constants (2006-04-05) http://python.org/sf/1465406 opened by Chris Wilson RFE Closed __________ Allowing the definition of constants (2006-04-05) http://python.org/sf/1465406 closed by rhettinger calendar._firstweekday is too hard-wired (2005-04-27) http://python.org/sf/1190596 closed by doerwalter From tim.peters at gmail.com Sat Apr 8 07:55:55 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 8 Apr 2006 01:55:55 -0400 Subject: [Python-Dev] Who understands _ssl.c on Windows? Message-ID: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> An "impossible problem" showed up on Bug Day, which got more impossible the more I looked at it: http://www.python.org/sf/1462352 See that for details. The short course is that socketmodule.c and _ssl.c disagree about the offset at which the sock_timeout member of a PySocketSockObject struct lives. As a result, timeouts set on a socket "change by magic" when _ssl.c looks at them. This is certainly the case on the trunk under Windows on my box. No idea about other platforms or Python versions. When someone figures out the cause, those should be obvious ;-) _Perhaps_ it's the case that doubles are aligned to an 8-byte boundary when socketmodule.c is compiled, but (for some unknown reason) only to a 4-byte boundary when _ssl.c is compiled. Although that seems to match the details in the bug report, I have no theory for how that could happen (I don't see any MS packing pragmas anywhere). From mwh at python.net Sat Apr 8 10:13:45 2006 From: mwh at python.net (Michael Hudson) Date: Sat, 08 Apr 2006 09:13:45 +0100 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> (Tim Peters's message of "Sat, 8 Apr 2006 01:55:55 -0400") References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> Message-ID: <2mhd54pjra.fsf@starship.python.net> "Tim Peters" <tim.peters at gmail.com> writes: > _Perhaps_ it's the case that doubles are aligned to an 8-byte boundary > when socketmodule.c is compiled, but (for some unknown reason) only to > a 4-byte boundary when _ssl.c is compiled. Although that seems to > match the details in the bug report, I have no theory for how that > could happen (I don't see any MS packing pragmas anywhere). Well, poking a bit reveals that _ssl and _socket are built by quite different mechanisms: _socket by a .vcproj but _ssl by "_ssl.mak". I don't see anything overly suspicious in _ssl.mak, but I don't really know much about Windows compiler options... Cheers, mwh -- > so python will fork if activestate starts polluting it? I find it more relevant to speculate on whether Python would fork if the merpeople start invading our cities riding on the backs of giant king crabs. 
-- Brian Quinlan, comp.lang.python From phil at riverbankcomputing.co.uk Sat Apr 8 09:50:49 2006 From: phil at riverbankcomputing.co.uk (Phil Thompson) Date: Sat, 8 Apr 2006 08:50:49 +0100 Subject: [Python-Dev] The "i" string-prefix: I18n'ed strings In-Reply-To: <1144454757.21556.1.camel@resist.wooz.org> References: <8393fff0604060848q3a634cb4oe30cee3efc7353f@mail.gmail.com> <4436EB7F.3070008@v.loewis.de> <1144454757.21556.1.camel@resist.wooz.org> Message-ID: <200604080850.49845.phil@riverbankcomputing.co.uk> On Saturday 08 April 2006 1:05 am, Barry Warsaw wrote: > On Sat, 2006-04-08 at 00:45 +0200, "Martin v. L?wis" wrote: > > *Never* try to do i18n that way. Don't combine fragments through > > concatenation. Instead, always use placeholders. > > Martin is of course absolutely right! > > > If you have many fragments, the translator gets the challenge of > > translating "dollars". Now, this might need to be translated differently > > in different contexts (and perhaps even depending on the value of > > balance); the translator must always get the complete message > > as a single piece. > > Plus, if you have multiple placeholders, the order may change in some > translations. I haven't been following this discussion, so something similar may already have been mentioned. The way Qt handles this is to use %1, %2 etc as placeholders. The numbers refer to the arguments (the order of which is obviously fixed by the programmer). The translator determines the order in which the placeholders appear in the format string. Phil From g.brandl at gmx.net Sat Apr 8 10:26:59 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 08 Apr 2006 10:26:59 +0200 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <2mhd54pjra.fsf@starship.python.net> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <2mhd54pjra.fsf@starship.python.net> Message-ID: <e17s4j$vr$1@sea.gmane.org> Michael Hudson wrote: > "Tim Peters" <tim.peters at gmail.com> writes: > >> _Perhaps_ it's the case that doubles are aligned to an 8-byte boundary >> when socketmodule.c is compiled, but (for some unknown reason) only to >> a 4-byte boundary when _ssl.c is compiled. Although that seems to >> match the details in the bug report, I have no theory for how that >> could happen (I don't see any MS packing pragmas anywhere). > > Well, poking a bit reveals that _ssl and _socket are built by quite > different mechanisms: _socket by a .vcproj but _ssl by "_ssl.mak". I > don't see anything overly suspicious in _ssl.mak, but I don't really > know much about Windows compiler options... A mailing list post found via Google suggests that Visual Studio automatically sets the struct member alignment to 4 bytes when building via old .mak files, for compatibility with older VC++. Anyway, a /Zp8 flag should correct this. Georg From martin at v.loewis.de Sat Apr 8 11:38:14 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 08 Apr 2006 11:38:14 +0200 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> Message-ID: <44378486.3060008@v.loewis.de> Tim Peters wrote: > _Perhaps_ it's the case that doubles are aligned to an 8-byte boundary > when socketmodule.c is compiled, but (for some unknown reason) only to > a 4-byte boundary when _ssl.c is compiled. 
This is indeed what happens, because of what I consider three bugs: one in Python, and two in the Platform SDK: - _ssl.mak fails to define WIN32; this is bug 1 - because of that, WinSock2.h includes pshpack4.h at the beginning (please take a look at the comment above that include) - WinSock2.h includes windows.h (and some other stuff). This ultimately *defines* WIN32. I haven't traced where exactly it gets defined, but verified that it does before the end of WinSock2.h. Most likely, it comes from Ole2.h. This is bug 2: if WIN32 is a flag to the SDK headers, they shouldn't set it themselves. If they want to assume it is always defined, they should just make the things that it currently affects unconditional. - WinSock2 then includes poppack.h, under the same condition (WIN32 not defined) as pshpack4.h. This is bug 3: the code assumes that the condition doesn't change, but it might. They should instead set a macro (say, WINSOCK2_PSHPACK4) at push time, and then check for it to determine if they need to pop. Anyway, the fix is then straight-forward: just add /DWIN32 to _ssl.mak. Regards, Martin From martin at v.loewis.de Sat Apr 8 11:40:13 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 08 Apr 2006 11:40:13 +0200 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <e17s4j$vr$1@sea.gmane.org> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <2mhd54pjra.fsf@starship.python.net> <e17s4j$vr$1@sea.gmane.org> Message-ID: <443784FD.6090604@v.loewis.de> Georg Brandl wrote: > A mailing list post found via Google suggests that Visual Studio automatically > sets the struct member alignment to 4 bytes when building via old .mak files, > for compatibility with older VC++. Most likely, the poster didn't understand what's going on. I very much doubt nmake does such a thing. More likely, their makefile had the same bug as ours (i.e. lacking a WIN32 define). Regards, Martin From brett at python.org Sat Apr 8 23:47:28 2006 From: brett at python.org (Brett Cannon) Date: Sat, 8 Apr 2006 14:47:28 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP Message-ID: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> OK, I am going to write the PEP I proposed a week or so ago, listing all modules and packages within the stdlib that are maintained externally so we have a central place to go for contact info or where to report bugs on issues. This should only apply to modules that want bugs reported outside of the Python tracker and have a separate dev track. People who just use the Python repository as their mainline version can just be left out. For each package I need the name of the module, the name of the maintainer, homepage of the module outside of Python, and where to report bugs. Do people think we need to document the version that has been imported into Python and when that was done as well? Anyway, here is a list of the packages that I think have outside maintenance (or at least have been at some point). Anyone who has info on them that I need, please let me know the details. Also, if I missed any, obviously speak up: - bsddb (still external?) - sqlite - ctypes - cjkcodecs (still external?) - expat - email - logging (still external?) 
- Tix - ElementTree From barry at python.org Sun Apr 9 00:20:46 2006 From: barry at python.org (Barry Warsaw) Date: Sat, 08 Apr 2006 18:20:46 -0400 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <1144534846.21566.13.camel@resist.wooz.org> On Sat, 2006-04-08 at 14:47 -0700, Brett Cannon wrote: > - email This has a standalone release, but development and bug reports should all happen in the Python project. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060408/964f5f77/attachment.pgp From greg.ewing at canterbury.ac.nz Sun Apr 9 02:42:25 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 09 Apr 2006 12:42:25 +1200 Subject: [Python-Dev] elementtree in stdlib In-Reply-To: <e15np5$bvk$1@sea.gmane.org> References: <44348B3E.2000607@canterbury.ac.nz> <9E095442-9AA2-4148-9238-F1527E511760@gmail.com> <44354E30.7080307@infrae.com> <20060406191840.GA12465@activestate.com> <4435B6B9.2070309@canterbury.ac.nz> <e15np5$bvk$1@sea.gmane.org> Message-ID: <44385871.5030406@canterbury.ac.nz> Georg Brandl wrote: > Suppose I wanted to implement that, what would be the best strategy > to follow: > - change handling of IMPORT_NAME and IMPORT_FROM in ceval.c > - emit different bytecodes in compile.c > - directly create TryExcept AST nodes in ast.c I'd probably go for the third option. Isn't that the sort of thing the fancy new ast system is designed for? -- Greg From tjreedy at udel.edu Sun Apr 9 05:10:17 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 8 Apr 2006 23:10:17 -0400 Subject: [Python-Dev] need info for externally maintained modules PEP References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <e19tvf$5hk$1@sea.gmane.org> "Brett Cannon" <brett at python.org> wrote in message news:bbaeab100604081447x5f368d82qc242945e467dea7c at mail.gmail.com... > This should only apply to modules that want > bugs reported outside of the Python tracker and have a separate dev > track. People who just use the Python repository as their mainline > version can just be left out. If you include the latter in a separate section (modules with separate release, but using Python repository and tracker, like email), then people will know they have not just been forgotten about and indeed should have bugs reported in the regular tracker ;-). TJR From tim.peters at gmail.com Sun Apr 9 06:49:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 9 Apr 2006 00:49:18 -0400 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> Message-ID: <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> So Martin fixed _ssl.c on Windows (thanks! what a subtle pit that turned out to be), and I restored the test_timeout() test in test_socket_ssl. 
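(For anyone who wants to see the effect Martin described without firing up a Windows compiler: ctypes can reproduce the offset shift. This is only an illustration of 4- vs. 8-byte packing on a simplified two-field struct, not the real PySocketSockObject layout and not code from the fix itself.)

    from ctypes import Structure, c_int, c_double

    class Packed4(Structure):     # roughly what _ssl.c saw, with pshpack4.h in effect
        _pack_ = 4
        _fields_ = [('sock_fd', c_int), ('sock_timeout', c_double)]

    class Natural(Structure):     # what socketmodule.c saw: the compiler's default alignment
        _fields_ = [('sock_fd', c_int), ('sock_timeout', c_double)]

    print Packed4.sock_timeout.offset   # 4
    print Natural.sock_timeout.offset   # 8 with MSVC's default /Zp8; may differ on other ABIs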
That test was introduced on Bug Day, but: a) First got fiddled to exclude Windows, because the _ssl.c bug made it impossible for the test to pass on Windows; and, b) Then got disabled on all boxes, because the gmail.org address it tried to connect to stopped responding to anyone, and all buildbot test runs on all non-Windows boxes failed as a result for hours that day. We have one oddity remaining: now that test fails on, and only on, Trent's "x86 W2k trunk" buildbot. It always times out there, and never times out elsewhere. It times out during a socket.connect() call on that buildbot. The socket timeout is set to 30 seconds (seems a very long time, right?). Trent (and anyone else who wants to play along), what happens if you do this by hand in a current trunk or 2.4 build?: import socket s = socket.socket() s.settimeout(30.0) s.connect(("gmail.org", 995)) On my box (when gmail.org:995 responds at all), the connect succeeds in approximately 0.03 seconds, giving 29.97 seconds to spare ;-) Can you identify a reason for why it times out on the Win2K buildbot? (beats me -- firewall issue, DNS sloth, ...?) Anyone: is it "a bug" or "a feature" that a socket.connect() call that times out may raise either socket.timeout or socket.error? Trent's box raises socket.error. On my box, when I set a timeout value small enough so that it _does_ time out, socket.timeout is raised: >>> import socket >>> s = socket.socket() >>> s.settimeout(0.01) >>> s.connect(("gmail.org", 995)) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 1, in connect socket.timeout: timed out From martin at v.loewis.de Sun Apr 9 09:16:49 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 09:16:49 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <4438B4E1.8040806@v.loewis.de> Brett Cannon wrote: > - expat Not sure whether you mean the Expat parser proper here, or the pyexpat module: both are externally maintained, also; pyexpat is part of PyXML. OTOH, people can just consider PyXML to be part of Python, and I synchronize the Python sources with the PyXML sources from time to time. Regards, Martin From brett at python.org Sun Apr 9 09:21:17 2006 From: brett at python.org (Brett Cannon) Date: Sun, 9 Apr 2006 00:21:17 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <4438B4E1.8040806@v.loewis.de> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <4438B4E1.8040806@v.loewis.de> Message-ID: <bbaeab100604090021ube72c96xa3578bd46afebf9a@mail.gmail.com> On 4/9/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Brett Cannon wrote: > > - expat > > Not sure whether you mean the Expat parser proper here, or the pyexpat > module: both are externally maintained, I was thinking the parser, but if pyexpat is externally maintained, then both of them. > also; pyexpat is part of PyXML. > OTOH, people can just consider PyXML to be part of Python, and I > synchronize the Python sources with the PyXML sources from time to time. OK, so then just the info for expat. -Brett From martin at v.loewis.de Sun Apr 9 09:48:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 09:48:35 +0200 Subject: [Python-Dev] Who understands _ssl.c on Windows? 
In-Reply-To: <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> Message-ID: <4438BC53.5010405@v.loewis.de> Tim Peters wrote: > Anyone: is it "a bug" or "a feature" that a socket.connect() call > that times out may raise either socket.timeout or socket.error? > Trent's box raises socket.error. On my box, when I set a timeout > value small enough so that it _does_ time out, socket.timeout is > raised: That should give another clue as to what happened: It is *not* the select call that timed out - this would have raised a socket_timeout exception, due to internal_connect setting the timeout variable. Looking at the code, the following operations might have caused this error 10060: - getsockaddrarg - connect, returning WSAETIMEDOUT instead of WSAEWOULDBLOCK The code in getsockaddrarg is hard to follow, but I think it should only ever cause socket.gaierror. So why would connect ignore the non-blocking mode? the earlier call to ioctlsocket might have failed - unfortunately, we wouldn't know about that, because the return value of ioctlsocket (and all other system calls in internal_setblocking) is ignored. You might want to take a look at this KB article: http://support.microsoft.com/kb/179942/EN-US/ which claims that the WSA_FLAG_OVERLAPPED flag must be set on a socket on NT 4.0 for non-blocking mode to work. This shouldn't matter here, as the system is W2k, and because we used the socket API to create the socket (right?). Regards, Martin From martin at v.loewis.de Sun Apr 9 10:10:01 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 10:10:01 +0200 Subject: [Python-Dev] Subversion downtime today Message-ID: <4438C159.3090905@v.loewis.de> I will need to turn off subversion write access for an hour or so today, from about 18:00 GMT to 19:00 GMT. During that time, I will load the stackless svn dumpfile into the repository. I don't expect problems, but if there are any, I want to be able to revert to a backup, without losing any unrelated commits. Regards, Martin From martin at v.loewis.de Sun Apr 9 10:44:47 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 10:44:47 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <442C2321.9000105@python.net> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> Message-ID: <4438C97F.50905@v.loewis.de> Thomas Heller wrote: >>> return Py_BuildValue("HHHHs", >>> ver.dwMajorVersion, >>> ver.dwMinorVersion, >>> ver.dwBuildNumber, >>> ver.dwPlatformId, >>> ver.szCSDVersion); >>> >>> The crash disappears if I change the first parameter in the >>> Py_BuildValue call to "LLLLs". No idea why. >>> With this change, I can start the exe without a crash, but >>> sys.versioninfo starts with (IIRC) (2, 0, 5,...). >> Very strange. What is your compiler version (first line of cl /?)? I have looked into this. In the latest SDK (2003 SP1), Microsoft has changed the include structure; there are no separate amd64 subdirectories anymore. Then, cl.exe was picking up the wrong stdarg.h (the one of VS 2003), which would not work for AMD64. I have corrected that in vsextcomp, but I will need to check a few more things before releasing it. 
Regards, Martin From thomas at python.org Sun Apr 9 15:30:44 2006 From: thomas at python.org (Thomas Wouters) Date: Sun, 9 Apr 2006 15:30:44 +0200 Subject: [Python-Dev] int()'s ValueError behaviour Message-ID: <9e804ac0604090630i32f795a7jd9afd9b0365477eb@mail.gmail.com> Someone on IRC (who refuses to report bugs on sourceforge, so I guess he wants to remain anonymous) came with this very amusing bug: int(), when raising ValueError, doesn't quote (or repr(), rather) its arguments: >>> int("") Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: invalid literal for int(): >>> int("34\n\n\n5") Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: invalid literal for int(): 34 5 >>> Unicode behaviour also isn't always consistent: >>> int(u'\u0100') Traceback (most recent call last): File "<stdin>", line 1, in ? UnicodeEncodeError: 'decimal' codec can't encode character u'\u0100' in position 0: invalid decimal Unicode string >>> int(u'\u09ec', 6) Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: invalid literal for int(): 6 And trying to use the 'decimal' codec directly: >>> u'6'.encode('decimal') Traceback (most recent call last): File "<stdin>", line 1, in ? LookupError: unknown encoding: decimal I'm not sure if the latter problems are fixable, but the former should be fixed by passing the argument to ValueError through repr(), I think. It's also been suggested (by the reporter, and I agree) that the actual base should be in the errormessage too. Is there some reason not to do this that I've overlooked? -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060409/9b123b2e/attachment.htm From p.f.moore at gmail.com Sun Apr 9 16:35:05 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 9 Apr 2006 15:35:05 +0100 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <79990c6b0604090735g63378e39me8268ac3d9d44a57@mail.gmail.com> On 4/8/06, Brett Cannon <brett at python.org> wrote: > Anyway, here is a list of the packages that I think have outside > maintenance (or at least have been at some point). Anyone who has > info on them that I need, please let me know the details. Also, if I > missed any, obviously speak up: I think there's still some confusion over optparse/optik. There was a recent thread about a feature in optik which hadn't made it into optparse - but maybe that's just a synchronisation problem. Paul. From pje at telecommunity.com Sun Apr 9 17:20:42 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 09 Apr 2006 11:20:42 -0400 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> At 02:47 PM 4/8/2006 -0700, Brett Cannon wrote: > Do people think we need to document the version that has >been imported into Python and when that was done as well? Definitely. Better yet, in 2.5 I'd like to have the version information embedded in a PKG-INFO file that gets installed with Python, so that setuptools can detect the package's presence. 
(This is particularly important for cytpes, sqlite, and ElementTree, which were added in 2.5 and thus may be listed in package dependencies for 2.3/2.4 programs.) Unfortunately, I haven't quite figured out how to ensure that the PKG-INFO files get built, because there's only one setup.py in the Python build process at the moment. It would be good if we could have separate setup.py files for "external" libraries, not only because of the ability to get PKG-INFO installed, but also because then OS vendors that split those externals out into separate system packages wouldn't need to jump through as many hoops to build just a single external library like ElementTree or ctypes. From guido at python.org Sun Apr 9 17:20:01 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 9 Apr 2006 08:20:01 -0700 Subject: [Python-Dev] int()'s ValueError behaviour In-Reply-To: <9e804ac0604090630i32f795a7jd9afd9b0365477eb@mail.gmail.com> References: <9e804ac0604090630i32f795a7jd9afd9b0365477eb@mail.gmail.com> Message-ID: <ca471dc20604090820t51a2a4aatb28da79bdf7a141f@mail.gmail.com> Go ahead and fix it. This was probably never changed since 1990 or so... Do expect some code brakage where people rely on the old behavior. :-( --Guido On 4/9/06, Thomas Wouters <thomas at python.org> wrote: > > Someone on IRC (who refuses to report bugs on sourceforge, so I guess he > wants to remain anonymous) came with this very amusing bug: int(), when > raising ValueError, doesn't quote (or repr(), rather) its arguments: > > >>> int("") > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ValueError: invalid literal for int(): > >>> int("34\n\n\n5") > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ValueError: invalid literal for int(): 34 > > > 5 > >>> > > Unicode behaviour also isn't always consistent: > >>> int(u'\u0100') > Traceback (most recent call last): > File "<stdin>", line 1, in ? > UnicodeEncodeError: 'decimal' codec can't encode character u'\u0100' in > position 0: invalid decimal Unicode string > >>> int(u'\u09ec', 6) > Traceback (most recent call last): > File "<stdin>", line 1, in ? > ValueError: invalid literal for int(): 6 > > And trying to use the 'decimal' codec directly: > >>> u'6'.encode('decimal') > Traceback (most recent call last): > File "<stdin>", line 1, in ? > LookupError: unknown encoding: decimal > > I'm not sure if the latter problems are fixable, but the former should be > fixed by passing the argument to ValueError through repr(), I think. It's > also been suggested (by the reporter, and I agree) that the actual base > should be in the errormessage too. Is there some reason not to do this that > I've overlooked? > > -- > Thomas Wouters <thomas at python.org> > > Hi! I'm a .signature virus! copy me into your .signature file to help me > spread! > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From michele.simionato at gmail.com Sun Apr 9 18:39:11 2006 From: michele.simionato at gmail.com (Michele Simionato) Date: Sun, 9 Apr 2006 16:39:11 +0000 (UTC) Subject: [Python-Dev] threadless brownian.py Message-ID: <loom.20060409T183822-466@post.gmane.org> Recently I downloaded Python 2.5a1 and I have started playing with it. 
In doing so, I have been looking at the Demo directory in the distribution, to check if demos of the new features have been added there. So, I rediscovered brownian.py, in Demo/tkinter/guido. I just love this little program, because it reminds myself of one my first programs, a long long time ago (a brownian motion in AmigaBasic, with sprites!). It is also one of the first programs I looked at, when I started studying threads four years ago and I thought it was perfect. However, nowadays I know better and I have realized that brownian.py is perfect textbook example of a case where you don't really need threads, and you can use generators instead. So I thought it would be nice to add a threadless version of brownian.py in the Demo directory. Here it is. If you like it, I donate the code to the PSF! ---------------------------- # Brownian motion -- an example of a NON multi-threaded Tkinter program ;) from Tkinter import * import random import sys WIDTH = 400 HEIGHT = 300 SIGMA = 10 BUZZ = 2 RADIUS = 2 LAMBDA = 10 FILL = 'red' stop = 0 # Set when main loop exits root = None # main window def particle(canvas): # particle = iterator over the moves r = RADIUS x = random.gauss(WIDTH/2.0, SIGMA) y = random.gauss(HEIGHT/2.0, SIGMA) p = canvas.create_oval(x-r, y-r, x+r, y+r, fill=FILL) while not stop: dx = random.gauss(0, BUZZ) dy = random.gauss(0, BUZZ) try: canvas.move(p, dx, dy) except TclError: break else: yield None def move(particle): # move the particle at random time particle.next() dt = random.expovariate(LAMBDA) root.after(int(dt*1000), move, particle) def main(): global root, stop root = Tk() canvas = Canvas(root, width=WIDTH, height=HEIGHT) canvas.pack(fill='both', expand=1) np = 30 if sys.argv[1:]: np = int(sys.argv[1]) for i in range(np): # start the dance move(particle(canvas)) try: root.mainloop() finally: stop = 1 if __name__ == '__main__': main() From martin at v.loewis.de Sun Apr 9 19:56:01 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 19:56:01 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> References: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> Message-ID: <44394AB1.8090709@v.loewis.de> Phillip J. Eby wrote: > It would be good if we could have separate setup.py files for "external" > libraries, not only because of the ability to get PKG-INFO installed, but > also because then OS vendors that split those externals out into separate > system packages wouldn't need to jump through as many hoops to build just a > single external library like ElementTree or ctypes. -1. These aren't external libraries; they are part of Python. There are many build options (such as linking them statically through Modules/Setup, or building them as dynamic libraries using Modules/Setup, or building them through setup.py); adding more files would increase confusion. If you want additional files generated and installed, additional code can be put into setup.py to generate them. Regards, Martin From martin at v.loewis.de Sun Apr 9 20:42:57 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 20:42:57 +0200 Subject: [Python-Dev] Subversion repository back up Message-ID: <443955B1.7090706@v.loewis.de> I just completed the repository import; write access to the projects repository is back. 
Regards, Martin From pje at telecommunity.com Sun Apr 9 20:48:47 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 09 Apr 2006 14:48:47 -0400 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <44394AB1.8090709@v.loewis.de> References: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> At 07:56 PM 4/9/2006 +0200, Martin v. L?wis wrote: >Phillip J. Eby wrote: > > It would be good if we could have separate setup.py files for "external" > > libraries, not only because of the ability to get PKG-INFO installed, but > > also because then OS vendors that split those externals out into separate > > system packages wouldn't need to jump through as many hoops to build > just a > > single external library like ElementTree or ctypes. > >-1. These aren't external libraries; they are part of Python. They *were* external libraries. Also, many OS vendors nonetheless split the standard library into different system packages, e.g. Debian's longstanding tradition of excising the distutils into a separate python-dev package. As much as we might wish that vendors not do these things, they often have practical matters of continuity and documentation to deal with; if they currently have a "python-ctypes" package, for example, they may wish to maintain that even when ctypes is bundled with 2.5. > There are >many build options (such as linking them statically through >Modules/Setup, or building them as dynamic libraries using >Modules/Setup, or building them through setup.py); adding more files >would increase confusion. I was hoping that we had reached a point where setup.py could simply become the One Obvious Way to do it, in which case the One Obvious Way to bundle formerly-external libraries would be to bundle their setup scripts. This is particularly helpful for external libraries that still maintain an external distribution for older Python versions. >If you want additional files generated and installed, additional code >can be put into setup.py to generate them. Of course it can, and that will certainly solve the issues for packages with dependencies on ctypes, ElementTree, and setuptools. But I was hoping for a more modular approach, one that would be more amenable to the needs of external library maintainers, and of system packagers who split even the core stdlib into multiple packages. Those packagers will have to figure out what parts of the new setup.py to extract and what to keep in order to do their splitting. And the people who develop the libraries will have to maintain two unrelated setup.py files. From martin at v.loewis.de Sun Apr 9 21:00:05 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 09 Apr 2006 21:00:05 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> References: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> Message-ID: <443959B5.5010808@v.loewis.de> Phillip J. Eby wrote: > I was hoping that we had reached a point where setup.py could simply > become the One Obvious Way to do it, in which case the One Obvious Way > to bundle formerly-external libraries would be to bundle their setup > scripts. Well, setup.py cannot (yet?) 
replace the other mechanisms. It is certainly the default, yes. > Of course it can, and that will certainly solve the issues for packages > with dependencies on ctypes, ElementTree, and setuptools. But I was > hoping for a more modular approach, one that would be more amenable to > the needs of external library maintainers, and of system packagers who > split even the core stdlib into multiple packages. Those packagers will > have to figure out what parts of the new setup.py to extract and what to > keep in order to do their splitting. And the people who develop the > libraries will have to maintain two unrelated setup.py files. I can't envision how that would work, or what problems it would solve. Hence, I personally can't work on any of this. System packagers already have mechanisms for splitting the Python distribution into several packages. They typically run the entire build process, building all modules, and then put the various files into different packages. I don't think they have ever asked for better support of this process, and I doubt that additional setup.py will help them - they just apply the process that the where using all the time to the new release, and it will continue to work fine. Regards, Martin From mal at egenix.com Sun Apr 9 22:27:34 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 09 Apr 2006 22:27:34 +0200 Subject: [Python-Dev] int()'s ValueError behaviour In-Reply-To: <ca471dc20604090820t51a2a4aatb28da79bdf7a141f@mail.gmail.com> References: <9e804ac0604090630i32f795a7jd9afd9b0365477eb@mail.gmail.com> <ca471dc20604090820t51a2a4aatb28da79bdf7a141f@mail.gmail.com> Message-ID: <44396E36.40300@egenix.com> Guido van Rossum wrote: > Go ahead and fix it. This was probably never changed since 1990 or > so... Do expect some code brakage where people rely on the old > behavior. :-( > > --Guido > > On 4/9/06, Thomas Wouters <thomas at python.org> wrote: >> Someone on IRC (who refuses to report bugs on sourceforge, so I guess he >> wants to remain anonymous) came with this very amusing bug: int(), when >> raising ValueError, doesn't quote (or repr(), rather) its arguments: >> >> >>> int("") >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> ValueError: invalid literal for int(): >>>>> int("34\n\n\n5") >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> ValueError: invalid literal for int(): 34 >> >> >> 5 >> Unicode behaviour also isn't always consistent: >>>>> int(u'\u0100') >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> UnicodeEncodeError: 'decimal' codec can't encode character u'\u0100' in >> position 0: invalid decimal Unicode string >>>>> int(u'\u09ec', 6) >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> ValueError: invalid literal for int(): 6 >> >> And trying to use the 'decimal' codec directly: >>>>> u'6'.encode('decimal') >> Traceback (most recent call last): >> File "<stdin>", line 1, in ? >> LookupError: unknown encoding: decimal This part I can explain: the internal decimal codec isn't made public through the codec registry since it only supports encoding. The encoder converts a Unicode decimal strings to plain ASCII decimals. The error message looks like a standard codec error message because the raise_encode_exception() API is used. >> I'm not sure if the latter problems are fixable, but the former should be >> fixed by passing the argument to ValueError through repr(), I think. 
It's >> also been suggested (by the reporter, and I agree) that the actual base >> should be in the errormessage too. Is there some reason not to do this that >> I've overlooked? >> >> -- >> Thomas Wouters <thomas at python.org> >> >> Hi! I'm a .signature virus! copy me into your .signature file to help me >> spread! >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> >> > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mal%40egenix.com -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 09 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From andrew-pythondev at puzzling.org Mon Apr 10 01:35:57 2006 From: andrew-pythondev at puzzling.org (Andrew Bennetts) Date: Mon, 10 Apr 2006 09:35:57 +1000 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> References: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> Message-ID: <20060409233557.GA29986@steerpike.home.puzzling.org> On Sun, Apr 09, 2006 at 02:48:47PM -0400, Phillip J. Eby wrote: > At 07:56 PM 4/9/2006 +0200, Martin v. L?wis wrote: [...] > >-1. These aren't external libraries; they are part of Python. > > They *were* external libraries. Also, many OS vendors nonetheless split > the standard library into different system packages, e.g. Debian's > longstanding tradition of excising the distutils into a separate python-dev > package. Debian has fixed this bug. The entire standard library, including distutils, is now in the python2.4 package. The python2.4-dev package mainly contains header files now. This has been true for some time; it's certainly the case in the current stable release. > As much as we might wish that vendors not do these things, they often have > practical matters of continuity and documentation to deal with; if they > currently have a "python-ctypes" package, for example, they may wish to > maintain that even when ctypes is bundled with 2.5. They can do that just by shipping an empty "python-ctypes" package that depends on the full python package. -Andrew. From anthony at interlink.com.au Mon Apr 10 02:41:45 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Mon, 10 Apr 2006 10:41:45 +1000 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> References: <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> Message-ID: <200604101041.48990.anthony@interlink.com.au> On Monday 10 April 2006 04:48, Phillip J. Eby wrote: > They *were* external libraries. 
Also, many OS vendors nonetheless > split the standard library into different system packages, e.g. > Debian's longstanding tradition of excising the distutils into a > separate python-dev package. Ubuntu (a debian derivative) and I _think_ Debian proper has fixed this now. Well, I'd be suprised if Debian proper hasn't fixed it as well, as the same person packages both. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From pje at telecommunity.com Mon Apr 10 03:07:23 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 09 Apr 2006 21:07:23 -0400 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <20060409233557.GA29986@steerpike.home.puzzling.org> References: <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409111505.01e20888@mail.telecommunity.com> <5.1.1.6.0.20060409143705.01e3b4a8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060409210452.03799350@mail.telecommunity.com> At 09:35 AM 4/10/2006 +1000, Andrew Bennetts wrote: >On Sun, Apr 09, 2006 at 02:48:47PM -0400, Phillip J. Eby wrote: > > At 07:56 PM 4/9/2006 +0200, Martin v. L?wis wrote: >[...] > > >-1. These aren't external libraries; they are part of Python. > > > > They *were* external libraries. Also, many OS vendors nonetheless split > > the standard library into different system packages, e.g. Debian's > > longstanding tradition of excising the distutils into a separate > python-dev > > package. > >Debian has fixed this bug. And there was much rejoicing. :) > > As much as we might wish that vendors not do these things, they often have > > practical matters of continuity and documentation to deal with; if they > > currently have a "python-ctypes" package, for example, they may wish to > > maintain that even when ctypes is bundled with 2.5. > >They can do that just by shipping an empty "python-ctypes" package that >depends >on the full python package. Yeah, but why do something that logical and simple when you can create elaborate patches to remove functionality from setup.py? ;) But you've convinced me. I'd still prefer we generate these packages' PKG-INFO from the "formerly external" packages' setup.py files in order to ensure the metadata is correct. But if we have to do it manually, we have to do it manually. From guido at python.org Mon Apr 10 03:34:05 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 9 Apr 2006 18:34:05 -0700 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line Message-ID: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> On Linux, In HEAD 2.5, but only with the non-debug version, I get a segfault when I do this: >>> ''' ... ''' It seems to occur for any triple-quoted string crossing a line boundary. A bit of the stack trace: #0 0x40030087 in pthread_mutex_lock () from /lib/i686/libpthread.so.0 #1 0x4207ad18 in free () from /lib/i686/libc.so.6 #2 0x08057990 in tok_nextc (tok=0x81c71d8) at ../Parser/tokenizer.c:809 #3 0x0805872d in tok_get (tok=0x81c71d8, p_start=0xbffff338, p_end=0xbffff33c) at ../Parser/tokenizer.c:1411 #4 0x08059042 in PyTokenizer_Get (tok=0x81c71d8, p_start=0xbffff338, p_end=0xbffff33c) at ../Parser/tokenizer.c:1514 #5 0x080568a7 in parsetok (tok=0x81c71d8, g=0x814a000, start=256, err_ret=0xbffff3a0, flags=0) at ../Parser/parsetok.c:135 Does this ring a bell? Is there already an SF bug open perhaps? 
On OSX, I get an interesting error: python2.5(12998) malloc: *** Deallocation of a pointer not malloced: 0x36b460; This could be a double free(), or free() called with the middle of an allocated block; Try setting environment variable MallocHelp to see tools to help debug -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.peters at gmail.com Mon Apr 10 04:44:29 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 9 Apr 2006 22:44:29 -0400 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line In-Reply-To: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> References: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> Message-ID: <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> [Guido] > On Linux, In HEAD 2.5, but only with the non-debug version, I get a > segfault when I do this: > > >>> ''' > ... ''' > > It seems to occur for any triple-quoted string crossing a line > boundary. A bit of the stack trace: > > #0 0x40030087 in pthread_mutex_lock () from /lib/i686/libpthread.so.0 > #1 0x4207ad18 in free () from /lib/i686/libc.so.6 > #2 0x08057990 in tok_nextc (tok=0x81c71d8) at ../Parser/tokenizer.c:809 > #3 0x0805872d in tok_get (tok=0x81c71d8, p_start=0xbffff338, p_end=0xbffff33c) > at ../Parser/tokenizer.c:1411 > #4 0x08059042 in PyTokenizer_Get (tok=0x81c71d8, p_start=0xbffff338, > p_end=0xbffff33c) at ../Parser/tokenizer.c:1514 > #5 0x080568a7 in parsetok (tok=0x81c71d8, g=0x814a000, start=256, > err_ret=0xbffff3a0, flags=0) at ../Parser/parsetok.c:135 > > > Does this ring a bell? Is there already an SF bug open perhaps? > > On OSX, I get an interesting error: > > python2.5(12998) malloc: *** Deallocation of a pointer not malloced: > 0x36b460; This could be a double free(), or free() called with the > middle of an allocated block; Try setting environment variable > MallocHelp to see tools to help debug It rings a bell here only in that the front end had lots of allocate-versus-free mismatches between the PyObject_ and PyMem_ raw-memory APIs, and this kind of failure smells a lot like that. For example, the ../Parser/tokenizer.c:809 in the traceback is PyMem_FREE(new); and _one_ way to set `new` is from the earlier well-hidden else if (tok_stdin_decode(tok, &new) != 0) where tok_stdin_decode() can do PyMem_FREE(*inp); *inp = converted; where `inp` is its local name for `new`, and `converted` comes from converted = new_string(PyString_AS_STRING(utf8), PyString_GET_SIZE(utf8)); and new_string() starts with char* result = (char *)PyObject_MALLOC(len + 1); So that's a mismatch, although I don't know whether it's the one that's triggering. When I repaired all the mismatches that caused tests to crash on my box, I changed affected front-end string mucking to use PyObject_ uniformly (strings are usual small, the small-object allocator is usually faster than the platform malloc, and half (exactly half :-) of the crash-causing mismatched pairs were using PyObject_ anyway). Someone who understands their way through the sub-maze above is encouraged to do the same for it. BTW, your "but only with the non-debug version" is more evidence: in a debug build, PyMem_ and PyObject_ calls are all redirected to Python's obmalloc, to take advantage of its debug-build padding gimmicks. It's only in a release build that PyMem_ resolves directly to the platform malloc/realloc/free. 
From nnorwitz at gmail.com Mon Apr 10 05:31:44 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 9 Apr 2006 20:31:44 -0700 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line In-Reply-To: <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> References: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> Message-ID: <ee2a432c0604092031r5ea5b689kc00834dc023bc598@mail.gmail.com> On 4/9/06, Tim Peters <tim.peters at gmail.com> wrote: > [Guido] > > On Linux, In HEAD 2.5, but only with the non-debug version, I get a > > segfault when I do this: > > > > >>> ''' > > ... ''' > > It rings a bell here only in that the front end had lots of > allocate-versus-free mismatches between the PyObject_ and PyMem_ > raw-memory APIs, and this kind of failure smells a lot like that. http://python.org/sf/1467512 fixes the problem for me on linux. It converts all the PyMem_* APIs to PyObject_* APIs. Assigned to Guido until he changes that. :-) There are several more places in the core that should probably use PyObject_* memory APIs since the alloced memory is small. 412 uses of PyMem_* in */*.c. Most of those are in modules where it is probably appropriate. But PyFutureFeatures could really use PyObject_* given it's only 8 bytes. (Python/future.c and Python/compile.c). Modules/_bsddb.c has a scary line: #define PyObject_Del PyMem_DEL n From tim.peters at gmail.com Mon Apr 10 06:13:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 10 Apr 2006 00:13:18 -0400 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line In-Reply-To: <ee2a432c0604092031r5ea5b689kc00834dc023bc598@mail.gmail.com> References: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> <ee2a432c0604092031r5ea5b689kc00834dc023bc598@mail.gmail.com> Message-ID: <1f7befae0604092113s565d36bfmb24eee79da67f3b3@mail.gmail.com> [Neal Norwitz] > http://python.org/sf/1467512 fixes the problem for me on linux. It > converts all the PyMem_* APIs to PyObject_* APIs. Assigned to Guido > until he changes that. :-) Thanks! I didn't take that route, instead not changing anything I didn't take the time to understand first. For example, it's a pure loss to change tok_new() to use PyObject_ (as the patch does), since a struct tok_state is far too big for obmalloc to handle (e.g., the `indstack` member alone is too big -- that's an array of 100 ints, so consumes at least 400 bytes by itself). Using PyObject_ here just wastes some time figuring out that tok_state is too big for it, and passes the calls on to the system malloc/free. I thought about switching the readline thingies at the time, but quickly hit a wall: I have no idea where the source for #ifdef __VMS extern char* vms__StdioReadline(FILE *sys_stdin, FILE *sys_stdout, char *prompt); #endif may be (it's apparently not checked in, and Google couldn't find it either). Does anyone still work on the VMS port? If not, screw it ;-) > There are several more places in the core that should probably use > PyObject_* memory APIs since the alloced memory is small. 412 uses of > PyMem_* in */*.c. Most of those are in modules where it is probably > appropriate. But PyFutureFeatures could really use PyObject_* given > it's only 8 bytes. (Python/future.c and Python/compile.c). 
If we allocate one of those per PyAST_Compile, I'm not sure the difference would be measurable ;-) > Modules/_bsddb.c has a scary line: #define PyObject_Del PyMem_DEL That one's fine -- it's protected by an appropriate #ifdef: #if PYTHON_API_VERSION <= 1007 /* 1.5 compatibility */ #define PyObject_New PyObject_NEW #define PyObject_Del PyMem_DEL #endif The name "PyObject_Del" didn't exist in Pythons that old, so "something like that" is necessary to write C that works with all Pythons ever released. From martin at v.loewis.de Mon Apr 10 08:40:12 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 08:40:12 +0200 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line In-Reply-To: <1f7befae0604092113s565d36bfmb24eee79da67f3b3@mail.gmail.com> References: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> <ee2a432c0604092031r5ea5b689kc00834dc023bc598@mail.gmail.com> <1f7befae0604092113s565d36bfmb24eee79da67f3b3@mail.gmail.com> Message-ID: <4439FDCC.6010805@v.loewis.de> Tim Peters wrote: > #ifdef __VMS > extern char* vms__StdioReadline(FILE *sys_stdin, FILE *sys_stdout, > char *prompt); > #endif > > may be (it's apparently not checked in, and Google couldn't find it > either). Does anyone still work on the VMS port? If not, screw it > ;-) I would have no concerns with breaking the VMS port. If it is broken, we will either receive patches, or it stays broken. Regards, Martin From martin at v.loewis.de Mon Apr 10 09:32:52 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 09:32:52 +0200 Subject: [Python-Dev] segfault (double free?) when '''-string crosses line In-Reply-To: <443A0240.60005@laposte.net> References: <ca471dc20604091834k2ed39ae6ob1787e7190557c12@mail.gmail.com> <1f7befae0604091944o43055893w709d655b9c0762fd@mail.gmail.com> <ee2a432c0604092031r5ea5b689kc00834dc023bc598@mail.gmail.com> <1f7befae0604092113s565d36bfmb24eee79da67f3b3@mail.gmail.com> <4439FDCC.6010805@v.loewis.de> <443A0240.60005@laposte.net> Message-ID: <443A0A24.8030803@v.loewis.de> Jean-Fran?ois Pi?ronne wrote: > I (and a few others guys) work on the VMS port,also, HP has publish in > the VMS technical journal an article about Python on VMS. > What do you need? In the specific case: an answer to Tim Peter's question (where is vms__StdioReadline, and how can its memory allocator be changed, and why is it not checked into the Python source tree?) >> I would have no concerns with breaking the VMS port. If it is broken, >> we will either receive patches, or it stays broken. >> > > Why, there is world outside Windows, I have concerns with breaking the > VMS port, we can send patches but the last one was still on hold. Basically because of lack of maintenance. Can you please remind me what patch is still on hold? Searching for open patches with "VMS" in their summary gives no matches. Does Python 2.5a1 even build on VMS, out of the box (i.e. without additional patches?) Regards, Martin From gh at ghaering.de Mon Apr 10 15:00:09 2006 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Mon, 10 Apr 2006 15:00:09 +0200 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite Message-ID: <443A56D9.4010602@ghaering.de> Posting here because I don't know a better place: Federico di Gregorio and me have both faxed PSF contributor agreements to the PSF for integration of pysqlite into Python 2.5. 
-- Gerhard From martin at v.loewis.de Mon Apr 10 15:06:03 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 15:06:03 +0200 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <443A56D9.4010602@ghaering.de> References: <443A56D9.4010602@ghaering.de> Message-ID: <443A583B.5040608@v.loewis.de> Gerhard H?ring wrote: > Posting here because I don't know a better place: > > Federico di Gregorio and me have both faxed PSF contributor agreements > to the PSF for integration of pysqlite into Python 2.5. Thanks! I should make the list of names of people available somewhere which have signed the agreement, so that committers can check each time whether the is available. Regards, Martin From anthony at interlink.com.au Mon Apr 10 16:22:57 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Tue, 11 Apr 2006 00:22:57 +1000 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <443A583B.5040608@v.loewis.de> References: <443A56D9.4010602@ghaering.de> <443A583B.5040608@v.loewis.de> Message-ID: <200604110023.03034.anthony@interlink.com.au> As far as I know, I've never signed one. I probably should, or is there some grandfathering rule for people who've been contributing from before the new agreement came in? From martin at v.loewis.de Mon Apr 10 16:56:05 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 16:56:05 +0200 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <200604110023.03034.anthony@interlink.com.au> References: <443A56D9.4010602@ghaering.de> <443A583B.5040608@v.loewis.de> <200604110023.03034.anthony@interlink.com.au> Message-ID: <443A7205.8020908@v.loewis.de> Anthony Baxter wrote: > As far as I know, I've never signed one. I probably should, or is > there some grandfathering rule for people who've been contributing > from before the new agreement came in? No - those people will have to fill out the agreement covering past contributions also: http://www.python.org/psf/contrib-form-python.html And yes, you are right - you haven't filed one, so far. Regards, Martin From theller at python.net Mon Apr 10 18:00:53 2006 From: theller at python.net (Thomas Heller) Date: Mon, 10 Apr 2006 18:00:53 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <443A8135.1090301@python.net> Brett Cannon wrote: > OK, I am going to write the PEP I proposed a week or so ago, listing > all modules and packages within the stdlib that are maintained > externally so we have a central place to go for contact info or where > to report bugs on issues. This should only apply to modules that want > bugs reported outside of the Python tracker and have a separate dev > track. People who just use the Python repository as their mainline > version can just be left out. > > For each package I need the name of the module, the name of the > maintainer, homepage of the module outside of Python, and where to > report bugs. Do people think we need to document the version that has > been imported into Python and when that was done as well? > > Anyway, here is a list of the packages that I think have outside > maintenance (or at least have been at some point). Anyone who has > info on them that I need, please let me know the details. 
Also, if I > missed any, obviously speak up: > > - bsddb (still external?) > - sqlite > - ctypes Maintainer: Thomas Heller Homepage: http://starship.python.net/crew/theller/ctypes/ Where to report bugs: I do not care. I'll try to keep track of both the python bug tracker and the ctypes tracker on the SF pages. Thomas From anthony at interlink.com.au Mon Apr 10 18:36:50 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Tue, 11 Apr 2006 02:36:50 +1000 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <443A7205.8020908@v.loewis.de> References: <443A56D9.4010602@ghaering.de> <200604110023.03034.anthony@interlink.com.au> <443A7205.8020908@v.loewis.de> Message-ID: <200604110236.53171.anthony@interlink.com.au> On Tuesday 11 April 2006 00:56, Martin v. L?wis wrote: > No - those people will have to fill out the agreement covering past > contributions also: > > http://www.python.org/psf/contrib-form-python.html > > And yes, you are right - you haven't filed one, so far. Righto - I will try to get to this sometime this week. Is it worth sending out emails to as many people as can be found to try to rustle up more of these? It's unclear to me just how important these are to the state of Python. From martin at v.loewis.de Mon Apr 10 18:37:50 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 18:37:50 +0200 Subject: [Python-Dev] I'm not getting email from SF when assignedabug/patch In-Reply-To: <bbaeab100604031255q1ff0224btdfbd71684e04325@mail.gmail.com> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <e0h4vs$26r$1@sea.gmane.org> <442C1C7E.4050502@v.loewis.de> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> <e0pj6g$onr$1@sea.gmane.org> <e0rg7c$cjg$1@sea.gmane.org> <bbaeab100604031255q1ff0224btdfbd71684e04325@mail.gmail.com> Message-ID: <443A89DE.6080501@v.loewis.de> Brett Cannon wrote: > Can someone (Martin, Barry?) post this on python.org (I don't think > this necessarily needs to be put into svn and I don't have any access > but svn) so Fredrik can free up the space on his server? Did I ever respond to that? I put the file on http://svn.python.org/snapshots/ Regards, Martin From martin at v.loewis.de Mon Apr 10 18:51:18 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 18:51:18 +0200 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <200604110236.53171.anthony@interlink.com.au> References: <443A56D9.4010602@ghaering.de> <200604110023.03034.anthony@interlink.com.au> <443A7205.8020908@v.loewis.de> <200604110236.53171.anthony@interlink.com.au> Message-ID: <443A8D06.9080605@v.loewis.de> Anthony Baxter wrote: > Righto - I will try to get to this sometime this week. Is it worth > sending out emails to as many people as can be found to try to rustle > up more of these? It's unclear to me just how important these are to > the state of Python. I think I twice mailed everybody in Misc/ACKS. In principle, we want to have agreements from everybody who ever contributed, so that we can formally change the license (and so that it is clear to Python users what the legal standing is). Some Python users really begged the PSF to get this process started; they were happy when it did get started. 
If you want to ping people: sure; I can send you the list of forms received so far. Regards, Martin From pje at telecommunity.com Mon Apr 10 19:03:02 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 10 Apr 2006 13:03:02 -0400 Subject: [Python-Dev] PEP 302 support for traceback, inspect, site, warnings, doctest, and linecache Message-ID: <5.1.1.6.0.20060410120339.01e305e8@mail.telecommunity.com> Here's my plan for implementing PEP 302 support (``__loader__`` sensitivity) for the above modules: * Change all of the ``linecache`` API functions (except ``clearcache()``) to add an optional ``module_globals`` argument, which they will use to obtain the ``__name__`` and ``__loader__`` in the event that the given filename doesn't exist, but appears to be the source file for the given module. (That is, if the basename of the filename is consistent with the last part of the ``__name__`` in the module_globals, if ``sys.modules[__name__].__dict__ is module_globals``, and the ``__loader__`` (if any) in module_globals has a ``get_source()`` method.) * Change ``inspect.getsourcefile()`` to return the source path as long as the object's module has a ``__loader__`` with a ``get_source()`` method. This will ensure that other ``inspect`` functions will still try to load the source code for the object in question, even if it's in a zipfile or loaded by some other import mechanism. * Change ``inspect.findsource()`` to pass the target object's module globals as an extra argument to ``linecache.getlines()`` * Change the ``traceback`` module to supply frame globals to the ``linecache`` APIs it uses. * Change the ``site`` module to not "absolutize" ``__file__`` attributes of modules with ``__loader__`` attributes; ``__loader__`` objects are responsible for their own ``__file__`` values. (Actually, if this is desirable behavior, the builtin import machinery should be doing it, not site.py!) * Add an optional ``module_globals`` argument to ``warnings.warn_explicit()``, which will be passed through to ``linecache.getlines()`` if the warning is not ignored. (This will prime the line cache in case the warning is to be output.) * Change ``warnings.warn()`` to pass the appropriate globals to ``warn_explicit()``. * Change ``doctest.testfile()`` and ``DocTestSuite`` to use the target module's ``__loader__.get_data()`` method (if available) to load the given doctest file. * Change ``__patched_linecache_getlines()`` in ``doctest`` to accept an optional ``module_globals`` argument, to be passed through to the original ``getlines()`` function. I'm starting work on the above now. If I finish early enough today, I'll follow up with plans for fixing ``pydoc``, which requires slightly more extensive surgery, since it doesn't even support packages with multiple ``__path__`` entries yet, let alone PEP 302. Please let me know if you have any questions or concerns. From skip at pobox.com Mon Apr 10 19:21:44 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 10 Apr 2006 12:21:44 -0500 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <443A8135.1090301@python.net> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <443A8135.1090301@python.net> Message-ID: <17466.37928.191007.359419@montanaro.dyndns.org> Brett Cannon wrote: > OK, I am going to write the PEP I proposed a week or so ago, listing > all modules and packages within the stdlib that are maintained > externally so we have a central place to go for contact info or where > to report bugs on issues. 
Based on the recent interchange regarding VMS, perhaps you should add port maintainers to that PEP as well. Skip From steven.bethard at gmail.com Mon Apr 10 19:22:53 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 10 Apr 2006 11:22:53 -0600 Subject: [Python-Dev] DRAFT: python-dev summary for 2006-02-01 to 2006-02-15 Message-ID: <d11dcfba0604101022xdf05d53qe21b060a51b574ef@mail.gmail.com> Sorry about the delay folks. Here's the summary for the first half of February. I'm going to try to get the ones out for the second half of February and first half of March shortly. Please send me any comments/corrections! ============= Announcements ============= ----------------------------- QOTF: Quotes of the Fortnight ----------------------------- We had a plethora (yes, I did just say plethora) of quotable quotes this fortnight. Martin v. L?wis on the `lambda keyword`_: I believe that usage of a keyword with the name of a Greek letter also contributes to people considering something broken. Raymond Hettinger on the `learnability of Python`_: A language suitable for beginners should be easy to learn, but it should not leave them permanently crippled... To misquote Einstein: The language should be as simple as possible, but no simpler. Robert Brewer on `Pythonic syntax`_: Community consensus on syntax is a pipe dream. .. _lambda keyword: http://mail.python.org/pipermail/python-dev/2006-February/060389.html .. _learnability of Python: http://mail.python.org/pipermail/python-dev/2006-February/060420.html .. _Pythonic syntax: http://mail.python.org/pipermail/python-dev/2006-February/060556.html [SJB] -------------------- Release plan for 2.5 -------------------- `PEP 356`_ lists the release plan for Python 2.5. Check it out for the latest feature updates and planned release dates. .. _PEP 356: http://www.python.org/dev/peps/pep-0356/ Contributing threads: - `release plan for 2.5 ? <http://mail.python.org/pipermail/python-dev/2006-February/060493.html>`__ - `2.5 release schedule <http://mail.python.org/pipermail/python-dev/2006-February/060982.html>`__ - `2.5 PEP <http://mail.python.org/pipermail/python-dev/2006-February/060985.html>`__ [SJB] ---------------------------- lsprof available as cProfile ---------------------------- Armin Rigo finished his integration of the lsprof profiler. It's now available as the cProfile module which exposes the same interface as profile. Contributing thread: - `cProfile module <http://mail.python.org/pipermail/python-dev/2006-February/060479.html>`__ [SJB] --------------------- ssize_t branch merged --------------------- Martin v. L?wis merged in the ssize_t branch (`PEP 353`_). All you folks on 64 bit machines should now be able to index sequences using your full address space. Enjoy! .. _PEP 353: http://www.python.org/dev/peps/pep-0353/ Contributing threads: - `ssize_t status (Was: release plan for 2.5 ?) <http://mail.python.org/pipermail/python-dev/2006-February/060714.html>`__ - `ssize_t branch (Was: release plan for 2.5 ?) <http://mail.python.org/pipermail/python-dev/2006-February/060810.html>`__ - `ssize_t branch merged <http://mail.python.org/pipermail/python-dev/2006-February/061073.html>`__ [SJB] ========= Summaries ========= ------------------------------------------------------ Rumors of lambda's death have been greatly exaggerated ------------------------------------------------------ Guido's finally given in -- the lambda expression will stay in Python 3.0. 
Of course, this didn't stave off another massively long thread discussing the issue, but Guido finally killed that by providing a pretty exhaustive list of why we should keep lambda as it is: * No purely `syntactic change to lambda`_ is clearly a net gain over the current syntax * It's perfectly fine that Python's lambda is different from Lisp's * Lambda current binding behavior is (correctly) exactly the same as a def statement * Allowing a block inside a lambda is never going to work because of the need to indent the block .. _syntactic change to lambda: http://wiki.python.org/moin/AlternateLambdaSyntax Contributing threads: - `any support for a methodcaller HOF? <http://mail.python.org/pipermail/python-dev/2006-February/060341.html>`__ - `Let's just *keep* lambda <http://mail.python.org/pipermail/python-dev/2006-February/060415.html>`__ - `Let's send lambda to the shearing shed (Re: Let's just *keep* lambda) <http://mail.python.org/pipermail/python-dev/2006-February/060583.html>`__ [SJB] -------------- The bytes type -------------- Guido asked for an update to `PEP 332`_, which proposed a ``bytes`` type. This spawned a massive discussion about what the bytes type should look like and how it should interact with strings, especially in Python 3.0 when all strings would be unicode. Pretty much everyone agreed that bytes objects should be mutable sequences of ints in the range(0, 256). Guido and others were also generally against a b'...' literal for bytes, as that would confusingly suggest that bytes objects were text-like, which they wouldn't be. There was a fair bit of haggling over the signature of the bytes constructor, but it seemed towards the end of the discussion that people were coming to an agreement on ``bytes(initializer [,encoding])``, where ``initializer`` could be a sequence of ints or a str or unicode instance, and where ``encoding`` would be an error for a sequence of ints, ignored for a str instance, and the encoding for a unicode instance. Some people argued that the encoding argument was unnecessary as unicode objects could be encoded using their .encode() method, but Guido was concerned that, at least before Python 3.0 when .encode() would return bytes objects, the multiple copying (one for the encode, one for the bytes object creation) would require too much overhead. The default encoding for unicode objects was also a contentious point, with people suggesting ASCII, Latin-1, and the system default encoding. Guido argued that since Python strings don't know the encoding that was used to create them, and since a programmer who typed in Latin-1 would expect Latin-1 as the encoding and a programmer who typed in UTF-8 would expect UTF-8 as the encoding, then the only sensible solution was to encode unicode objects using the system default encoding (which is pretty much always ASCII). It seemed like an official PEP for the bytes type would be forthcoming. .. _PEP 332: http://www.python.org/peps/pep-0332.html Contributing threads: - `PEP 332 revival in coordination with pep 349? [ Was:Re: release plan for 2.5 ?] <http://mail.python.org/pipermail/python-dev/2006-February/060752.html>`__ - `bytes type discussion <http://mail.python.org/pipermail/python-dev/2006-February/060930.html>`__ - `byte literals unnecessary [Was: PEP 332 revival in coordination with pep 349?] 
<http://mail.python.org/pipermail/python-dev/2006-February/060944.html>`__ - `bytes type needs a new champion <http://mail.python.org/pipermail/python-dev/2006-February/061058.html>`__ [SJB] -------------- Octal literals -------------- Mattheww proposed removing octal literals, making 0640 a SyntaxError, and modifying functions like os.chmod to take string arguments instead, e.g. ``os.chmod(path, "0640")``. This led to a discussion about how necessary octal literals actually are -- the only use case mentioned in the thread was os.chmod() -- and how to improve their syntax. For octal literals people seemed to prefer an ``0o`` or ``0c`` prefix along the lines of the ``0x`` prefix, possibly introducing an analogous ``0b`` as binary literal syntax at the same time. People also suggested a number of syntaxes for expressing numeric literals in any base, but these seemed like overkill for the problem at hand. Contributing threads: - `Octal literals <http://mail.python.org/pipermail/python-dev/2006-January/060262.html>`__ - `Octal literals <http://mail.python.org/pipermail/python-dev/2006-February/060277.html>`__ [SJB] --------------------------------------------------- PEP 357: Allowing Any Object to be Used for Slicing --------------------------------------------------- Travis Oliphant presented `PEP 357`_ to address some issues with the sequence protocol. In Python 2.4 and earlier, if you defined an integer-like type, there was no way of letting Python use instances of your type in sequence indexing. And Python couldn't just call the ``__int__`` slot because then floats could be inappropriately used as indices, e.g. ``range(10)[5.7]`` would be ``5``. To solve this problem, `PEP 357`_ proposed adding an ``__index__`` slot that would be called when a sequence was given a non-integer in a slice. Floats would not define ``__index__`` because they could not be guaranteed to produce an exact integer, but exact integers like those in numpy_ could define this method and then be allowable as sequence indices. .. _PEP 357: http://www.python.org/peps/pep-0357.html .. _numpy: http://www.numpy.org/ Contributing threads: - `PEP for adding an sq_index slot so that any object, a or b, can be used in X[a:b] notation <http://mail.python.org/pipermail/python-dev/2006-February/060594.html>`__ - `Please comment on PEP 357 -- adding nb_index slot to PyNumberMethods <http://mail.python.org/pipermail/python-dev/2006-February/060973.html>`__ [SJB] ----------------------------- const declarations in CPython ----------------------------- To avoid some deprecation warnings generated by C++, Jeremy Hylton had added ``const`` to a few CPython APIs that took ``char*`` but were typically called by passing string literals. This generated compiler errors when ``char**`` variables were passed to PyArg_ParseTupleAndKeywords. Initially, people thought this could be solved by changing ``const char **`` declarations to ``const char * const *`` declarations, but while this was fine for C++, it did not solve the problem for C. The end result was to remove the ``const`` declaration from the ``char **`` variables, but not the ``char *`` variables. Contributing thread: - `Baffled by PyArg_ParseTupleAndKeywords modification <http://mail.python.org/pipermail/python-dev/2006-February/060689.html>`__ [SJB] -------------------- asynchat and threads -------------------- Mark Edgington presented a patch adding "thread safety" to asynchat. 
Most people seemed to think mixing threads and asynchat was a bad idea, and the thread drifted off to talking about exracting a subset of Twisted to add to the stdlib (as a replacement for asynchat and asyncore). People seemed generally in favor of this (if the subset was reasonably small) but no one stepped forward to extract that subset. Contributing thread: - `threadsafe patch for asynchat <http://mail.python.org/pipermail/python-dev/2006-February/060469.html>`__ [SJB] --------------------------------------------------- Approximate equality between floating point numbers --------------------------------------------------- To make floating-point math easier for newbies, Alex Martelli proposed math.areclose() which would take two floating point numbers with optional tolerances and indicate whether or not they were "equal". With a similar motivation, Smith proposed a math.nice() function which would round floating point numbers in the same way that str() does. Raymond Hettinger and others expressed concern over the stated use case and suggested that hiding floating point details would only make learning their intricacies later more difficult. A few people suggested that math.areclose() might also be useful for experienced floating-point users, but it seemed that the proposal probably didn't have enough strength to get into the stdlib. Contributing threads: - `math.areclose ...? <http://mail.python.org/pipermail/python-dev/2006-February/060413.html>`__ - `nice() <http://mail.python.org/pipermail/python-dev/2006-February/060811.html>`__ - `[Tutor] nice() <http://mail.python.org/pipermail/python-dev/2006-February/060814.html>`__ [SJB] ----------------------------- Eliminating compiler warnings ----------------------------- Thomas Wouters noticed a few warnings on amd64, and asked if Python developers should strive to remove all warnings even if they're harmless. Tim Peters was definitely in favor of eliminating all warnings whenever possible, and so had Thomas Wouters unnecessarily (for other compilers at least) initialize a variable to silence the warning. Contributing threads: - `Compiler warnings <http://mail.python.org/pipermail/python-dev/2006-January/060253.html>`__ - `Compiler warnings <http://mail.python.org/pipermail/python-dev/2006-February/060280.html>`__ [SJB] --------------------------------- Adding syntactic support for sets --------------------------------- Greg Wilson asked about providing syntactic support for sets literals and set comprehensions, e.g. ``{1, 2, 3, 4, 5}`` and ``{z for z in x if (z % 2)}``. The set comprehension suggestion was pretty quickly shot down as it wasn't deemed to be much of an improvement over a generator expression in the set constructor, e.g. ``set(z for z in x if (z % 2))``. The question of syntactic support for set literals sparked a little more of a discussion. Some people initially suggested that syntactic support for set literals was unnecessary as the set constructor could be expanded, e.g ``set(1, 2, 3, 4, 5)``. However this proposal would not be backwards compatible as it would be unclear whether ``set('title')`` should be a set of one element or five. Others questioned whether the syntactically supported set literal should create a set() or a frozenset(). No one had a good answer for this latter question, and the rest of the thread drifted off into a miscellany of syntax suggestions. 
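For concreteness, the spellings being weighed look roughly like this.  The
snippet runs on today's 2.4/2.5 interpreters; the braced forms in the
comments are only the *proposed* literal syntax, and the variable names are
invented for the example::

    x = range(10)

    # Today's spellings:
    odds = set(z for z in x if (z % 2))    # proposed: {z for z in x if (z % 2)}
    small = set([1, 2, 3, 4, 5])           # proposed: {1, 2, 3, 4, 5}

    # The ambiguity that argued against a var-args set() constructor:
    # should set('title') mean the one-element set(['title']) or the set of
    # the string's characters?  Today it unambiguously means the latter.
    print set('title')                     # e.g. set(['i', 'e', 'l', 't'])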
Contributing thread:

- `syntactic support for sets <http://mail.python.org/pipermail/python-dev/2006-February/060307.html>`__

[SJB]

-------------------------------
FD_SETSIZE in Windows and POSIX
-------------------------------

Revision 42253 introduced a bug in Windows sockets by assuming that a file descriptor's numerical value is always less than FD_SETSIZE, and checking for this. That assumption holds on POSIX systems, where fd_sets are just big bit vectors indexed by descriptor value, so FD_SETSIZE bounds both a descriptor's magnitude and the maximum number of distinct descriptors a set can hold. On Windows, however, FD_SETSIZE only limits how many sockets an fd_set can hold, and socket handles themselves can be arbitrarily large. A #define, Py_DONT_CHECK_FD_SETSIZE, was introduced so that the check could be skipped on Windows machines.

Contributing thread:

- `Pervasive socket failures on Windows <http://mail.python.org/pipermail/python-dev/2006-February/060666.html>`__

[SJB]

-----------------------------
Iterators and __length_hint__
-----------------------------

In Python 2.4, a number of iterators had grown __len__ methods to allow for some optimizations (like allocating enough space to create a list out of them). Guido had asked for these methods to be removed and hidden instead as an implementation detail. Originally, Raymond had used the name ``_length_cue``, but he was convinced instead to use the name ``__length_hint__``.

Contributing thread:

- `_length_cue() <http://mail.python.org/pipermail/python-dev/2006-February/060524.html>`__

[SJB]

-----------------------------------------------
Linking with msvcrt instead of the compiler CRT
-----------------------------------------------

Martin v. Löwis suggested that on Windows, instead of linking Python with the CRT of the compiler used to compile Python, Python should be linked with msvcrt, the CRT that is part of the operating system. Martin hoped that this would get rid of problems where extension modules had to be compiled with the same compiler as the Python with which they were to run. Unfortunately, after further investigation, Martin had to withdraw the proposal as the platform SDK doesn't provide an import library for msvcrt.dll and msvcrt is documented as being intended only for "system components".

Contributing thread:

- `Linking with mscvrt <http://mail.python.org/pipermail/python-dev/2006-February/060490.html>`__

[SJB]

-------------------------------------------
Opening text and binary files in Python 3.0
-------------------------------------------

Guido indicated that in Python 3.0 there should be two open() functions, one for opening text files and one for opening binary files. The .read*() methods of text files would return unicode objects, while the .read*() methods of binary files would return bytes objects. People then suggested a variety of names for these functions including:

* opentext() and openbytes()
* text.open() and bytes.open()
* file.text() and file.bytes()

Guido didn't like the tight coupling of data types and IO libraries present in the latter two. He also expressed a preference for using open() instead of opentext(), since it was the more common use case. Of course, modifying open() in this way would be backwards incompatible and would thus have to wait until Python 3.0.
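The names above are only proposals from the thread, not an actual API. As a rough sketch of the split being discussed, in terms of what Python 2.x already offers (``codecs.open()`` returns a file-like object whose ``.read()`` produces unicode; the utf-8 default below is an arbitrary choice)::

    import codecs

    def opentext(path, mode='r', encoding='utf-8'):
        # Sketch only: a "text file" whose .read*() methods return unicode.
        return codecs.open(path, mode, encoding)

    def openbytes(path, mode='rb'):
        # Sketch only: a "binary file" whose .read*() methods return raw
        # bytes (a str in Python 2.x, the proposed bytes type in 3.0).
        return open(path, mode)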
Contributing thread:

- `str object going in Py3K <http://mail.python.org/pipermail/python-dev/2006-February/060896.html>`__

[SJB]

--------------------------------------------
Adding additional bdist_* installation types
--------------------------------------------

Phillip Eby suggested adding bdist_deb, bdist_msi, and friends to the standard library in Python 2.5. People seemed generally in favor of this, though there was some discussion as to whether or not bdist_egg should also be included. The thread then trailed off into a discussion of the various installation behaviors on different platforms.

Contributing thread:

- `bdist_* to stdlib? <http://mail.python.org/pipermail/python-dev/2006-February/060869.html>`__

[SJB]

-------------------------------------
URL for the development documentation
-------------------------------------

Georg Brandl pointed out that while http://docs.python.org/dev/ holds the current development documentation, http://www.python.org/dev/doc/devel was still available. Fred Drake indicated that it was definitely preferable to use the automatically updated docs (http://docs.python.org/dev/) but that having them on docs.python.org seemed wrong -- docs.python.org was established so that queries could be made for the current stable documentation without old docs or development docs showing up. Fred then proposed that the current stable documentation go up at http://www.python.org/doc/current/ and the current development documentation go up at http://www.python.org/dev/doc/. Guido thought that http://docs.python.org/ could possibly disappear in the new `python.org redesign`_.

.. _python.org redesign: http://beta.python.org

Contributing threads:

- `http://www.python.org/dev/doc/devel still available <http://mail.python.org/pipermail/python-dev/2006-February/060825.html>`__
- `moving content around (Re: http://www.python.org/dev/doc/devel still available) <http://mail.python.org/pipermail/python-dev/2006-February/060839.html>`__

[SJB]

------------------------------------------------
PEP 355: Path - Object oriented filesystem paths
------------------------------------------------

Björn Lindqvist updated `PEP 355`_ which proposes adding the path module to replace some of the functions in os, shutil, etc. At Guido's request, the use of ``/`` and ``//`` operators for path concatenation was removed, and a number of redundancies brought up in the last Path PEP discussion were addressed (though some redundancies, e.g. ``.name`` and ``.basename()`` seem to still be present).

.. _PEP 355: http://www.python.org/peps/pep-0355.html

Contributing threads:

- `The path module PEP <http://mail.python.org/pipermail/python-dev/2006-February/060304.html>`__
- `Path PEP and the division operator <http://mail.python.org/pipermail/python-dev/2006-February/060385.html>`__
- `Path PEP: some comments <http://mail.python.org/pipermail/python-dev/2006-February/060397.html>`__
- `Path PEP -- a couple of typos. <http://mail.python.org/pipermail/python-dev/2006-February/060399.html>`__

[SJB]

-----------------------------------------
Rejection of PEP 351: The freeze protocol
-----------------------------------------

Raymond Hettinger's strong opposition to `PEP 351`_, the freeze protocol, finally won out, and Guido rejected the PEP.

..
_PEP 351: http://www.python.org/peps/pep-0351.html Contributing thread: - `PEP 351 <http://mail.python.org/pipermail/python-dev/2006-February/060789.html>`__ [SJB] -------------------------------------- PEP 338 - Executing Modules as Scripts -------------------------------------- Nick Coghlan updated `PEP 338`_ to comply with the import system of `PEP 302`_. The updated PEP includes a ``run_module()`` function which will execute the supplied module as if it were a script (e.g. with __name__ == "__main__"). Since it supports the `PEP 302`_ import system, it properly handles modules in both packages and zip files. .. _PEP 302: http://www.python.org/peps/pep-0302.html .. _PEP 338: http://www.python.org/peps/pep-0338.html Contributing thread: - `PEP 338 - Executing Modules as Scripts <http://mail.python.org/pipermail/python-dev/2006-February/060760.html>`__ [SJB] ----------------------------------------------- Getting sources for a particular Python release ----------------------------------------------- David Abrahams wanted to get the Python-2.4.2 sources from SVN but couldn't figure out which tag to check out. The right answer for him was http://svn.python.org/projects/python/tags/r242/ and in general it seemed that the r<major><minor><micro> tags corresponded to the <major>.<minor>.<micro> release. Contributing thread: - `How to get the Python-2.4.2 sources from SVN? <http://mail.python.org/pipermail/python-dev/2006-February/060770.html>`__ [SJB] --------------------------------------- PEP for \*args and \*\*kwargs unpacking --------------------------------------- Thomas Wouters asked if people would be interested in a PEP to generalize the use of \*args and \*\*kwargs to unpacking, e.g.: * ``['a', 'b', *iterable, 'c']`` * ``a, b, *rest = range(5)`` * ``{**defaults, **args, 'fixedopt': 1}`` * ``spam(*args, 3, **kwargs, spam='extra', eggs='yes')`` People definitely wanted to see a PEP, as these or similar suggestions have been raised a number of times. The time-table would likely be for Python 2.6 though, so no PEP has yet been made available. Contributing thread: - `Generalizing *args and **kwargs <http://mail.python.org/pipermail/python-dev/2006-February/061011.html>`__ [SJB] ================ Deferred Threads ================ - `The decorator(s) module <http://mail.python.org/pipermail/python-dev/2006-February/060759.html>`__ - `C AST to Python discussion <http://mail.python.org/pipermail/python-dev/2006-February/060994.html>`__ - `bytes.from_hex() [Was: PEP 332 revival in coordination with pep 349?] <http://mail.python.org/pipermail/python-dev/2006-February/061037.html>`__ - `how bugfixes are handled? <http://mail.python.org/pipermail/python-dev/2006-February/061067.html>`__ ================== Previous Summaries ================== - `Extension to ConfigParser <http://mail.python.org/pipermail/python-dev/2006-February/060278.html>`__ - `[PATCH] Fix dictionary subclass semantics whenused as global dictionaries <http://mail.python.org/pipermail/python-dev/2006-February/060438.html>`__ =============== Skipped Threads =============== - `YAML (was Re: Extension to ConfigParser) <http://mail.python.org/pipermail/python-dev/2006-February/060275.html>`__ - `webmaster at python.org failing sender verification. 
<http://mail.python.org/pipermail/python-dev/2006-February/060300.html>`__ - `ctypes patch (was: (libffi) Re: Copyright issue) <http://mail.python.org/pipermail/python-dev/2006-February/060327.html>`__ - `ctypes patch <http://mail.python.org/pipermail/python-dev/2006-February/060332.html>`__ - `Weekly Python Patch/Bug Summary <http://mail.python.org/pipermail/python-dev/2006-February/060468.html>`__ - `Any interest in tail call optimization as a decorator? <http://mail.python.org/pipermail/python-dev/2006-February/060478.html>`__ - `Help with Unicode arrays in NumPy <http://mail.python.org/pipermail/python-dev/2006-February/060482.html>`__ - `small floating point number problem <http://mail.python.org/pipermail/python-dev/2006-February/060507.html>`__ - `Old Style Classes Goiung in Py3K <http://mail.python.org/pipermail/python-dev/2006-February/060516.html>`__ - `Make error on solaris 9 x86 - error: parse error before "upad128_t" <http://mail.python.org/pipermail/python-dev/2006-February/060517.html>`__ - `Python modules should link to libpython <http://mail.python.org/pipermail/python-dev/2006-February/060533.html>`__ - `Help on choosing a PEP to volunteer on it : 308, 328 or 343 <http://mail.python.org/pipermail/python-dev/2006-February/060564.html>`__ - `email 3.1 for Python 2.5 using PEP 8 module names <http://mail.python.org/pipermail/python-dev/2006-February/060579.html>`__ - `[BULK] Python-Dev Digest, Vol 31, Issue 37 <http://mail.python.org/pipermail/python-dev/2006-February/060585.html>`__ - `py3k and not equal; re names <http://mail.python.org/pipermail/python-dev/2006-February/060595.html>`__ - `Post-PyCon PyPy Sprint: February 27th - March 2nd 2006 <http://mail.python.org/pipermail/python-dev/2006-February/060688.html>`__ - `compiler.pyassem <http://mail.python.org/pipermail/python-dev/2006-February/060717.html>`__ - `To know how to set "pythonpath" <http://mail.python.org/pipermail/python-dev/2006-February/060773.html>`__ - `Where to put "post-it notes"? <http://mail.python.org/pipermail/python-dev/2006-February/060777.html>`__ - `file.next() vs. file.readline() <http://mail.python.org/pipermail/python-dev/2006-February/060781.html>`__ - `Fwd: Ruby/Python Continuations: Turning a block callback into a read()-method ? <http://mail.python.org/pipermail/python-dev/2006-February/060813.html>`__ - `PEP 343: Context managers a superset of decorators? <http://mail.python.org/pipermail/python-dev/2006-February/060816.html>`__ - `Missing PyCon 2006 <http://mail.python.org/pipermail/python-dev/2006-February/060873.html>`__ - `how to upload new MacPython web page? <http://mail.python.org/pipermail/python-dev/2006-February/060983.html>`__ - `A codecs nit (was Re: bytes.from_hex()) <http://mail.python.org/pipermail/python-dev/2006-February/061070.html>`__ From pje at telecommunity.com Mon Apr 10 20:41:50 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 10 Apr 2006 14:41:50 -0400 Subject: [Python-Dev] Failing "inspect" test: test_getargspec_sublistofone Message-ID: <5.1.1.6.0.20060410142340.01e21bb0@mail.telecommunity.com> I seem to recall some Python-dev discussion about this particular behavior, but can't find it in Google. Is this test currently known to be failing? If so, why, and what's the plan for it? 
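The exact expectations of the failing test are not quoted above; judging from its name, it exercises ``inspect.getargspec()`` on a function taking a Python 2.x "sublist" (tuple-unpacking) parameter. A minimal sketch of that kind of call::

    import inspect

    def sublist_of_one((foo,)):        # nested "sublist" parameter, 2.x only
        return foo

    def plain(a, b=1, *args, **kw):
        return a

    # getargspec() returns (args, varargs, varkw, defaults); nested
    # parameters show up as nested lists in the args element.
    print inspect.getargspec(plain)            # (['a', 'b'], 'args', 'kw', (1,))
    print inspect.getargspec(sublist_of_one)   # args element should be [['foo']]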
From jjl at pobox.com Mon Apr 10 20:29:45 2006 From: jjl at pobox.com (John J Lee) Date: Mon, 10 Apr 2006 18:29:45 +0000 (UTC) Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <443A8D06.9080605@v.loewis.de> References: <443A56D9.4010602@ghaering.de> <200604110023.03034.anthony@interlink.com.au> <443A7205.8020908@v.loewis.de> <200604110236.53171.anthony@interlink.com.au> <443A8D06.9080605@v.loewis.de> Message-ID: <Pine.LNX.4.64.0604101827070.8326@alice> On Mon, 10 Apr 2006, "Martin v. L?wis" wrote: > I think I twice mailed everybody in Misc/ACKS. In principle, we want > to have agreements from everybody who ever contributed, so that we > can formally change the license (and so that it is clear to Python > users what the legal standing is). [...] Not sure if it's just me, but I'm in that list, and I'm pretty sure I neither received an email nor faxed a contributor agreement (and my email address hasn't changed for years). John From pje at telecommunity.com Mon Apr 10 21:34:45 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 10 Apr 2006 15:34:45 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? Message-ID: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> Is anybody else getting this? Python 2.5a1 (trunk:45237, Apr 10 2006, 15:25:33) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pdb >>> def x(): ... if 'a' in 'b': ... pass ... >>> pdb.run("x()") > <string>(1)<module>() (Pdb) s --Call-- > <stdin>(1)x() (Pdb) s > <stdin>(2)x() (Pdb) s Segmentation fault It usually happens within a few 's' operations in pdb. From jeremy at alum.mit.edu Mon Apr 10 21:39:44 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 10 Apr 2006 15:39:44 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> Message-ID: <e8bf7a530604101239g1ac3025dra66153682e649189@mail.gmail.com> On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > Is anybody else getting this? Neal had originally reported that test_trace failed with a segfault, and it's essentially exercising the same code. I don't see a failure there or here at the moment. If there is a bug, though, it's likely to be in the line number table that the new compiler generates. > > Python 2.5a1 (trunk:45237, Apr 10 2006, 15:25:33) > [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> import pdb > >>> def x(): > ... if 'a' in 'b': > ... pass > ... > >>> pdb.run("x()") > > <string>(1)<module>() > (Pdb) s > --Call-- > > <stdin>(1)x() > (Pdb) s > > <stdin>(2)x() > (Pdb) s > Segmentation fault > > It usually happens within a few 's' operations in pdb. >>> def x(): ... if 'a' in 'b': ... pass ... [34945 refs] >>> pdb.run('x()') > <string>(1)<module>()->None (Pdb) s --Call-- > <stdin>(1)x() (Pdb) s --Return-- > <stdin>(1)x()->None (Pdb) s --Return-- > <string>(1)<module>()->None (Pdb) s [35023 refs] >>> [35023 refs] [11168 refs] Will try with a non-debug build soon. Jeremy From jeremy at alum.mit.edu Mon Apr 10 21:43:05 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Mon, 10 Apr 2006 15:43:05 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? 
In-Reply-To: <e8bf7a530604101239g1ac3025dra66153682e649189@mail.gmail.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <e8bf7a530604101239g1ac3025dra66153682e649189@mail.gmail.com> Message-ID: <e8bf7a530604101243h14c2e182lf4e2282915dbd7d8@mail.gmail.com> 4On 4/10/06, Jeremy Hylton <jeremy at alum.mit.edu> wrote: > On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > Is anybody else getting this? > > Neal had originally reported that test_trace failed with a segfault, > and it's essentially exercising the same code. I don't see a failure > there or here at the moment. If there is a bug, though, it's likely > to be in the line number table that the new compiler generates. > > > > > > Python 2.5a1 (trunk:45237, Apr 10 2006, 15:25:33) > > [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> import pdb > > >>> def x(): > > ... if 'a' in 'b': > > ... pass > > ... > > >>> pdb.run("x()") > > > <string>(1)<module>() > > (Pdb) s > > --Call-- > > > <stdin>(1)x() > > (Pdb) s > > > <stdin>(2)x() > > (Pdb) s > > Segmentation fault > > > > It usually happens within a few 's' operations in pdb. > > > >>> def x(): > ... if 'a' in 'b': > ... pass > ... > [34945 refs] > >>> pdb.run('x()') > > <string>(1)<module>()->None > (Pdb) s > --Call-- > > <stdin>(1)x() > (Pdb) s > --Return-- > > <stdin>(1)x()->None > (Pdb) s > --Return-- > > <string>(1)<module>()->None > (Pdb) s > [35023 refs] > >>> > [35023 refs] > [11168 refs] > > Will try with a non-debug build soon. I don't see it in a non-debug build either. Python 2.5a1 (trunk:43632M, Apr 10 2006, 15:41:31) [GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. Jeremy From thomas at python.org Mon Apr 10 21:47:09 2006 From: thomas at python.org (Thomas Wouters) Date: Mon, 10 Apr 2006 21:47:09 +0200 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> Message-ID: <9e804ac0604101247u71babd6cs6d68042575e2e911@mail.gmail.com> On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > Is anybody else getting this? > <stdin>(2)x() > (Pdb) s > Segmentation fault I'm not able to reproduce this in 32bit or 64bit mode (debian unstable.) Does 'make distclean' before configure/compile fix it? If not, can you unlimit coredumpsize and check to see what it crashes on? -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060410/df7315de/attachment.html From martin at v.loewis.de Mon Apr 10 22:08:45 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 22:08:45 +0200 Subject: [Python-Dev] PSF Contributor Agreement for pysqlite In-Reply-To: <Pine.LNX.4.64.0604101827070.8326@alice> References: <443A56D9.4010602@ghaering.de> <200604110023.03034.anthony@interlink.com.au> <443A7205.8020908@v.loewis.de> <200604110236.53171.anthony@interlink.com.au> <443A8D06.9080605@v.loewis.de> <Pine.LNX.4.64.0604101827070.8326@alice> Message-ID: <443ABB4D.9000606@v.loewis.de> John J Lee wrote: > On Mon, 10 Apr 2006, "Martin v. L?wis" wrote: >> I think I twice mailed everybody in Misc/ACKS. 
In principle, we want >> to have agreements from everybody who ever contributed, so that we >> can formally change the license (and so that it is clear to Python >> users what the legal standing is). > [...] > > Not sure if it's just me, but I'm in that list, and I'm pretty sure I > neither received an email nor faxed a contributor agreement (and my > email address hasn't changed for years). Ah, ok. I misremembered - I only mailed the committers at the time. I *meant* to contact everybody in Misc/ACKS, but never got to do so. Regards, Martin From martin at v.loewis.de Mon Apr 10 22:17:27 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 10 Apr 2006 22:17:27 +0200 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> Message-ID: <443ABD57.5020402@v.loewis.de> Phillip J. Eby wrote: > Is anybody else getting this? I can't reproduce this, on Debian, gcc 4.0.3, trunk:45237. Regards, Martin From pje at telecommunity.com Mon Apr 10 22:41:31 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 10 Apr 2006 16:41:31 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <9e804ac0604101247u71babd6cs6d68042575e2e911@mail.gmail.com > References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> At 09:47 PM 4/10/2006 +0200, Thomas Wouters wrote: >On 4/10/06, Phillip J. Eby ><<mailto:pje at telecommunity.com>pje at telecommunity.com> wrote: >>Is anybody else getting this? > >> > <stdin>(2)x() >>(Pdb) s >>Segmentation fault > >I'm not able to reproduce this in 32bit or 64bit mode (debian unstable.) >Does 'make distclean' before configure/compile fix it? Nope. > If not, can you unlimit coredumpsize and check to see what it crashes on? #0 0x401bbaa4 in _int_free () from /lib/libc.so.6 #1 0x401baa3c in free () from /lib/libc.so.6 #2 0x080a253e in builtin_raw_input (self=0x0, args=0x4050862c) at Python/bltinmodule.c:1759 It appears the problem is an object/mem mismatch: both PyOS_Readline in pgenmain.c, and PyOS_StdioReadline use PyObject_MALLOC, but bltinmodule.c is freeing the pointer with PyMem_FREE. Just to add to the confusion, by the way, the "readline" module dynamically sets PyOS_ReadlineFunctionPointer to call_readline, which does its allocation with PyMem_MALLOC... So does anybody know what the protocol should be for readline functions? What about readline implementations that are "in the field", so to speak? (e.g. modules that use the readline hook to implement that functionality using something other than GNU readline). From nnorwitz at gmail.com Mon Apr 10 22:44:10 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 10 Apr 2006 13:44:10 -0700 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> Message-ID: <ee2a432c0604101344l25b8df8fv56bdbd061bb2e48b@mail.gmail.com> On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > It appears the problem is an object/mem mismatch: both PyOS_Readline in > pgenmain.c, and PyOS_StdioReadline use PyObject_MALLOC, but bltinmodule.c > is freeing the pointer with PyMem_FREE. 
This (Readline using PyObject) was due to my recent change to fix Guido's problem last night. I didn't realize anything seeped out. All calls to PyOS_StdioReadline would need to be updated. I can do that tonight. Hmm, that means this will be an API change. I wonder if I should revert my fix and just deal with a second alloc and copy of the data to fix Guido's original problem. n From tim.peters at gmail.com Mon Apr 10 22:57:15 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 10 Apr 2006 16:57:15 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> Message-ID: <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> [Phillip J. Eby] > ... > #0 0x401bbaa4 in _int_free () from /lib/libc.so.6 > #1 0x401baa3c in free () from /lib/libc.so.6 > #2 0x080a253e in builtin_raw_input (self=0x0, args=0x4050862c) at > Python/bltinmodule.c:1759 > > It appears the problem is an object/mem mismatch: both PyOS_Readline in > pgenmain.c, and PyOS_StdioReadline use PyObject_MALLOC, That wasn't true yesterday, but Neal changed it while you slept ;-). He changed it to worm around a different case of object/mem mismatch. > but bltinmodule.c is freeing the pointer with PyMem_FREE. > > Just to add to the confusion, by the way, the "readline" module dynamically > sets PyOS_ReadlineFunctionPointer to call_readline, which does its > allocation with PyMem_MALLOC... > > So does anybody know what the protocol should be for readline > functions? What about readline implementations that are "in the field", so > to speak? (e.g. modules that use the readline hook to implement that > functionality using something other than GNU readline). It's documented (after a fashion) at the declaration of PyOS_ReadlineFunctionPointer. Yesterday that read: """ /* By initializing this function pointer, systems embedding Python can override the readline function. Note: Python expects in return a buffer allocated with PyMem_Malloc. */ char *(*PyOS_ReadlineFunctionPointer)(FILE *, FILE *, char *); """ Overnight, "PyMem_Malloc" there changed to "PyObject_Malloc". It didn't matter in practice before (2.5) because all PyXYZ_ABC ways to spell "free the memory" resolved to PyObject_Free. Now that PyMem_ and PyObject_ call different things, mismatches are deadly. Since the only docs we had said PyMem_Malloc must be used for the readline function, best to stick with that. It's messy, though (there are a lot of functions that think they're in charge of freeing the memory, and the memory can originally come from a lot of places). From trentm at ActiveState.com Mon Apr 10 23:35:43 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 10 Apr 2006 14:35:43 -0700 Subject: [Python-Dev] updating PyExpat (Was: need info for externally maintained modules PEP) In-Reply-To: <4438B4E1.8040806@v.loewis.de> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <4438B4E1.8040806@v.loewis.de> Message-ID: <20060410213543.GD18402@activestate.com> [Martin v. Loewis wrote] > Brett Cannon wrote: > > - expat > > Not sure whether you mean the Expat parser proper here, or the pyexpat > module: both are externally maintained, also; pyexpat is part of PyXML. > OTOH, people can just consider PyXML to be part of Python, and I > synchronize the Python sources with the PyXML sources from time to time. 
I was going to be updating Modules/expat/... to Expat 2.0 relatively soon. Must I then go via the PyXML folks to do this update then or can I checkin to Python's SVN directly? Trent -- Trent Mick TrentM at ActiveState.com From nnorwitz at gmail.com Tue Apr 11 00:00:19 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 10 Apr 2006 15:00:19 -0700 Subject: [Python-Dev] Failing "inspect" test: test_getargspec_sublistofone In-Reply-To: <5.1.1.6.0.20060410142340.01e21bb0@mail.telecommunity.com> References: <5.1.1.6.0.20060410142340.01e21bb0@mail.telecommunity.com> Message-ID: <ee2a432c0604101500t2642c976yedbe9f66a6572238@mail.gmail.com> On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > I seem to recall some Python-dev discussion about this particular behavior, > but can't find it in Google. Is this test currently known to be > failing? If so, why, and what's the plan for it? Only failing for 2.4 IIRC: http://python.org/sf/1459159 n From guido at python.org Tue Apr 11 00:06:52 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 10 Apr 2006 15:06:52 -0700 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> Message-ID: <ca471dc20604101506y767b0c2cv1ecb0d12414d11b0@mail.gmail.com> On 4/10/06, Tim Peters <tim.peters at gmail.com> wrote: > It's documented (after a fashion) at the declaration of > PyOS_ReadlineFunctionPointer. Yesterday that read: > > """ > /* By initializing this function pointer, systems embedding Python can > override the readline function. > > Note: Python expects in return a buffer allocated with PyMem_Malloc. */ > > char *(*PyOS_ReadlineFunctionPointer)(FILE *, FILE *, char *); > """ > > Overnight, "PyMem_Malloc" there changed to "PyObject_Malloc". It > didn't matter in practice before (2.5) because all PyXYZ_ABC ways to > spell "free the memory" resolved to PyObject_Free. Now that PyMem_ > and PyObject_ call different things, mismatches are deadly. Since > the only docs we had said PyMem_Malloc must be used for the readline > function, best to stick with that. It's messy, though (there are a > lot of functions that think they're in charge of freeing the memory, > and the memory can originally come from a lot of places). Shouldn't it at least match call_readline() in Modules/readline.c, which uses PyMem_Malloc()? Also, since it's really a char array, I don't see the point of using something with "Object" in its name. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Tue Apr 11 00:32:56 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 11 Apr 2006 00:32:56 +0200 Subject: [Python-Dev] updating PyExpat (Was: need info for externally maintained modules PEP) In-Reply-To: <20060410213543.GD18402@activestate.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <4438B4E1.8040806@v.loewis.de> <20060410213543.GD18402@activestate.com> Message-ID: <443ADD18.9040806@v.loewis.de> Trent Mick wrote: > I was going to be updating Modules/expat/... to Expat 2.0 relatively > soon. Must I then go via the PyXML folks to do this update then or can I > checkin to Python's SVN directly? Please check in directly. I'm going to update PyXML some time. 
Regards, Martin

From tim.peters at gmail.com Tue Apr 11 02:01:55 2006
From: tim.peters at gmail.com (Tim Peters)
Date: Mon, 10 Apr 2006 20:01:55 -0400
Subject: [Python-Dev] pdb segfaults in 2.5 trunk?
In-Reply-To: <ca471dc20604101506y767b0c2cv1ecb0d12414d11b0@mail.gmail.com>
References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> <ca471dc20604101506y767b0c2cv1ecb0d12414d11b0@mail.gmail.com>
Message-ID: <1f7befae0604101701r24ce6171pb83903cfd366debd@mail.gmail.com>

[Guido]
> Shouldn't it at least match call_readline() in Modules/readline.c,
> which uses PyMem_Malloc()?

I'm not sure what "it" means there, but, like I said, it's messy regardless. That's why we ended up with a large number of mismatches to begin with: there are many allocation and free'ing sites, spread over several modules, and these don't follow nice patterns. call_readline() is just one of the ways the memory might be allocated to begin with, and they can all end up on the same lines of code that try to free memory.

> Also, since it's really a char array, I don't see the point of using something
> with "Object" in its name.

The PyObject_ memory family is generally faster and more memory-efficient for small allocations than the PyMem_ memory family. Lines of source code, and encoding strings, are usually small enough to exploit that. The "ob" in obmalloc.c doesn't really have anything to do with objects either. PyMem_SmallMalloc (etc) may have been better names (although I doubt that ;-)).

From steven.bethard at gmail.com Tue Apr 11 02:03:16 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Mon, 10 Apr 2006 18:03:16 -0600
Subject: [Python-Dev] DRAFT: python-dev summary for 2006-02-16 to 2006-02-28
Message-ID: <d11dcfba0604101703v551637f0vac026711004c373e@mail.gmail.com>

Ok, here's the summary for the second half of February. Again, comments and corrections are greatly appreciated! (And thanks to those who already gave me some for the last summary.)

=============
Announcements
=============

-----------------------
Python release schedule
-----------------------

The Python 2.5 release schedule is `PEP 356`_. The first releases are planned for the end of March/beginning of April. Check the PEP for the full plan of features.

.. _PEP 356: http://www.python.org/dev/peps/pep-0356/

Contributing threads:

- `2.5 PEP <http://mail.python.org/pipermail/python-dev/2006-February/061110.html>`__
- `2.5 release schedule <http://mail.python.org/pipermail/python-dev/2006-February/061249.html>`__
- `2.4.3 for end of March? <http://mail.python.org/pipermail/python-dev/2006-February/061901.html>`__

[SJB]

---------------------
Buildbot improvements
---------------------

Thanks to Benji York and Walter Dörwald, the `buildbot results page`_ now has a new CSS stylesheet that should make it a little easier to read. (And thanks to Josiah Carlson, we should now have a Windows buildbot slave.)

.. _buildbot results page: http://www.python.org/dev/buildbot/

Contributing threads:

- `buildbot is all green <http://mail.python.org/pipermail/python-dev/2006-February/061399.html>`__
- `buildbot vs. Windows <http://mail.python.org/pipermail/python-dev/2006-February/061554.html>`__

[SJB]

-------------------------------
Deprecation of multifile module
-------------------------------

The multifile module, which has been supplanted by the email module since Python 2.2, is finally being deprecated.
Though the module will not be removed in Python 2.5, its documentation now clearly indicates the deprecation.

Contributing thread:

- `Deprecate \`\`multifile\`\`? <http://mail.python.org/pipermail/python-dev/2006-February/061211.html>`__

[SJB]

------------------------------
Win64 AMD64 binaries available
------------------------------

Martin v. Löwis has made `AMD64 binaries`_ available for the current trunk's Python. If you're using an AMD64 machine (a.k.a. EM64T or x64), give 'em a whirl and see how they work.

.. _amd64 binaries: http://www.dcl.hpi.uni-potsdam.de/home/loewis/

Contributing thread:

- `Win64 AMD64 (aka x64) binaries available64 <http://mail.python.org/pipermail/python-dev/2006-February/061533.html>`__

[SJB]

---------------------------------------------------
Javascript to adopt Python iterators and generators
---------------------------------------------------

On a slightly off-topic note, Brendan Eich has blogged_ that the next version of Javascript will borrow iterators, generators and list comprehensions from Python. Nice to see that the Python plague is even spreading to other programming languages now. ;)

.. _blogged: http://weblogs.mozillazine.org/roadmap/archives/2006/02/

Contributing thread:

- `javascript "standing on Python's shoulders" as it moves forward. <http://mail.python.org/pipermail/python-dev/2006-February/061472.html>`__

[SJB]

=========
Summaries
=========

---------------------------
A dict with a default value
---------------------------

Guido suggested a defaultdict type which would act like a dict, but produce a default value when __getitem__ was called and no key existed. The intent was to simplify code examples like::

    # a dict of lists
    for x in y:
        d.setdefault(key, []).append(value)

    # a dict of counts
    for x in y:
        d[key] = d.get(key, 0) + 1

where the user clearly wants to associate a single default with the dict, but has no simple way to spell this. People quickly agreed that the default should be specified as a function so that using ``list`` as a default could create a dict of lists, and using ``int`` as a default could create a dict of counts. Then the real thread began. Guido proposed adding an ``on_missing`` method to the dict API, which would be called whenever ``__getitem__`` found that the requested key was not present in the dict. The ``on_missing`` method would look for a ``default_factory`` attribute, and try to call it if it was set, or raise a KeyError if it was not. This would allow e.g. ``dd.default_factory = list`` to make a dict object produce empty lists as default values, and ``del dd.default_factory`` to revert the dict object to the standard behavior. However, a number of opponents worried that confusion would arise when basic dict promises (like that ``x in d`` implies that ``x in d.keys()`` and ``d[x]`` doesn't raise a KeyError) could be conditionally overridden by the existence of a ``default_factory`` attribute. Others worried about complicating the dict API with yet another method, especially one that was never meant to be called directly (only overridden in subclasses). Eventually, Guido was convinced that instead of modifying the builtin dict type, a new collections.defaultdict should be introduced. Guido then defended keeping ``on_missing`` as a method of the dict type, noting that without ``on_missing`` any subclasses (e.g.
``collections.defaultdict``) that wanted to override the behavior for missing keys would have to override ``__getitem__`` and pay the penalty on every call instead of just the ones where the key wasn't present. In the patch committed to the Python trunk, ``on_missing`` was renamed to ``__missing__`` and though no ``__missing__`` method is defined for the dict type, if a subclass defines it, it will be called instead of raising the usual KeyError.

Contributing threads:

- `Proposal: defaultdict <http://mail.python.org/pipermail/python-dev/2006-February/061169.html>`__
- `Counter proposal: multidict (was: Proposal: defaultdict) <http://mail.python.org/pipermail/python-dev/2006-February/061264.html>`__
- `Counter proposal: multidict <http://mail.python.org/pipermail/python-dev/2006-February/061276.html>`__
- `defaultdict proposal round three <http://mail.python.org/pipermail/python-dev/2006-February/061485.html>`__
- `defaultdict and on_missing() <http://mail.python.org/pipermail/python-dev/2006-February/061702.html>`__
- `getdefault(), the real replacement for setdefault() <http://mail.python.org/pipermail/python-dev/2006-February/061748.html>`__

[SJB]

-----------------------------------------
Encode and decode interface in Python 3.0
-----------------------------------------

Jason Orendorff suggested that ``bytes.encode()`` and ``text.decode()`` (where text is the name of Python 3.0's str/unicode) should be removed in Python 3.0. Guido agreed, suggesting that Python 3.0 should have one of the following APIs for encoding and decoding:

- bytes.decode(enc) -> text
  text.encode(enc) -> bytes

- text(bytes, enc) -> text
  bytes(text, enc) -> bytes

There was a lot of discussion about how hard it was for beginners to figure out the current ``.encode()`` and ``.decode()`` methods, and Martin v. Löwis suggested that the behavior::

    py> "Martin v. Löwis".encode("utf-8")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xf6 in position 11: ordinal not in range(128)

would be better if replaced by Guido's suggested behavior::

    py> "Martin v. Löwis".encode("utf-8")
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    AttributeError: 'str' object has no attribute 'encode'

since the user would immediately know that they had made a mistake by trying to encode a string. However, some people felt that this problem could be solved by simply changing the UnicodeDecodeError to something more informative like ``ValueError: utf8 can only encode unicode objects``. M.-A. Lemburg felt strongly that text and bytes objects should keep both ``.encode()`` and ``.decode()`` methods as simple interfaces to the registered codecs. Since the codecs system handles general encodings (not just text<->bytes encodings) he felt that ``.encode()`` and ``.decode()`` should be available on both bytes and text objects and should be able to return whatever type the encoding deems appropriate. Guido repeated one of his design guidelines: the return value of a function should not depend on the *value* of the arguments. Thus he would prefer that ``bytes.decode()`` only return text and ``text.encode()`` only return bytes, regardless of the encodings passed in. (He didn't seem to be commenting on the architecture of the codecs module however, just the architecture of the bytes and text types.)

Contributing threads:

- `bytes.from_hex() [Was: PEP 332 revival in coordination with pep 349?]
<http://mail.python.org/pipermail/python-dev/2006-February/061037.html>`__ - `bytes.from_hex() [Was: PEP 332 revival in coordination with pep 349?] <http://mail.python.org/pipermail/python-dev/2006-February/061104.html>`__ - `str.translate vs unicode.translate (was: Re: str object going in Py3K) <http://mail.python.org/pipermail/python-dev/2006-February/061179.html>`__ - `bytes.from_hex() <http://mail.python.org/pipermail/python-dev/2006-February/061190.html>`__ - `str.translate vs unicode.translate <http://mail.python.org/pipermail/python-dev/2006-February/061222.html>`__ [SJB] ----------------- Writable closures ----------------- Almann T. Goo was considering writing a PEP to allow write access to names in nested scopes. Currently, names in nested scopes can only be read, not written, so the following code fails with an UnboundLocalError:: def getinc(start=0): def incrementer(inc=1): start += inc return start return incrementer Almann suggested introducing a new declaration, along the lines of ``global``, to indicate that assignments to a name should be interpreted as assignments to the name in the nearest enclosing scope. Initially, he proposed the term ``use`` for this declaration, but most of the thread participants seemed to prefer ``outer``, allowing the function above to be written as:: def getinc(start=0): def incrementer(inc=1): outer start start += inc return start return incrementer A variety of syntactic variants achieving similar results were proposed, including a way to name a function's local namespace:: def getinc(start=0): namespace xxx def incrementer(inc=1): xxx.start += inc return xxx.start return incrementer a way to indicate when a single use of a name should refer to the outer scope, based on the syntax for `relative imports`_:: def getinc(start=0): def incrementer(inc=1): .start += inc # note the "." return .start # note the "." (this one could be optional) return incrementer and the previously suggested rebinding statement, from `PEP 227`_:: def getinc(start=0): def incrementer(inc=1): start +:= inc # note the ":=" instead of "=" return start return incrementer Much like the last time this issue was brought up, "the discussion fizzled out after having failed to reach a consensus on an obviously right way to go about it" (Greg Ewing's quite appropriate wording). No PEP was produced, and it didn't seem like one would soon be forthcoming. .. _PEP 227: http://www.python.org/peps/pep-0227.html .. _relative imports: http://www.python.org/peps/pep-0328.html Contributing threads: - `PEP for Better Control of Nested Lexical Scopes <http://mail.python.org/pipermail/python-dev/2006-February/061568.html>`__ - `Papal encyclical on the use of closures (Re: PEP for Better Control of Nested Lexical Scopes) <http://mail.python.org/pipermail/python-dev/2006-February/061596.html>`__ - `Using and binding relative names (was Re: PEP for Better Control of Nested Lexical Scopes) <http://mail.python.org/pipermail/python-dev/2006-February/061636.html>`__ [SJB] --------------------------- PEP 358: The "bytes" Object --------------------------- This week mostly wrapped up the bytes type discussion from the last fortnight, with the introduction of `PEP 358`_: The "bytes" Object. 
The PEP proposes a ``bytes`` type which:

* is a sequence of range(0, 256) int objects
* can be constructed out of lists of range(0, 256) ints
* can be constructed out of the characters of str objects
* can be constructed out of unicode objects using a specified encoding (or the system default encoding if none is specified)
* can be constructed out of a hex string using the classmethod ``bytes.fromhex``

The bytes constructor allows an encoding for unicode objects (instead of requiring a call to unicode.encode) so as not to require double copying (one for encoding and one for conversion to bytes). Some people took issue with the fact that the constructor allows an encoding for str objects, but ignores it, as this means code like ``bytes(s, 'utf-16be')`` will do a different thing for str and unicode. Ignoring the encoding argument for str objects was apparently intended to ease the transition from str to bytes, though it was not clear exactly how.

.. _PEP 358: http://www.python.org/peps/pep-0358.html

Contributing threads:

- `bytes type needs a new champion <http://mail.python.org/pipermail/python-dev/2006-February/061080.html>`__
- `bytes type discussion <http://mail.python.org/pipermail/python-dev/2006-February/061082.html>`__
- `Pre-PEP: The "bytes" object <http://mail.python.org/pipermail/python-dev/2006-February/061100.html>`__
- `s/bytes/octet/ [Was:Re: bytes.from_hex() [Was: PEP 332 revival in coordination with pep 349?]] <http://mail.python.org/pipermail/python-dev/2006-February/061482.html>`__
- `PEP 358 (bytes type) comments <http://mail.python.org/pipermail/python-dev/2006-February/061728.html>`__

[SJB]

----------------------------------
Compiling Python with MS VC++ 2005
----------------------------------

M.-A. Lemburg suggested compiling Python with the new `MS VC++ 2005`_, especially since it's "free". There was some concern about the stability of VS2005, and Benji York pointed out that the express editions are only `free until November 6th`_. Fredrik Lundh pointed out that it would be substantially more work for all the developers who provide ready-made Windows binaries for multiple Python releases. In the end, they decided to stick with the current compiler at least for one more release.

.. _MS VC++ 2005: http://msdn.microsoft.com/vstudio/express/default.aspx
.. _free until November 6th: http://msdn.microsoft.com/vstudio/express/support/faq/default.aspx#pricing

Contributing thread:

- `Switch to MS VC++ 2005 ?! <http://mail.python.org/pipermail/python-dev/2006-February/061870.html>`__

[SJB]

-----------------------
Alternate lambda syntax
-----------------------

Even though Guido already declared that Python 3.0 will keep the current lambda syntax, Talin decided to try out the new AST and give lambda a face-lift. With `Talin's patch`_, you can now write lambdas like::

    >>> a = (x*x given (x))
    >>> a(9)
    81
    >>> a = (x*y given (x=3,y=4))
    >>> a(9, 10)
    90
    >>> a(9)
    36
    >>> a()
    12

The patch was remarkably simple, and people were suitably impressed by the flexibility of the new AST. Of course, the patch was rejected since Guido is now happy with the current lambda situation.

.. _Talin's patch: http://bugs.python.org/1434008

Contributing thread:

- `Adventures with ASTs - Inline Lambda <http://mail.python.org/pipermail/python-dev/2006-February/061124.html>`__

[SJB]

---------------
Stateful codecs
---------------

Walter Dörwald was looking for ways to cleanly support stateful codecs. M.-A.
Lemburg suggested extending the codec registry to maintain slots for the stateful encoders and decoders (and allowing six-tuples to be passed in) and adding the functions ``codecs.getencoderobject()`` and ``codecs.getdecoderobject()``. Walter Dörwald suggested that ``codecs.lookup()`` should return objects with the following attributes:

(1) Name
(2) Encoder function
(3) Decoder function
(4) Stateful encoder factory
(5) Stateful decoder factory
(6) Stream writer factory
(7) Stream reader factory

For the sake of backwards compatibility, these objects would subclass tuple so that they look like the old four-tuples returned by ``codecs.lookup()``. `Walter's patch`_ provides an implementation of some of these suggestions.

.. _Walter's patch: http://bugs.python.org/1436130

Contributing thread:

- `Stateful codecs [Was: str object going in Py3K] <http://mail.python.org/pipermail/python-dev/2006-February/061230.html>`__

[SJB]

---------------------------------------
operator.is*Type and user-defined types
---------------------------------------

Michael Foord pointed out that for types written in Python, ``operator.isMappingType`` and ``operator.isSequenceType`` are essentially identical -- they both return True if ``__getitem__`` is defined. Raymond Hettinger and Greg Ewing explained that for types written in C, these functions can give more detailed information because at the C level, CPython differentiates between the ``__getitem__`` of the sequence protocol and the ``__getitem__`` of the mapping protocol.

Contributing thread:

- `operator.is*Type <http://mail.python.org/pipermail/python-dev/2006-February/061698.html>`__

[SJB]

--------------------------
Python-level AST interface
--------------------------

Brett Cannon started a brief thread to discuss where to go next with the Python AST branch. Though some of the discussion moved offline at PyCon, the major decisions were reported by Martin v. Löwis:

* The ast-objects branch (which used reference-counting instead of arena allocation) was dropped because it seemed less maintainable and people had agreed that exposing the C AST objects to Python was a bad idea anyway

* Python code would have access to a "shadow tree" of the actual AST tree, accessible by calling ``compile()`` with the flag PyCF_ONLY_AST (0x400).

As a result, Python 2.5 now has a Python-level interface to AST objects::

    >>> compile('"spam" if x else 42', '<string>', 'eval', 0x400)
    <_ast.Expression object at 0x00BA0F50>

Contributing threads:

- `C AST to Python discussion <http://mail.python.org/pipermail/python-dev/2006-February/060994.html>`__
- `C AST to Python discussion <http://mail.python.org/pipermail/python-dev/2006-February/061109.html>`__
- `[Python-projects] AST in Python 2.5 <http://mail.python.org/pipermail/python-dev/2006-February/061145.html>`__
- `Exposing the abstract syntax <http://mail.python.org/pipermail/python-dev/2006-February/061850.html>`__
- `quick status report <http://mail.python.org/pipermail/python-dev/2006-February/061892.html>`__

[SJB]

-------------------------------------------
Allowing property to be used as a decorator
-------------------------------------------

Georg Brandl suggested in passing that it would be nice if ``property()`` could be used as a decorator. Ian Bicking pointed out that you can already use ``property()`` this way as long as you only want a read-only property. However, the resulting property has no docstring, so Alex Martelli suggested that property use the __doc__ of its fget if no docstring was provided.
Guido approved it, and `Georg Brandl provided a patch`_. Thus in Python 2.5, you'll be able to write read-only properties like::

    @property
    def x(self):
        """The x property"""
        return self._x + 42

.. _Georg Brandl provided a patch: http://bugs.python.org/1434038

Contributing threads:

- `The decorator(s) module <http://mail.python.org/pipermail/python-dev/2006-February/060759.html>`__
- `The decorator(s) module <http://mail.python.org/pipermail/python-dev/2006-February/061227.html>`__

[SJB]

-----------------------------------------------
Turning on unicode string literals for a module
-----------------------------------------------

Neil Schemenauer asked if it would be possible to have a ``from __future__ import unicode_strings`` statement which would turn all string literals into unicode literals for that module (without requiring the usual ``u`` prefix). Currently, you can turn on this kind of behavior for all modules using the undocumented -U command-line switch, but there's no way of enabling it on a per-module basis. There didn't seem to be enough momentum in the thread to implement such a thing however.

Contributing thread:

- `from __future__ import unicode_strings? <http://mail.python.org/pipermail/python-dev/2006-February/061088.html>`__

[SJB]

-------------------------------------------
Allowing cProfile to print to other streams
-------------------------------------------

Skip Montanaro pointed out that the new cProfile module prints stuff to stdout. He suggested rewriting the necessary bits to add a stream= keyword argument where necessary and using stream.write(...) instead of the print statements. No patch was available at the time of this summary.

Contributing thread:

- `cProfile prints to stdout? <http://mail.python.org/pipermail/python-dev/2006-February/061815.html>`__

[SJB]

--------------------------------
PEP 343 with-statement semantics
--------------------------------

Mike Bland provided an initial implementation of `PEP 343`_'s with-statement. In writing some unit-tests for it, Guido discovered that the implementation would not allow generators like::

    @contextmanager
    def foo():
        try:
            yield
        except Exception:
            pass

    with foo():
        1/0

to be equivalent to the corresponding in-line code::

    try:
        1/0
    except Exception:
        pass

because the PEP at the time did not allow context objects to suppress exceptions. Guido modified the patch and the PEP to require __exit__ to reraise the exception if it didn't want it suppressed.

.. _PEP 343: http://www.python.org/peps/pep-0343.html

Contributing threads:

- `PEP 343 "with" statement patch <http://mail.python.org/pipermail/python-dev/2006-February/061637.html>`__
- `with-statement heads-up <http://mail.python.org/pipermail/python-dev/2006-February/061903.html>`__

[SJB]

------------------------------------
Dropping Win9x support in Python 2.6
------------------------------------

Neal Norwitz suggested that Python 2.6 no longer try to support Win9x and WinME and updated `PEP 11`_ accordingly. There was a little rumbling about dropping the support, but no one stepped forward to volunteer to maintain the patches, and Guido suggested that anyone using a 6+ year old OS should be fine using an older Python too.

..
_PEP 11: http://www.python.org/dev/peps/pep-0011/ Contributing thread: - `Dropping support for Win9x in 2.6 <http://mail.python.org/pipermail/python-dev/2006-February/061791.html>`__ [SJB] ---------------------------- Removing non-Unicode support ---------------------------- Neal Norwitz suggested that the --disable-unicode switch might be a candidate for removal in Python 2.6. A few people were mildly concerned that the inability to remove Unicode support might make it harder to put Python on small hand-held devices. However, many (though not all) hand-helds already support Unicode, and currently a number of tests already fail if you use the --disable-unicode switch, so those who need this switch have not been actively maintaining it. Stripping out the numerous Py_USING_UNICODE declarations would substantially simplify some of the Python source. No final decision had been made at the time of this summary. Contributing thread: - `Removing Non-Unicode Support? <http://mail.python.org/pipermail/python-dev/2006-February/061464.html>`__ [SJB] ------------------------------------ Translating the Python documentation ------------------------------------ Facundo Batista had proposed translating the Library Reference and asked about how to get notifications when the documentation was updated (so that the translations could also be updated). Georg Brandl suggested a post-commit hook in SVN, though this would only give notifications at the module level. Fredrik Lundh suggested something based on his `more dynamic library reference platform`_ so that the notifications could indicate particular methods and functions instead. .. _more dynamic library reference platform: http://effbot.org/zone/pyref.htm Contributing threads: - `Translating docs <http://mail.python.org/pipermail/python-dev/2006-February/061823.html>`__ - `Fwd: Translating docs <http://mail.python.org/pipermail/python-dev/2006-February/061834.html>`__ [SJB] --------------- PEP 338 updates --------------- At Guido's suggestion, Nick Coghlan pared down `PEP 338`_ to just the bare bones necessary to properly implement the -m switch. That means the runpy module will contain only a single function, run_module, which will import the named module using the standard import mechanism, and then execute the code in that module. .. _PEP 338: http://www.python.org/peps/pep-0338.html Contributing thread: - `PEP 338 issue finalisation (was Re: 2.5 PEP) <http://mail.python.org/pipermail/python-dev/2006-February/061131.html>`__ [SJB] ----------------- Bugfix procedures ----------------- Just a reminder of the procedure for applying bug patches in Python (thanks to a brief thread started by Arkadiusz Miskiewicz). Anyone can submit a patch, but it will not be committed until a committer reviews and commits the patch. Non-committers are encouraged to review and comment on patches, and a number of the committers have promised that anyone who reviews and comments on at least five patches can have any patch they like looked at. Contributing threads: - `how bugfixes are handled? <http://mail.python.org/pipermail/python-dev/2006-February/061067.html>`__ - `how bugfixes are handled? <http://mail.python.org/pipermail/python-dev/2006-February/061120.html>`__ [SJB] -------------------------------- Removing --with-wctype-functions -------------------------------- M.-A. Lemburg suggested removing support for --with-wctype-functions as it makes Unicode support work in non-standard ways. 
Though he announced the plan in December 2004, ``PEP 11`` wasn't updated, so removal will be delayed until Python 2.6. Contributing thread: - `[Python-checkins] r42396 - peps/trunk/pep-0011.txt <http://mail.python.org/pipermail/python-dev/2006-February/061159.html>`__ [SJB] --------------------------------- Making ASCII the default encoding --------------------------------- Neal Norwitz asked if we should finally make ASCII the default encoding as `PEP 263`_ had promised in Python 2.3. He received only positive responses on this, and so in Python 2.5, any file missing a ``# -*- coding: ... -*-`` declaration and using non-ASCII characters will generate an error. .. _PEP 263: http://www.python.org/peps/pep-0263.html Contributing thread: - `Making ascii the default encoding <http://mail.python.org/pipermail/python-dev/2006-February/061884.html>`__ [SJB] ------------------------------------------- PEP 308: Conditional Expressions checked in ------------------------------------------- Thomas Wouters checked in a patch for `PEP 308`_, so Python 2.5 now has the long-awaited conditional expressions! .. _PEP 308: http://www.python.org/dev/peps/pep-0308/ Contributing thread: - `PEP 308 <http://mail.python.org/pipermail/python-dev/2006-February/061855.html>`__ [SJB] ================== Previous Summaries ================== - `http://www.python.org/dev/doc/devel still available <http://mail.python.org/pipermail/python-dev/2006-February/061078.html>`__ - `str object going in Py3K <http://mail.python.org/pipermail/python-dev/2006-February/061081.html>`__ - `PEP 332 revival in coordination with pep 349? [ Was:Re: release plan for 2.5 ?] <http://mail.python.org/pipermail/python-dev/2006-February/061084.html>`__ - `nice() <http://mail.python.org/pipermail/python-dev/2006-February/061086.html>`__ - `bdist_* to stdlib? <http://mail.python.org/pipermail/python-dev/2006-February/061105.html>`__ - `Please comment on PEP 357 -- adding nb_index slot to PyNumberMethods <http://mail.python.org/pipermail/python-dev/2006-February/061165.html>`__ =============== Skipped Threads =============== - `ssize_t branch merged <http://mail.python.org/pipermail/python-dev/2006-February/061083.html>`__ - `Off-topic: www.python.org <http://mail.python.org/pipermail/python-dev/2006-February/061090.html>`__ - `Weekly Python Patch/Bug Summary <http://mail.python.org/pipermail/python-dev/2006-February/061106.html>`__ - `2.5 - I'm ok to do release management <http://mail.python.org/pipermail/python-dev/2006-February/061117.html>`__ - `Rename str/unicode to text [Was: Re: str object going in Py3K] <http://mail.python.org/pipermail/python-dev/2006-February/061134.html>`__ - `Does eval() leak? 
<http://mail.python.org/pipermail/python-dev/2006-February/061149.html>`__ - `Rename str/unicode to text [Was: Re: str object goingin Py3K] <http://mail.python.org/pipermail/python-dev/2006-February/061153.html>`__ - `Test failures in test_timeout <http://mail.python.org/pipermail/python-dev/2006-February/061155.html>`__ - `Rename str/unicode to text <http://mail.python.org/pipermail/python-dev/2006-February/061205.html>`__ - `Copying zlib compression objects <http://mail.python.org/pipermail/python-dev/2006-February/061247.html>`__ - `Serial function call composition syntax foo(x, y) -> bar() -> baz(z) <http://mail.python.org/pipermail/python-dev/2006-February/061282.html>`__ - `A codecs nit <http://mail.python.org/pipermail/python-dev/2006-February/061365.html>`__ - `Stackless Python sprint at PyCon 2006 <http://mail.python.org/pipermail/python-dev/2006-February/061367.html>`__ - `[Python-checkins] r42490 - in python/branches/release24-maint: Lib/fileinput.py Lib/test/test_fileinput.py Misc/NEWS <http://mail.python.org/pipermail/python-dev/2006-February/061421.html>`__ - `Enhancements to the fileinput module <http://mail.python.org/pipermail/python-dev/2006-February/061428.html>`__ - `test_fileinput failing on Windows <http://mail.python.org/pipermail/python-dev/2006-February/061446.html>`__ - `New Module: CommandLoop <http://mail.python.org/pipermail/python-dev/2006-February/061450.html>`__ - `(-1)**(1/2)==1? <http://mail.python.org/pipermail/python-dev/2006-February/061487.html>`__ - `documenting things [Was: Re: Proposal: defaultdict] <http://mail.python.org/pipermail/python-dev/2006-February/061499.html>`__ - `Simple CPython stack overflow. <http://mail.python.org/pipermail/python-dev/2006-February/061506.html>`__ - `problem with genexp <http://mail.python.org/pipermail/python-dev/2006-February/061544.html>`__ - `readline compilarion fails on OSX <http://mail.python.org/pipermail/python-dev/2006-February/061561.html>`__ - `Memory Error the right error for coding cookie promise violation? <http://mail.python.org/pipermail/python-dev/2006-February/061576.html>`__ - `Two patches <http://mail.python.org/pipermail/python-dev/2006-February/061642.html>`__ - `Unifying trace and profile <http://mail.python.org/pipermail/python-dev/2006-February/061669.html>`__ - `Fixing copy.py to allow copying functions <http://mail.python.org/pipermail/python-dev/2006-February/061673.html>`__ - `Path PEP: some comments (equality) <http://mail.python.org/pipermail/python-dev/2006-February/061677.html>`__ - `calendar.timegm <http://mail.python.org/pipermail/python-dev/2006-February/061678.html>`__ - `release plan for 2.5 ? <http://mail.python.org/pipermail/python-dev/2006-February/061731.html>`__ - `[ python-Feature Requests-1436243 ] Extend pre-allocated integers to cover [0, 255] <http://mail.python.org/pipermail/python-dev/2006-February/061734.html>`__ - `buildbot, and test failures <http://mail.python.org/pipermail/python-dev/2006-February/061737.html>`__ - `OT: T-Shirts <http://mail.python.org/pipermail/python-dev/2006-February/061778.html>`__ - `PEP 328 <http://mail.python.org/pipermail/python-dev/2006-February/061813.html>`__ - `Current trunk test failures <http://mail.python.org/pipermail/python-dev/2006-February/061861.html>`__ - `PEP 332 revival in coordination with pep 349? [Was:Re: release plan for 2.5 ?] 
<http://mail.python.org/pipermail/python-dev/2006-February/061866.html>`__ - `str.count is slow <http://mail.python.org/pipermail/python-dev/2006-February/061885.html>`__ - `Long-time shy failure in test_socket_ssl <http://mail.python.org/pipermail/python-dev/2006-February/061893.html>`__ From trentm at ActiveState.com Tue Apr 11 02:17:53 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 10 Apr 2006 17:17:53 -0700 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> Message-ID: <20060411001753.GD4092@activestate.com> [Tim Peters wrote] > Trent (and anyone else who wants to play along), what happens if you > do this by hand in a current trunk or 2.4 build?: > > import socket > s = socket.socket() > s.settimeout(30.0) > s.connect(("gmail.org", 995)) > > On my box (when gmail.org:995 responds at all), the connect succeeds > in approximately 0.03 seconds, giving 29.97 seconds to spare ;-) C:\trentm\src\python\python\PCbuild>python_d Python 2.5a1 (trunk, Apr 10 2006, 14:48:00) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket [25133 refs] >>> s = socket.socket() [25145 refs] >>> s.settimeout(30.0) [25145 refs] >>> s.connect(("gmail.org", 995)) [25145 refs] >>> Sorry that I took so long to run this. It is a little unfortunate that with the last build step being "clean", I couldn't just cd into the build directory and try to run this. Seems like that was a good thing that I did take so long because it passed in the most recent build :) http://www.python.org/dev/buildbot/trunk/x86%20W2k%20trunk/builds/371/step-test/0 > Can you identify a reason for why it times out on the Win2K buildbot? > (beats me -- firewall issue, DNS sloth, ...?) It is possible that this was due to network changes that we are doing at work here. We are preparing for an office move in a couple of weeks (http://blogs.activestate.com/activestate/2006/02/free_as_in_will.html). My eyes glaze over whenever the systems dudes mention VPN, SSH, DNS, VMWare, sub-domains and DHCP in the same breath. Trent -- Trent Mick TrentM at ActiveState.com From nnorwitz at gmail.com Tue Apr 11 02:28:12 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 10 Apr 2006 17:28:12 -0700 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <20060411001753.GD4092@activestate.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> <20060411001753.GD4092@activestate.com> Message-ID: <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> On 4/10/06, Trent Mick <trentm at activestate.com> wrote: > > Sorry that I took so long to run this. It is a little unfortunate that > with the last build step being "clean", I couldn't just cd into the > build directory and try to run this. Maybe we should clean before we configure/compile? That would leave the last build in tact until the next run. 
n From brett at python.org Tue Apr 11 02:37:49 2006 From: brett at python.org (Brett Cannon) Date: Mon, 10 Apr 2006 17:37:49 -0700 Subject: [Python-Dev] I'm not getting email from SF when assignedabug/patch In-Reply-To: <443A89DE.6080501@v.loewis.de> References: <ca471dc20603271344p4f8c9ba0leb330201c32aff1a@mail.gmail.com> <ca471dc20603301001h3da20c04i2f8c06d2faa4bc3d@mail.gmail.com> <bbaeab100603301640p4b35e128k2bc9f70466cea65b@mail.gmail.com> <e0igo3$d13$1@sea.gmane.org> <bbaeab100603311148m1462ee47t5220c273cfac674@mail.gmail.com> <e0oqo6$c4n$1@sea.gmane.org> <e0pj6g$onr$1@sea.gmane.org> <e0rg7c$cjg$1@sea.gmane.org> <bbaeab100604031255q1ff0224btdfbd71684e04325@mail.gmail.com> <443A89DE.6080501@v.loewis.de> Message-ID: <bbaeab100604101737l7789641ai4d5e42dbff17cf98@mail.gmail.com> On 4/10/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Brett Cannon wrote: > > Can someone (Martin, Barry?) post this on python.org (I don't think > > this necessarily needs to be put into svn and I don't have any access > > but svn) so Fredrik can free up the space on his server? > > Did I ever respond to that? I put the file on > Nope. Thanks for doing this. I am in the middle of final projects for school so I probably won't get to writing the rough draft of the call for trackers until after the 23rd. -Brett > http://svn.python.org/snapshots/ > > Regards, > Martin > From brett at python.org Tue Apr 11 02:39:41 2006 From: brett at python.org (Brett Cannon) Date: Mon, 10 Apr 2006 17:39:41 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <17466.37928.191007.359419@montanaro.dyndns.org> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <443A8135.1090301@python.net> <17466.37928.191007.359419@montanaro.dyndns.org> Message-ID: <bbaeab100604101739l53ac53dbvf62eb14d0c12d978@mail.gmail.com> On 4/10/06, skip at pobox.com <skip at pobox.com> wrote: > > Brett Cannon wrote: > > OK, I am going to write the PEP I proposed a week or so ago, listing > > all modules and packages within the stdlib that are maintained > > externally so we have a central place to go for contact info or where > > to report bugs on issues. > > Based on the recent interchange regarding VMS, perhaps you should add port > maintainers to that PEP as well. I think that should be a separate PEP. But I do agree that it would be good info to have. -Brett From trentm at ActiveState.com Tue Apr 11 02:42:59 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 10 Apr 2006 17:42:59 -0700 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> <20060411001753.GD4092@activestate.com> <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> Message-ID: <20060411004259.GB25196@activestate.com> [Neal Norwitz wrote] > On 4/10/06, Trent Mick <trentm at activestate.com> wrote: > > > > Sorry that I took so long to run this. It is a little unfortunate that > > with the last build step being "clean", I couldn't just cd into the > > build directory and try to run this. > > Maybe we should clean before we configure/compile? That would leave > the last build in tact until the next run. Sure. Have to make sure that it doesn't choke on the bootstrapping problem: the first time there is no "Makefile" with which to call "make clean". 
Then again, maybe it is a good thing that one is discouraged from poking around in a buildbot build tree (potentially leaving turds that taint future builds). Thoughts? Trent -- Trent Mick TrentM at ActiveState.com From steven.bethard at gmail.com Tue Apr 11 02:50:51 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 10 Apr 2006 18:50:51 -0600 Subject: [Python-Dev] DRAFT: python-dev summary for 2006-03-01 to 2006-03-15 Message-ID: <d11dcfba0604101750v10a1627ev4e5a99702370ab82@mail.gmail.com> Ok, if I summarize any more of python-dev, my brain's going to explode. ;-) Here's the summaries for the first half of March. Let me know what to fix! ============= Announcements ============= ----------------------- Webstats for python.org ----------------------- Thomas Wouters set up webalizer on dinsdale.python.org and added webstats for all subsites of python.org: * http://www.python.org/webstats/ * http://beta.python.org/webstats/ * http://bugs.python.org/webstats/ * http://planet.python.org/webstats/ * http://docs.python.org/webstats/ * http://svn.python.org/webstats/ Check 'em out if you're interested! Contributing thread: - `Webstats for www.python.org et al. <http://mail.python.org/pipermail/python-dev/2006-March/061930.html>`__ [SJB] --------------------------- Python 2.5 release schedule --------------------------- The first releases scheduled for Python 2.5 are quickly approaching. Check `PEP 356`_ for details, but the first alpha is due on April 1st. .. _PEP 356: http://www.python.org/doc/peps/pep-0356/ Contributing thread: - `2.5 release schedule? <http://mail.python.org/pipermail/python-dev/2006-March/062185.html>`__ [SJB] ----------- Py3K branch ----------- Guido has begun work on Py3K, starting a new branch to rip out some stuff like string exceptions and classic classes. He's trying to get a "feel" for what Python 3.0 will look like, hopefully before his keynote in OSCON. Contributing thread: - `Py3k branch - please stay out :-) <http://mail.python.org/pipermail/python-dev/2006-March/062396.html>`__ [SJB] ------------------------------------------- Deprecated modules going away in Python 2.5 ------------------------------------------- A number of deprecated modules will be removed in Python 2.5, including: * reconvert.py * regex (regexmodule.c) * regex_syntax.py * regsub.py and a variety of things from lib-old. These modules have been deprecated for a while now, and will be pulled in the next Python release. Contributing thread: - `Deprecated modules going away in 2.5 <http://mail.python.org/pipermail/python-dev/2006-March/062405.html>`__ [SJB] ========= Summaries ========= ------------------------------- Maintaining ctypes in SVN trunk ------------------------------- Thomas Heller put ctypes into the Python SVN repository, and with the help of perky, Neal Norwitz and Thomas Wouters, updated it to take advantage of the new ssize_t feature. Thomas Heller has now moved the "official" ctypes development to the Python SVN. Contributing threads: - `ctypes is in SVN now. 
<http://mail.python.org/pipermail/python-dev/2006-March/062211.html>`__ - `Developing/patching ctypes (was: Re: integrating ctypes into python) <http://mail.python.org/pipermail/python-dev/2006-March/062243.html>`__ - `Developing/patching ctypes <http://mail.python.org/pipermail/python-dev/2006-March/062244.html>`__ [SJB] ----------------- Windows buildbots ----------------- Josiah Carlson had been working on getting a buildbot slave running on a Windows box, but eventually gave up due to crashes caused by VS.NET. Tim Peters fought his way through the setup with a XP box, posting `his lessons`_ to the wiki, and Trent Mick managed to follow a similar route and setup a Win2K buildbot slave. Thanks to all who suffered through the config -- Windows buildbot coverage looks pretty good now! .. _his lessons: http://wiki.python.org/moin/BuildbotOnWindows Contributing threads: - `Another Windows buildbot slave <http://mail.python.org/pipermail/python-dev/2006-March/062068.html>`__ - `Still looking for volunteer to run Windows buildbot <http://mail.python.org/pipermail/python-dev/2006-March/062267.html>`__ [SJB] ----------------------------------- Python 3.0: itr.next() or next(itr) ----------------------------------- The end of last fortnight's defaultdict thread turned to discussing the fate of the iterator protocol's .next() method in Python 3.0. Greg Ewing argued that renaming .next() to .__next__() and introducing a builtin function next() would be more consistent with the other magic methods and also more future-proof, since the next() function could be modified if the protocol method needed to change. Raymond Hettinger was very strongly against this proposal, suggesting that trading a Python-level attribute lookup for a Python-level global lookup plus a C-level slot lookup was not a good tradeoff. The discussion then spread out to other protocol method/function pairs -- e.g. len() and __len__() -- and Oleg Broytmann suggested that they could all be replaced with methods, thus saving a lookup and clearing out the builtin namespace. Neil Schemenauer and Michael Chermside argued against such a change, saying that the double-underscore pattern allows new special methods to be introduced without worrying about breaking user code, and that using functions for protocols forces developers to use the same names when the protocols are involved, while using methods could allow some developers to choose different names than others. Guido indicated that he'd like to do some usability studies to determine whether methods or functions were more intuitive for the various protocols. Contributing threads: - `defaultdict and on_missing() <http://mail.python.org/pipermail/python-dev/2006-March/061913.html>`__ - `iterator API in Py3.0 <http://mail.python.org/pipermail/python-dev/2006-March/061927.html>`__ - `iterator API in Py3. 
<http://mail.python.org/pipermail/python-dev/2006-March/061977.html>`__ - `.len() instead of __len__() (was: iterator API in Py3.0) <http://mail.python.org/pipermail/python-dev/2006-March/062072.html>`__ - `x.len() instead of len(x) in Py3.0 <http://mail.python.org/pipermail/python-dev/2006-March/062079.html>`__ - `.len() instead of __len__() (was: iterator API inPy3.0) <http://mail.python.org/pipermail/python-dev/2006-March/062085.html>`__ - `.len() instead of __len__() in Py3.0 <http://mail.python.org/pipermail/python-dev/2006-March/062086.html>`__ [SJB] --------------------------- Python 3.0: base64 encoding --------------------------- This fortnight continued discussion from the last as to whether the base64 encoding should produce unicode or bytes objects. The encoding is specified in `RFC 2045`_ as "designed to represent arbitrary sequences of octets" using "a 65-character subset of US-ASCII". Traditionally, base64 "encoding" goes from bytes to characters, and base64 "decoding" goes from characters to bytes. But this is the inverse of the usual unicode meanings, where "encoding" goes from characters to bytes, and where "decoding" goes from bytes to characters. Thus some people felt that the recent proposal to have only bytes.decode(), which would produce unicode, and unicode.encode(), which would produce bytes, would be a major problem for encodings like base64 which used the opposite terminology. A variety of proposals ensued, including putting .encode() and .decode() on both bytes and strings, having encode() and decode() builtins, and various ways of putting encoding and decoding into the unicode and bytes constructors or classmethods. No clear solution could be agreed upon at the time. .. _RFC 2045: http://www.ietf.org/rfc/rfc2045.txt Contributing thread: - `bytes.from_hex() <http://mail.python.org/pipermail/python-dev/2006-March/061914.html>`__ [SJB] ----------------------------- Coverity scans of Python code ----------------------------- Ben Chelf of Coverity presented scan.coverity.com, which provides the results of some static source code analysis looking for a variety of code defects in a variety of open source projects, including Python. Full access to the reports is limited to core developers, but Neal Norwitz explained a bit what had been made available. The types of problems reported in Python include unintialized variables, resource leak, negative return values, using a NULL pointer, dead code, use after free and some other similar conditions. The reports provide information about what condition is violated and where, and according to Neal have been high quality and accurate, though of course there were some false positives. Generally, developers seemed quite happy with the reports, and a number of bugs have subsequently been fixed. Contributing threads: - `Coverity Open Source Defect Scan of Python <http://mail.python.org/pipermail/python-dev/2006-March/062088.html>`__ - `About "Coverity Study Ranks LAMP Code Quality" <http://mail.python.org/pipermail/python-dev/2006-March/062322.html>`__ - `Coverity report <http://mail.python.org/pipermail/python-dev/2006-March/062434.html>`__ [SJB] ------------------------------- Speeding up lookups of builtins ------------------------------- Steven Elliott was looking into reducing the cost of looking up Python builtins. 
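The cost in question is easy to see with a micro-benchmark; the comparison below is an illustrative sketch only, and the exact numbers vary by machine and build::

    import timeit

    # len() inside the timed code is a global lookup that misses the
    # module namespace and then falls through to the builtins dict --
    # the chain this proposal wants to shortcut.
    builtin_chain = timeit.Timer("for i in xrange(1000): len(seq)",
                                 "seq = range(10)")

    # Pre-binding len to a local name skips that chain entirely.
    local_alias = timeit.Timer("for i in xrange(1000): _len(seq)",
                               "seq = range(10); _len = len")

    print "builtin lookup:", min(builtin_chain.repeat(3, 1000))
    print "local alias:   ", min(local_alias.repeat(3, 1000))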
Two PEPs (`PEP 280`_ and `PEP 329`_) had already been proposed for similar purposes, but Steven felt these were biting off too much at once as they tried to optimize all global attribute lookups instead of just those of builtins. His proposal would replace the global-builtin lookup chain with an array that indexed the builtins. A fast check for builtin shadowing would be performed before using a builtin; if no shadowing existed, the builtin would simply be extracted from the array, and if shadowing was present, the longer lookup sequence would be followed. Guido indicated that he would like to be able to assume that builtins are not shadowed in Python 3.0, but made no comment on the particular implementation strategy suggested. Steven Elliott promised a PEP, though it was not yet available at the time of this summary. .. _PEP 280: http://www.python.org/doc/peps/pep-0280/ .. _PEP 329: http://www.python.org/doc/peps/pep-0329/ Contributing thread: - `Making builtins more efficient <http://mail.python.org/pipermail/python-dev/2006-March/062200.html>`__ [SJB] ------------------------------------------------- Requiring parentheses for conditional expressions ------------------------------------------------- Upon seeing a recent checkin using conditional expressions, Jim Jewett suggested that parentheses should be required around all conditional expressions for the sake of readability. The usual syntax debate ensued, and in the end it looked like the most likely result was that `PEP 8`_ would be updated to suggest parentheses around the "test" part of the conditional expression if it contained any internal whitespace. .. _PEP 8: http://www.python.org/doc/peps/pep-0008/ Contributing threads: - `conditional expressions - add parens? <http://mail.python.org/pipermail/python-dev/2006-March/062089.html>`__ - `(no subject) <http://mail.python.org/pipermail/python-dev/2006-March/062101.html>`__ [SJB] ----------------------------- Exposing a global thread lock ----------------------------- Raymond Hettinger suggested exposing the global interpreter lock to allow code to temporarily suspend all thread switching. Guido was strongly against the idea as it was not portable to Jython or IronPython and it was likely to cause deadlocks. However, there was some support for it, and others indicated that the only way it could cause deadlocks is if locks were acquired within the sections where thread switching was disabled, and that even these could be avoided by having locks raise an exception if acquired in such a section. However, Michael Chermside explained that supporting such a thing in Jython and IronPython would really be impossible, and suggested that the functionality be made available in an extension module instead. Raymond Hettinger then suggested modifying sys.setcheckinterval() to allow thread switching to be stopped, but Martin v. L?wis explained that even disabling this "release the GIL from time to time" setting would not disable thread switching as, for example, the file_write call inside a PRINT_* opcode releases the GIL around fwrite regardless. Contributing thread: - `Threading idea -- exposing a global thread lock <http://mail.python.org/pipermail/python-dev/2006-March/062346.html>`__ [SJB] ----------------------------- Making quit and exit callable ----------------------------- Ian Bicking resucitated a previous proposal to make ``quit`` and ``exit`` callable objects with informative __repr__ messages like ``license`` and ``help`` already have. 
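The idea amounts to something like the following sketch (illustrative only, not the patch that was eventually applied)::

    class _Quitter(object):
        """Stand-in for an interactive quit/exit object."""
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            # Shown when someone types the bare name without calling it.
            return 'Use %s() or Ctrl-D (i.e. EOF) to exit' % self.name
        def __call__(self, code=None):
            raise SystemExit(code)

    quit = _Quitter('quit')
    exit = _Quitter('exit')

Typing the bare name at the prompt then prints a helpful reminder instead of the default ``repr``.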
Georg Brandl submitted a patch that makes ``quit`` and ``exit`` essentially synonymous with ``sys.exit`` but with better __repr__ messages. The patch was accepted and should appear in Python 2.5. Contributing thread: - `quit() on the prompt <http://mail.python.org/pipermail/python-dev/2006-March/062156.html>`__ [SJB] --------------------- Python 3.0: Using C++ --------------------- Fredrik Lundh suggested that in Python 3.0 it might be useful to switch to C++ instead of C for Python's implementation. This would allow some additional type checking, some more reliable reference counting with local variable destructors and smart pointers, and "native" exception handling. However, it would likely make linking and writing extension modules more difficult as C++ does not interoperate with others as happily as C does. Stephen J. Turnbull suggested that it might also be worth considering following the route of XEmacs -- all code must compile without warnings under both C and C++. No final decision was made, but for the moment it looks like Python will stick with the status quo. Contributing thread: - `C++ for CPython 3? (Re: str.count is slow) <http://mail.python.org/pipermail/python-dev/2006-March/061920.html>`__ [SJB] ---------------------- A @decorator decorator ---------------------- Georg Brandl presented a `patch providing a ``decorator`` decorator`_ that would transfer a function's __name__, __doc__ and __dict__ attributes to the wrapped function. Initially, he had placed it in a new ``decorator`` module, but a number of folks suggested that this module and the ``functional`` module -- which currently only contains partial() -- should be merged into a ``functools`` module. At the time of this summary, the patch had not been applied. .. _patch providing a ``decorator`` decorator: http://bugs.python.org/1448297 Contributing thread: - `decorator module patch <http://mail.python.org/pipermail/python-dev/2006-March/062290.html>`__ [SJB] ----------------------- Expanding the use of as ----------------------- Georg Brandl proposed that since ``as`` is finally becoming a keyword, other statements might allow ``as`` to be used for binding a name like the ``import`` and ``with`` statements do now. Generally people thought that using ``as`` to name the ``while`` or ``if`` conditions was not really that useful, especially since the parallel to ``with`` statements was pretty weak -- ``with`` statements bind the result of the context manager's __enter__() call, not the context manager itself. Contributing thread: - `"as" mania <http://mail.python.org/pipermail/python-dev/2006-March/062128.html>`__ [SJB] ------------------------------ Relative imports in the stdlib ------------------------------ Guido made a checkin using the new `relative import feature`_ in a few places. Because using relative imports can cause custom __import__'s to break (because they don't take the new optional fifth argument), Guido backed out the changes, and updated `PEP 8`_ to indicate that absolute imports were to be preferred over relative imports whenever possible. .. 
_relative import feature: http://www.python.org/doc/peps/pep-0328/ Contributing threads: - `Using relative imports in std lib packages ([Python-checkins] r43033 - in python/trunk/Lib: distutils/sysconfig.py encodings/__init__.py) <http://mail.python.org/pipermail/python-dev/2006-March/062421.html>`__ - `[Python-checkins] r43033 - in python/trunk/Lib: distutils/sysconfig.py encodings/__init__.py <http://mail.python.org/pipermail/python-dev/2006-March/062427.html>`__ [SJB] ------------------------------------ Making staticmethod objects callable ------------------------------------ Nicolas Fleury suggested that staticmethod objects should be made callable so that code like:: class A(object): @staticmethod def foo(): pass bar = foo() would work instead of compaining that staticmethod objects are not callable. Generally, people seemed to feel that most uses of staticmethods were better expressed as module-level functions anyway, and so catering to odd uses of staticmethods was not something Python needed to do. Contributing thread: - `Making staticmethod objects callable? <http://mail.python.org/pipermail/python-dev/2006-March/061948.html>`__ [SJB] ---------------------------------- Adding a uuid module to the stdlib ---------------------------------- Fredrik Lundh suggested `adding Ka-Ping Yee's uuid module`_ to the standard library. Most people were agreeable to the idea, but with other uuid implementations around, there was some discussion about the details. Phillip J. Eby suggested something more along the lines of `PEAK's uuid module`_, but no final decisions were made. .. _adding Ka-Ping Yee's uuid module: http://bugs.python.org/1368955 .. _PEAK's uuid module: http://svn.eby-sarna.com/PEAK/src/peak/util/uuid.py?view=markup Contributing thread: - `how about adding ping's uuid module to the standard lib ? <http://mail.python.org/pipermail/python-dev/2006-March/062119.html>`__ [SJB] ------------------------------- A dict that takes key= callable ------------------------------- Neil Schemenauer suggested providing dict variants in the collections module that would use the ids of the objects instead of the objects themselves. Guido suggested that it would likely be more useful to design a dict variant that took a key= argument like list.sort() does, and apply that key function to all keys in the dict. That would make implementing Neil's id-dict almost trivial and support a variety of other use-cases, like case-insensitive dicts. Peoplke seemed quite supportive of the proposal, but no patch was available at the time of this summary. Contributing thread: - `collections.idset and collections.iddict? <http://mail.python.org/pipermail/python-dev/2006-March/062100.html>`__ [SJB] ---------------------------- Bug in __future__ processing ---------------------------- Martin Maly found `Guido's previously encountered bug`_ that Python 2.2 through 2.4 allow some assignment statements before the ``__future__`` import. Tim Peters correctly channeled Guido that this was a bug and would be fixed in Python 2.5. .. _Guido's previously encountered bug: http://mail.python.org/pipermail/python-dev/2006-January/060247.html Contributing thread: - `Bug in from __future__ processing? <http://mail.python.org/pipermail/python-dev/2006-March/062047.html>`__ [SJB] ----------------------- Cleaning up string code ----------------------- Chris Perkins noted that ``str.count()`` is substantially slower than ``unicode.count()``. 
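The difference is straightforward to reproduce with ``timeit`` (an illustrative sketch; the exact ratio depends on the build)::

    import timeit

    # The same search, run against a byte string and a unicode string.
    for label, setup in [('str:    ', "s = 'spam and eggs ' * 1000"),
                         ('unicode:', "s = u'spam and eggs ' * 1000")]:
        timer = timeit.Timer("s.count('and')", setup)
        print label, min(timer.repeat(3, 10000))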
Ben Cartwright and others indicated that the source for these functions showed clearly that the unicode version had been better optimized. Fredrik Lundh and Armin Rigo both mentioned cleaning up the string code to avoid some of the duplication and potentially to merge the str and unicode implementations together. At the time of this summary, it didn't seem that any progress towards this goal had yet been made. Contributing thread: - `str.count is slow <http://mail.python.org/pipermail/python-dev/2006-February/061885.html>`__ - `str.count is slow <http://mail.python.org/pipermail/python-dev/2006-March/061915.html>`__ [SJB] --------------------------------------------- Freeing Python-allocated memory to the system --------------------------------------------- Tim Peters spent his PyCon time working on `a patch by Evan Jones`_, originally `discussed in January`_. The patch enables Python to free memory back to the operating system so that code like:: x = [] for i in xrange(1000000): x.append([]) del x[:] does not continue to consume massive memory after the del statement. Tim gave python-devvers some time to review the patch for any speed or other issues, and then committed it. .. _a patch by Evan Jones: http://bugs.python.org/1123430 .. _discussed in January: http://mail.python.org/pipermail/python-dev/2005-January/051255.html Contributing thread: - `Arena-freeing obmalloc ready for testing <http://mail.python.org/pipermail/python-dev/2006-March/061991.html>`__ [SJB] ------------------------------------------ Modifying the context manager __exit__ API ------------------------------------------ After thinking things over, Guido decided that the context manager ``__exit__`` method should be required to return True if it wants to suppress an exception. This addressed the main concerns about the previous API, that if ``__exit__`` methods were required to reraise exceptions, a lot of ``__exit__`` methods might end up with easily-missed bugs. Contributing thread: - `__exit__ API? <http://mail.python.org/pipermail/python-dev/2006-March/062050.html>`__ [SJB] ------------------------------- Python 3.0: default comparisons ------------------------------- In Python 3.0, Guido plans to ditch the default ``< <= > >=`` comparisons currently provided, and only provide ``== !=`` where by default all objects compare as unequal. Contributing thread: - `Keep default comparisons - or add a second set? 
<http://mail.python.org/pipermail/python-dev/2006-March/062404.html>`__ [SJB] ================ Deferred Threads ================ - `[Python-checkins] r43041 - python/trunk/Modules/_ctypes/cfield.c <http://mail.python.org/pipermail/python-dev/2006-March/062416.html>`__ ================== Previous Summaries ================== - `Proposal: defaultdict <http://mail.python.org/pipermail/python-dev/2006-March/061945.html>`__ =============== Skipped Threads =============== - `New test failure on Windows <http://mail.python.org/pipermail/python-dev/2006-March/061928.html>`__ - `.py and .txt files missing svn:eol-style in trunk <http://mail.python.org/pipermail/python-dev/2006-March/061929.html>`__ - `Using and binding relative names (was Re: PEP forBetter Control of Nested Lexical Scopes) <http://mail.python.org/pipermail/python-dev/2006-March/061934.html>`__ - `Stateful codecs [Was: str object going in Py3K] <http://mail.python.org/pipermail/python-dev/2006-March/061946.html>`__ - `bytes thoughts <http://mail.python.org/pipermail/python-dev/2006-March/061966.html>`__ - `wiki as scratchpad <http://mail.python.org/pipermail/python-dev/2006-March/061967.html>`__ - `test_compiler failure <http://mail.python.org/pipermail/python-dev/2006-March/061970.html>`__ - `DRAFT: python-dev Summary for 2006-01-16 through 2005-01-31 <http://mail.python.org/pipermail/python-dev/2006-March/061986.html>`__ - `Weekly Python Patch/Bug Summary <http://mail.python.org/pipermail/python-dev/2006-March/061987.html>`__ - `When will regex really go away? <http://mail.python.org/pipermail/python-dev/2006-March/061988.html>`__ - `ref leak w/except hooks <http://mail.python.org/pipermail/python-dev/2006-March/061995.html>`__ - `Faster list comprehensions <http://mail.python.org/pipermail/python-dev/2006-March/062030.html>`__ - `PEP 357 <http://mail.python.org/pipermail/python-dev/2006-March/062032.html>`__ - `Lib/test/test_compiler.py fails <http://mail.python.org/pipermail/python-dev/2006-March/062035.html>`__ - `FrOSCon 2006 - Call for {Papers|Projects} <http://mail.python.org/pipermail/python-dev/2006-March/062057.html>`__ - `Outdated Python Info on www.unicode.org (fwd) <http://mail.python.org/pipermail/python-dev/2006-March/062058.html>`__ - `My buildbot host upgraded to OSX 10.4 <http://mail.python.org/pipermail/python-dev/2006-March/062059.html>`__ - `Slightly OT: Replying to posts <http://mail.python.org/pipermail/python-dev/2006-March/062074.html>`__ - `New Future Keywords <http://mail.python.org/pipermail/python-dev/2006-March/062080.html>`__ - `[Python-checkins] Python Regression Test Failures refleak (1) <http://mail.python.org/pipermail/python-dev/2006-March/062082.html>`__ - `[Python-checkins] Python humor <http://mail.python.org/pipermail/python-dev/2006-March/062090.html>`__ - `Two gcmodule patches <http://mail.python.org/pipermail/python-dev/2006-March/062105.html>`__ - `Scientific Survey: Working Conditions in Open Source Projects <http://mail.python.org/pipermail/python-dev/2006-March/062134.html>`__ - `_bsddb.c ownership <http://mail.python.org/pipermail/python-dev/2006-March/062137.html>`__ - `str(Exception) changed, is that intended? <http://mail.python.org/pipermail/python-dev/2006-March/062149.html>`__ - `Long-time shy failure in test_socket_ssl <http://mail.python.org/pipermail/python-dev/2006-March/062173.html>`__ - `mail.python.org disruption <http://mail.python.org/pipermail/python-dev/2006-March/062175.html>`__ - `Bug Day? 
<http://mail.python.org/pipermail/python-dev/2006-March/062191.html>`__ - `Generated code in test_ast.py <http://mail.python.org/pipermail/python-dev/2006-March/062197.html>`__ - `fixing log messages <http://mail.python.org/pipermail/python-dev/2006-March/062202.html>`__ - `[Python-checkins] r42929 - python/trunk/Tools/scripts/svneol.py <http://mail.python.org/pipermail/python-dev/2006-March/062224.html>`__ - `unicodedata.c no longer compiles on Windows <http://mail.python.org/pipermail/python-dev/2006-March/062241.html>`__ - `multidict API <http://mail.python.org/pipermail/python-dev/2006-March/062250.html>`__ - `Google ads on python.org? <http://mail.python.org/pipermail/python-dev/2006-March/062276.html>`__ - `libbzip2 version? <http://mail.python.org/pipermail/python-dev/2006-March/062287.html>`__ - `PythonCore\CurrentVersion <http://mail.python.org/pipermail/python-dev/2006-March/062297.html>`__ - `Strange behavior in Python 2.5a0 (trunk) --- possible error in AST? <http://mail.python.org/pipermail/python-dev/2006-March/062318.html>`__ - `Why are so many built-in types inheritable? <http://mail.python.org/pipermail/python-dev/2006-March/062334.html>`__ - `checkin r43015 <http://mail.python.org/pipermail/python-dev/2006-March/062342.html>`__ - `Topic suggestions from the PyCon feedback <http://mail.python.org/pipermail/python-dev/2006-March/062345.html>`__ - `Another threading idea <http://mail.python.org/pipermail/python-dev/2006-March/062383.html>`__ - `Octal literals <http://mail.python.org/pipermail/python-dev/2006-March/062400.html>`__ - `[Python-checkins] r43022 - in python/trunk: Modules/xxmodule.c Objects/object.c <http://mail.python.org/pipermail/python-dev/2006-March/062407.html>`__ - `[Python-checkins] Python Regression Test Failuresrefleak (1) <http://mail.python.org/pipermail/python-dev/2006-March/062409.html>`__ - `[Python-checkins] r43028 - python/trunk/Modules/_ctypes/cfield.c <http://mail.python.org/pipermail/python-dev/2006-March/062413.html>`__ - `PEP 338 implemented in SVN <http://mail.python.org/pipermail/python-dev/2006-March/062422.html>`__ From tim.peters at gmail.com Tue Apr 11 02:55:16 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 10 Apr 2006 20:55:16 -0400 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <20060411001753.GD4092@activestate.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> <20060411001753.GD4092@activestate.com> Message-ID: <1f7befae0604101755v706a3c13q61952caaac8b34d7@mail.gmail.com> [Trent] > C:\trentm\src\python\python\PCbuild>python_d > Python 2.5a1 (trunk, Apr 10 2006, 14:48:00) [MSC v.1310 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > >>> import socket > [25133 refs] > >>> s = socket.socket() > [25145 refs] > >>> s.settimeout(30.0) > [25145 refs] > >>> s.connect(("gmail.org", 995)) > [25145 refs] > >>> > > Sorry that I took so long to run this. It is a little unfortunate that > with the last build step being "clean", I couldn't just cd into the > build directory and try to run this. > > Seems like that was a good thing that I did take so long because it > passed in the most recent build :) > http://www.python.org/dev/buildbot/trunk/x86%20W2k%20trunk/builds/371/step-test/0 Excellent! Thanks to the buildbot's blamelist, we can definitely conclude that your Win2K box's problem was cured by Andrew checking in a change to whatsnew25.tex. Works for me :-) > ... 
> It is possible that this was due to network changes that we are doing at > work here. We are preparing for an office move in a couple of weeks > (http://blogs.activestate.com/activestate/2006/02/free_as_in_will.html). > My eyes glaze over whenever the systems dudes mention VPN, SSH, DNS, > VMWare, sub-domains and DHCP in the same breath. Ya, life will get a lot better if you leave the sysadmins behind ;-) Good luck with the move and the heady wine of corporate independence, BTW -- and be sure to visit the old folks on holidays. From tim.peters at gmail.com Tue Apr 11 03:02:41 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 10 Apr 2006 21:02:41 -0400 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> <20060411001753.GD4092@activestate.com> <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> Message-ID: <1f7befae0604101802w35e82323l9af72fe84e56250@mail.gmail.com> [Neal Norwitz] > Maybe we should clean before we configure/compile? That would leave > the last build in tact until the next run. It doesn't matter much to me -- I do all changes and checkins from a non-buildbot checkout. The one thing I like about the current scheme is that when I run my daily incremental backup, I don't have to wait on backing up mounds of stale .pyc and .pyd files left from the last buildbot run. In fact, that reminds me I added a "delete all the .pyc files" step to the Windows buildbot clean.bat precisely so I didn't have to burn time and space backing up 1600 stale files each day. So -0 on changing. From trentm at ActiveState.com Tue Apr 11 03:09:07 2006 From: trentm at ActiveState.com (Trent Mick) Date: Mon, 10 Apr 2006 18:09:07 -0700 Subject: [Python-Dev] Who understands _ssl.c on Windows? In-Reply-To: <1f7befae0604101802w35e82323l9af72fe84e56250@mail.gmail.com> References: <1f7befae0604072255q21007ecfmd45e0d4a3d43e8ef@mail.gmail.com> <1f7befae0604082149q38eec16eo4be7b3f8d406d293@mail.gmail.com> <20060411001753.GD4092@activestate.com> <ee2a432c0604101728y37bbdfd7gdb2215b9d91e2d87@mail.gmail.com> <1f7befae0604101802w35e82323l9af72fe84e56250@mail.gmail.com> Message-ID: <20060411010907.GA29878@activestate.com> [Tim Peters wrote] > In fact, that reminds me I added a "delete all the .pyc files" step to > the Windows buildbot clean.bat precisely so I didn't have to burn time > and space backing up 1600 stale files each day. So -0 on changing. Good enough for me. Let's not bother. Trent -- Trent Mick TrentM at ActiveState.com From greg.ewing at canterbury.ac.nz Tue Apr 11 03:16:55 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 11 Apr 2006 13:16:55 +1200 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <1f7befae0604101701r24ce6171pb83903cfd366debd@mail.gmail.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> <ca471dc20604101506y767b0c2cv1ecb0d12414d11b0@mail.gmail.com> <1f7befae0604101701r24ce6171pb83903cfd366debd@mail.gmail.com> Message-ID: <443B0387.7010809@canterbury.ac.nz> Tim Peters wrote: > The PyObject_ memory family is generally faster and more > memory-efficient for small allocations than the PyMem_ memory family. 
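For anyone who wants to sweep those stale files by hand, a rough helper along these lines would do it (hypothetical sketch, not part of the buildbot scripts):

    import os

    def sweep_stale(root, exts=('.pyc', '.pyo', '.pyd')):
        # Remove build products that the next compile/test run will
        # regenerate anyway.  Adjust exts to taste.
        removed = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1] in exts:
                    os.remove(os.path.join(dirpath, name))
                    removed += 1
        return removed

    if __name__ == '__main__':
        print sweep_stale('.'), 'stale files removed'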
> Lines of source code, and encoding strings, are usually small enough > to exploit that. The "ob" in obmalloc.c doesn't really have anything > to do with objects either. PyMem_SmallMalloc (etc) may have been > better names (although I doubt that ;-)). However, if they're not exclusively for objects, having "Object" in the name would seem to be highly confusing, perhaps dangerously so. (Person A writes PyObject_Alloc(some_chars), Person B writing the code to free it thinks "What??? That can't be right!" and uses PyMem_Free.) -- Greg From tim.peters at gmail.com Tue Apr 11 03:33:08 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 10 Apr 2006 21:33:08 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <443B0387.7010809@canterbury.ac.nz> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> <1f7befae0604101357y3b983714k97cfb3cd9e801d79@mail.gmail.com> <ca471dc20604101506y767b0c2cv1ecb0d12414d11b0@mail.gmail.com> <1f7befae0604101701r24ce6171pb83903cfd366debd@mail.gmail.com> <443B0387.7010809@canterbury.ac.nz> Message-ID: <1f7befae0604101833t5d46204emeb62d97629e08d22@mail.gmail.com> [Greg Ewing] > However, if they're not exclusively for objects, > having "Object" in the name would seem to be > highly confusing, perhaps dangerously so. (Person > A writes PyObject_Alloc(some_chars), Person B > writing the code to free it thinks "What??? > That can't be right!" and uses PyMem_Free.) Given that it's been this way since the PyObject_ memory family was introduced, and a real Person B hasn't posted to complain about being led into temptation yet, sorry, I don't take this argument seriously. The comments in objimpl.h tell the truth, and it's really quite simple: For allocating objects, use PyObject_{New, NewVar} instead whenever possible. The PyObject_{Malloc, Realloc, Free} family is exposed so that you can exploit Python's small-block allocator for non-object uses. If you must use these routines to allocate object memory, make sure the object gets initialized via PyObject_{Init, InitVar} after obtaining the raw memory. From nnorwitz at gmail.com Tue Apr 11 10:22:06 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 01:22:06 -0700 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <ee2a432c0604101344l25b8df8fv56bdbd061bb2e48b@mail.gmail.com> References: <5.1.1.6.0.20060410152606.03683c20@mail.telecommunity.com> <5.1.1.6.0.20060410162712.03685fb0@mail.telecommunity.com> <ee2a432c0604101344l25b8df8fv56bdbd061bb2e48b@mail.gmail.com> Message-ID: <ee2a432c0604110122l1ab2febbi6fb1ec6a4fffc359@mail.gmail.com> On 4/10/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > On 4/10/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > > > It appears the problem is an object/mem mismatch: both PyOS_Readline in > > pgenmain.c, and PyOS_StdioReadline use PyObject_MALLOC, but bltinmodule.c > > is freeing the pointer with PyMem_FREE. > > This (Readline using PyObject) was due to my recent change to fix > Guido's problem last night. > I didn't realize anything seeped out. All calls to PyOS_StdioReadline > would need to be updated. I can do that tonight. Hmm, that means this > will be an API change. I wonder if I should revert my fix and just > deal with a second alloc and copy of the data to fix Guido's original > problem. I partially reverted my fix from last night. It appears to work for both Guido's original problem and Phillip's subsequent problem. YMMV. 
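One rough aid for spotting this kind of object/mem mismatch is to tally where each allocator family shows up in the C sources -- an ad-hoc sketch, not anything that exists in the tree:

    import os
    import re

    # Count PyMem_* / PyObject_* alloc and free calls per file so that
    # files mixing the two families stand out for manual review.
    # Heuristic only -- it doesn't actually pair allocs with frees.
    CALL = re.compile(r'\b(PyMem_|PyObject_)'
                      r'(MALLOC|Malloc|REALLOC|Realloc|FREE|Free|Del|DEL)\b')

    def tally(root):
        counts = {}
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                if not (name.endswith('.c') or name.endswith('.h')):
                    continue
                path = os.path.join(dirpath, name)
                for family, func in CALL.findall(open(path).read()):
                    key = (path, family + func)
                    counts[key] = counts.get(key, 0) + 1
        return counts

    if __name__ == '__main__':
        # Run from the top of a checkout.
        for (path, call), count in sorted(tally('.').items()):
            print path, call, count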
It would be great if someone could review all the PyMem_* and PyObject_* allocs and frees to ensure consistency. I wonder if the code would be clearer if the encoding was changed back to using PyObject_*. Then there would only be a few clear cases for using PyMem_* in Parser/. n From martin at v.loewis.de Tue Apr 11 13:50:30 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 11 Apr 2006 13:50:30 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely Message-ID: <443B9806.4010707@v.loewis.de> Several places in the documentation claim that Py_Finalize will release all memory: http://docs.python.org/api/embedding.html says that literally, and http://docs.python.org/api/initialization.html#l2h-778 suggests it is a bug when memory is not freed in Py_Finalize. This has left people to believe that this is a bug: https://sourceforge.net/tracker/index.php?func=detail&aid=1445210&group_id=5470&atid=105470 However, I don't see any chance to make this promise even remotely. Objects allocated in extension modules, and held in global variables (e.g. socketmodule.c:socket_error, socket_herror, socket_gaierror, socket_timeout) will never be released, right? And because of the small objects allocator, their pool will remain allocated, right? And, then, the arena. So ISTM that invoking Py_Finalize after importing socket will yield atleast 256KiB garbage. Of course, that's not real garbage, because the next Py_Initialize'd interpreter will continue to allocate from the arenas. But still, the actual objects that the modules hold on to will not be reclaimed until the process terminates. Please correct me if I'm wrong. Regards, Martin From theller at python.net Tue Apr 11 18:36:49 2006 From: theller at python.net (Thomas Heller) Date: Tue, 11 Apr 2006 18:36:49 +0200 Subject: [Python-Dev] DRAFT: python-dev summary for 2006-03-01 to 2006-03-15 In-Reply-To: <d11dcfba0604101750v10a1627ev4e5a99702370ab82@mail.gmail.com> References: <d11dcfba0604101750v10a1627ev4e5a99702370ab82@mail.gmail.com> Message-ID: <443BDB21.3030207@python.net> Steven Bethard wrote: > Ok, if I summarize any more of python-dev, my brain's going to explode. ;-) > > Here's the summaries for the first half of March. Let me know what to fix! > > ------------------------------- > Maintaining ctypes in SVN trunk > ------------------------------- > > Thomas Heller put ctypes into the Python SVN repository, and with the > help of perky, Neal Norwitz and Thomas Wouters, updated it to take > advantage of the new ssize_t feature. Thomas Heller has now moved the > "official" ctypes development to the Python SVN. The last sentence is not correct, sorry if the discussion led you to this conclusion. At least for some time, the "official version" will stay in sourceforge CVS, because it is easier to test ctypes on the SF compile farm. Thanks for the summaries, Thomas From guido at python.org Tue Apr 11 19:56:50 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 11 Apr 2006 10:56:50 -0700 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <443B9806.4010707@v.loewis.de> References: <443B9806.4010707@v.loewis.de> Message-ID: <ca471dc20604111056u55fe5cbm334ba08afeaeb0a7@mail.gmail.com> I'm afraid it was wishful thinking on my part. The best we can try to hope for is to ensure that repeatedly calling Py_Initialize and Py_Finalize doesn't leak too much memory. --Guido On 4/11/06, "Martin v. 
L?wis" <martin at v.loewis.de> wrote: > Several places in the documentation claim that Py_Finalize will > release all memory: > > http://docs.python.org/api/embedding.html > > says that literally, and > > http://docs.python.org/api/initialization.html#l2h-778 > > suggests it is a bug when memory is not freed in Py_Finalize. > > This has left people to believe that this is a bug: > > https://sourceforge.net/tracker/index.php?func=detail&aid=1445210&group_id=5470&atid=105470 > > However, I don't see any chance to make this promise even remotely. > Objects allocated in extension modules, and held in global variables > (e.g. socketmodule.c:socket_error, socket_herror, socket_gaierror, > socket_timeout) will never be released, right? > > And because of the small objects allocator, their pool will remain > allocated, right? And, then, the arena. > > So ISTM that invoking Py_Finalize after importing socket will yield > atleast 256KiB garbage. Of course, that's not real garbage, because > the next Py_Initialize'd interpreter will continue to allocate from > the arenas. But still, the actual objects that the modules hold > on to will not be reclaimed until the process terminates. > > Please correct me if I'm wrong. > > Regards, > Martin > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Tue Apr 11 20:20:29 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 11 Apr 2006 20:20:29 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <ca471dc20604111056u55fe5cbm334ba08afeaeb0a7@mail.gmail.com> References: <443B9806.4010707@v.loewis.de> <ca471dc20604111056u55fe5cbm334ba08afeaeb0a7@mail.gmail.com> Message-ID: <443BF36D.6030601@v.loewis.de> Guido van Rossum wrote: > I'm afraid it was wishful thinking on my part. > > The best we can try to hope for is to ensure that repeatedly calling > Py_Initialize and Py_Finalize doesn't leak too much memory. Ok. Unless somebody still claims to the contrary, I will go ahead and weaken the promises in the documentation. Then it's not a bug anymore that it leaks :-) Regards, Martin From tim.peters at gmail.com Tue Apr 11 20:47:36 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 11 Apr 2006 14:47:36 -0400 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <443B9806.4010707@v.loewis.de> References: <443B9806.4010707@v.loewis.de> Message-ID: <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> [Martin v. L?wis] > Several places in the documentation claim that Py_Finalize will > release all memory: > > http://docs.python.org/api/embedding.html > > says that literally, It's wrong ;-). > and > > http://docs.python.org/api/initialization.html#l2h-778 > > suggests it is a bug when memory is not freed in Py_Finalize. > > This has left people to believe that this is a bug: > > <https://sourceforge.net/tracker/index.php?func=detail&aid=1445210&group_id=5470&atid=105470> Well, there may well be a bug (or multiple bugs) underlying that one too. It's one thing for Py_Finalize() not to release all memory (it doesn't and probably never will), but it's not necessarily the same thing if running Py_Initialize() ... 
Py_Finalize() repeatedly keeps leaking more and more memory. > However, I don't see any chance to make this promise even remotely. > Objects allocated in extension modules, and held in global variables > (e.g. socketmodule.c:socket_error, socket_herror, socket_gaierror, > socket_timeout) will never be released, right? Not unless the module has a finalization function called by Py_Finalize() that frees such things (like PyString_Fini and PyInt_Fini). Other globals allocated via a static PyObject *someglobal = NULL; ... if (someglobal == NULL) someglobal = allocate_an_object_somehow(); pattern shouldn't contribute to continuing leaks across Py_Initialize() ... Py_Finalize() loops. > And because of the small objects allocator, their pool will remain > allocated, right? And, then, the arena. Before Python 2.5, arenas are never freed, period. In Python 2.5, an arena will be freed if and only if it contains no allocated object by the time Py_Finalize ends. There may also be trash cycles that aren't collected during Py_Finalize because I had to comment out Py_Finalize's second call to PyGC_Collect(); new-style class objects are among the trash thingies leaked (although if Py_Initialize() is called again, it's possible that they'll get cleaned up by cyclic gc after all). See Misc/SpecialBuilds.txt, section Py_TRACE_REFS, entry PYTHONDUMPREFS, for a way to get a dump of all heap objects Py_Finalize leaves alive. I doubt that's been run in years; Guido and I used it in 2.3b1 to cure some unreasonably high finalization leakage at the time. > So ISTM that invoking Py_Finalize after importing socket will yield > atleast 256KiB garbage. Of course, that's not real garbage, because > the next Py_Initialize'd interpreter will continue to allocate from > the arenas. I'm not clear on whether, e.g., init_socket() may get called more than once if socket-slinging code appears in a Py_Initialize() ... Py_Finalize(). If it doesn't, then, e.g., the unconditional socket_gaierror = PyErr_NewException("socket.gaierror", ...); won't contribute to ongoing leaks. But if it does, then we'll systematically leak the exception object on each loop trip. > But still, the actual objects that the modules hold on to will not be reclaimed > until the process terminates. Without a _Fini() function called by Py_Finalize, that's correct. > Please correct me if I'm wrong. If you ever are, I will ;-) From pje at telecommunity.com Tue Apr 11 23:04:24 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 11 Apr 2006 17:04:24 -0400 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' Message-ID: <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> I just noticed that two stdlib modules (runpy and test.test_importhooks) contain reimplementations of the base PEP 302 algorithm, or loaders wrapping the standard (pre-302) import machinery. Meanwhile, the 'imp' module exports an undocumented IMP_HOOK constant (since Python 2.3), that is used internally to Python/import.c but never actually returned from the imp API. Further, the machinery called by imp.find_module() actually does the full PEP 302 search dance - but then skips any PEP 302 importers in the process, because the wrapper doesn't let it return a loader object. What I'd like to do is make the necessary modifications to import.c that would allow you to access the loaders found by the C version of find_module. 
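For reference, the wrapper that keeps getting reimplemented is essentially the following -- a simplified sketch, not the actual runpy, test_importhooks or setuptools code:

    import imp

    class _ImpLoader(object):
        # PEP 302 style loader built on the classic imp API.
        def __init__(self, file, filename, description):
            self.file = file
            self.filename = filename
            self.description = description
        def load_module(self, fullname):
            try:
                return imp.load_module(fullname, self.file,
                                       self.filename, self.description)
            finally:
                if self.file:
                    self.file.close()

    class _ImpImporter(object):
        # PEP 302 style finder; 'path' is the parent package's __path__,
        # or None for a top-level module.
        def find_module(self, fullname, path=None):
            try:
                found = imp.find_module(fullname.split('.')[-1], path)
            except ImportError:
                return None
            return _ImpLoader(*found)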
I propose to create a new API, 'imp.find_loader()' and have it return a PEP 302-compatible loader object, even for cases that would normally be handled via 'imp.load_module()'. In such cases, the loader returned would be an instance of one of a loader class similar to those in runpy, test_importhooks, and setuptools (which also has similar code). What I'm not sure of is where to put the loader class. It seems to me there should be a stdlib module, but it doesn't seem worth writing in C, especially with so many Python implementations floating around. I could create a new Python module for them, but we already have so many import-related modules floating around. Would it be reasonable to add them to 'pkgutil', which until now has contained only one function? This would help cut down on some of the code duplication, without adding yet another module to the stdlib. An additional issue: "pydoc" needs to be able to determine what submodules, if any, exist in a package, but the PEP 302 protocol does not provide for this ability. I'd like to add optional additional methods to the PEP 302 "importer" protocol (and to any stdlib importer objects) to support the type of filesystem-like queries performed by pydoc. This should allow pydoc to be compatible with zip files as well as regular files, and any future PEP 302 importers that provide the necessary features. Having these features would also let me cut some code out of setuptools' "pkg_resources" module, that adds some of these features using adapter registries. It may be too late for me to be able to implement all of this in time for alpha 2, but at minimum I think the 'find_loader()' addition and the move of the import wrapper classes to 'pkgutil' could be accomplished. Replacing pydoc's myriad stat(), listdir() and other path hacking with clean importer protocols might be out of reach, but I think it's worth a try. For one thing, it would mean that shipping the Python stdlib in a zipfile would not inhibit pydoc's ability to display help for stdlib modules and packages. (Currently, pydoc does not work correctly with packages that are in zipfiles or which are spread across multiple directories, and it is unable to "discover" modules that are in zipfiles.) From tdelaney at avaya.com Tue Apr 11 23:12:31 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Wed, 12 Apr 2006 07:12:31 +1000 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? Message-ID: <2773CAC687FD5F4689F526998C7E4E5F07436C@au3010avexu1.global.avaya.com> Neal Norwitz wrote: > I partially reverted my fix from last night. It appears to work for > both Guido's original problem and Phillip's subsequent problem. YMMV. > It would be great if someone could review all the PyMem_* and > PyObject_* allocs and frees to ensure consistency. Definitely seems to me that it would be worthwhile in debug mode adding a field specifying which memory allocator was used, and checking for mismatches in the deallocators. I know this has been suggested before, but with number of mismatches being found now it seems like it should be put into place. I'm sure it will cause buildbot to go red ... ;) I might see if I can work up a patch over the easter long weekend if no one beats me to it. What files should I be looking at (it would be my first C-level python patch)? 
Tim Delaney From ncoghlan at gmail.com Tue Apr 11 23:43:49 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 12 Apr 2006 07:43:49 +1000 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' In-Reply-To: <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> References: <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> Message-ID: <443C2315.5050903@gmail.com> Phillip J. Eby wrote: > I propose to create a new API, 'imp.find_loader()' and have it return a PEP > 302-compatible loader object, even for cases that would normally be handled > via 'imp.load_module()'. In such cases, the loader returned would be an > instance of one of a loader class similar to those in runpy, > test_importhooks, and setuptools (which also has similar code). runpy tries to retrieve "imp.get_loader()" before falling back to its own emulation - switching that to look for "find_loader()" instead would be fine (although I'd probably leave the emulation in place, since it allows the module to be used on 2.4 as well as 2.5). Also, 'find' more accurately reflects the search effort that goes on. > What I'm not sure of is where to put the loader class. It seems to me > there should be a stdlib module, but it doesn't seem worth writing in C, > especially with so many Python implementations floating around. > > I could create a new Python module for them, but we already have so many > import-related modules floating around. Would it be reasonable to add them > to 'pkgutil', which until now has contained only one function? This would > help cut down on some of the code duplication, without adding yet another > module to the stdlib. Would it be worth the effort to switch 'imp' to being a hybrid module? > An additional issue: "pydoc" needs to be able to determine what submodules, > if any, exist in a package, but the PEP 302 protocol does not provide for > this ability. I'd like to add optional additional methods to the PEP 302 > "importer" protocol (and to any stdlib importer objects) to support the > type of filesystem-like queries performed by pydoc. This should allow > pydoc to be compatible with zip files as well as regular files, and any > future PEP 302 importers that provide the necessary features. runpy needs a get_filename() method, so it knows what to set __file__ too - currently its emulation supports that, but it isn't officially part of the PEP 302 API. > Having these features would also let me cut some code out of setuptools' > "pkg_resources" module, that adds some of these features using adapter > registries. > > It may be too late for me to be able to implement all of this in time for > alpha 2, but at minimum I think the 'find_loader()' addition and the move > of the import wrapper classes to 'pkgutil' could be accomplished. > > Replacing pydoc's myriad stat(), listdir() and other path hacking with > clean importer protocols might be out of reach, but I think it's worth a > try. For one thing, it would mean that shipping the Python stdlib in a > zipfile would not inhibit pydoc's ability to display help for stdlib > modules and packages. (Currently, pydoc does not work correctly with > packages that are in zipfiles or which are spread across multiple > directories, and it is unable to "discover" modules that are in zipfiles.) FWIW, I've already had this experience between the original version of runpy, and the PEP 302 based version. 
As I recall, the "real" code in that module is now only a few dozen lines - everything else is there to work around the absence of imp.find_loader(). Prior to the rewrite, I don't think the module was that much shorter overall, and it only supported file system based single-directory packages. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From pje at telecommunity.com Wed Apr 12 00:12:32 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 11 Apr 2006 18:12:32 -0400 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' In-Reply-To: <443C2315.5050903@gmail.com> References: <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> At 07:43 AM 4/12/2006 +1000, Nick Coghlan wrote: >Phillip J. Eby wrote: > > I propose to create a new API, 'imp.find_loader()' and have it return a > PEP > > 302-compatible loader object, even for cases that would normally be > handled > > via 'imp.load_module()'. In such cases, the loader returned would be an > > instance of one of a loader class similar to those in runpy, > > test_importhooks, and setuptools (which also has similar code). > >runpy tries to retrieve "imp.get_loader()" before falling back to its own >emulation - switching that to look for "find_loader()" instead would be fine >(although I'd probably leave the emulation in place, since it allows the >module to be used on 2.4 as well as 2.5). I was proposing to move the emulation features (at least the classes) to pkgutil, or in the alternative, making them a published/documented/supported API, rather than private. >Would it be worth the effort to switch 'imp' to being a hybrid module? I have no idea; what's a hybrid module? >runpy needs a get_filename() method, so it knows what to set __file__ too - >currently its emulation supports that, but it isn't officially part of the >PEP >302 API. It sounds like maybe a new PEP is needed to document all the extensions to the importer/loader protocols. :( Interestingly, these are all more examples of where adaptation or generic functions would be handy to have in the stdlib. Duck-typing protocols based on attribute names are hard to extend and have to wait for new library releases for upgrades. :( From tim.peters at gmail.com Wed Apr 12 00:29:32 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 11 Apr 2006 18:29:32 -0400 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5F07436C@au3010avexu1.global.avaya.com> References: <2773CAC687FD5F4689F526998C7E4E5F07436C@au3010avexu1.global.avaya.com> Message-ID: <1f7befae0604111529q6227f5eava4934e378f62c198@mail.gmail.com> [Delaney, Timothy (Tim)] > Definitely seems to me that it would be worthwhile in debug mode adding > a field specifying which memory allocator was used, and checking for > mismatches in the deallocators. > > I know this has been suggested before, but with number of mismatches > being found now it seems like it should be put into place. I'm sure it > will cause buildbot to go red ... ;) > > I might see if I can work up a patch over the easter long weekend if no > one beats me to it. What files should I be looking at (it would be my > first C-level python patch)? 
A couple weeks back Adam DePrince said he was going to do this, although I haven't heard more about it. See that thread for hints about a workable approach: http://mail.python.org/pipermail/python-dev/2006-March/062848.html The bulk of the work would be in obmalloc.c. More-or-less excruciating #if'ery would also be needed in objimpl.h and pymem.h, to remap the build-type-independent API names to appropriate build-type-dependent concrete calls. For example, the current debug-build: #define PyMem_MALLOC PyObject_MALLOC would have to get messier, so that a call to PyMem_MALLOC could be distinguished from a call to PyObject_MALLOC to begin with. They'd probably both have to change, to call a common doesn't-yet-exist entry point with a "which flavor of malloc is this?" flag argument. From thomas at python.org Wed Apr 12 01:04:22 2006 From: thomas at python.org (Thomas Wouters) Date: Wed, 12 Apr 2006 01:04:22 +0200 Subject: [Python-Dev] PyObject_REPR() Message-ID: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> I hadn't noticed this macro defined in Include/object.h before: /* Helper for passing objects to printf and the like */ #define PyObject_REPR(obj) PyString_AS_STRING(PyObject_Repr(obj)) I may not be right, but I don't see how this can't help but not free the intermediate PyString object. It doesn't seem to be used, except for in situations where Python is not going to continue working much longer anyway (specifically, in compile.c and ceval.c.) I could not be right about *that*, too, though ;-) It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. It's wasn't added in 2.5 or 2.4, though, so it's not particularly new and I can't guarantee that it's not used in any third party code. But, then again, anyone using it isn't free of leaks. Should removing or renaming it be done in 2.5 or in Py3K? Triple-negative'ly y'rs, -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060412/dadee382/attachment.htm From raymond.hettinger at verizon.net Wed Apr 12 01:37:10 2006 From: raymond.hettinger at verizon.net (Raymond Hettinger) Date: Tue, 11 Apr 2006 16:37:10 -0700 Subject: [Python-Dev] PyObject_REPR() References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> Message-ID: <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> > I may not be right, but I don't see how this can't help but not free the > intermediate PyString object. Good catch. > It doesn't seem to be used, except for in situations where Python is not going > to continue working > much longer anyway (specifically, in compile.c and ceval.c.) Those internal uses ought to be replaced with better code. > It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. > It's wasn't added in 2.5 or 2.4, though, so it's not particularly new and I > can't guarantee > that it's not used in any third party code. But, then again, anyone using it > isn't free of leaks. > Should removing or renaming it be done in 2.5 or in Py3K? 
Since it is intrinsically buggy, I would support removal in Py2.5 Raymond From nnorwitz at gmail.com Wed Apr 12 03:03:59 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 18:03:59 -0700 Subject: [Python-Dev] PyObject_REPR() In-Reply-To: <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> Message-ID: <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> On 4/11/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote: > > > It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. > > Should removing or renaming it be done in 2.5 or in Py3K? > > Since it is intrinsically buggy, I would support removal in Py2.5 +1 on removal. Google only turned up a handleful of uses that I saw. n From jeremy at alum.mit.edu Wed Apr 12 05:05:17 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 11 Apr 2006 23:05:17 -0400 Subject: [Python-Dev] PyObject_REPR() In-Reply-To: <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> Message-ID: <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> It's intended as an internal debugging API. I find it very convenient for adding with a printf() here and there, which is how it got added long ago. It should really have a comment mentioning that it leaks the repr object, and starting with an _ wouldn't be bad either. Jeremy On 4/11/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > On 4/11/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote: > > > > > It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. > > > Should removing or renaming it be done in 2.5 or in Py3K? > > > > Since it is intrinsically buggy, I would support removal in Py2.5 > > +1 on removal. Google only turned up a handleful of uses that I saw. > > n > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu > From anthony at interlink.com.au Wed Apr 12 06:53:13 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 12 Apr 2006 14:53:13 +1000 Subject: [Python-Dev] building with C++ Message-ID: <200604121453.17443.anthony@interlink.com.au> I've done a lot of the work to get Python to build with g++ - this was suggested a while ago as a useful thing to take advantage of C++'s more stringent type checking. There's a few more little bits and pieces to be done, and it looks like a bunch of extern "C" {}s need to be sprinkled through the code as a form of magic pixie dust. I can't spend much more time on this now, but if someone wants to take over and finish it off, that'd be great. Note that I have only addressed errors, not warnings - there's a bunch of odd looking warnings around the place, such as the code in PyLong_AsUnsignedLongLong that returns a -1 despite the return value being declared as unsigned... Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. 
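For whoever picks this up: the extern "C" pixie dust is just the usual linkage guard around declarations that C++ would otherwise mangle. A generic sketch (the PyFoo_* names are made up; assumes Python.h is already included):

    #ifdef __cplusplus
    extern "C" {
    #endif

    /* declarations that must keep C linkage when seen by a C++ compiler */
    PyObject *PyFoo_New(void);
    int PyFoo_Check(PyObject *op);

    #ifdef __cplusplus
    }
    #endif

Headers that already have the guard just need any new declarations kept inside it.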
From tim.peters at gmail.com Wed Apr 12 07:16:04 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 01:16:04 -0400 Subject: [Python-Dev] Preserving the blamelist Message-ID: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> All the trunk buildbots started failing about 5 hours ago, in test_parser. There have been enough checkins since then that the boundary between passing and failing is about to scroll off forever. So, here's the blamelist before it vanishes: """ 1. Changed by: thomas.wouters Changed at: Wed 12 Apr 2006 00:06:36 Branch: trunk Revision: 45285 Changed files: * trunk/Grammar/Grammar Comments: Fix SF bug #1466641: multiple adjacent 'if's in listcomps and genexps, as in [x for x in it if x if x], were broken for no good reason by the PEP 308 patch. 2. Changed by: thomas.wouters Changed at: Wed 12 Apr 2006 00:08:00 Branch: trunk Revision: 45286 Changed files: * trunk/Lib/test/test_grammar.py * trunk/Python/graminit.c Comments: Part two of the fix for SF bug #1466641: Regenerate graminit.c and add test for the bogus failure. """ I'd whine about not checking buildbot health after a code change, except that it's much more tempting to point out that Thomas couldn't have run the test suite successfully on his own box in this case :-) While I'm spreading guilt, a bunch of filepath-related tests started failing Monday on the "g4 osx.4 trunk" buildbot, but the blamelist for that one is long gone. From nnorwitz at gmail.com Wed Apr 12 07:29:51 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 22:29:51 -0700 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> Message-ID: <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> On 4/11/06, Tim Peters <tim.peters at gmail.com> wrote: > > I'd whine about not checking buildbot health after a code change, > except that it's much more tempting to point out that Thomas couldn't > have run the test suite successfully on his own box in this case :-) Tsk, tsk, Thomas. Should be fixed now. When the grammar changes, Modules/parsermodule.c also needs to be updated. > While I'm spreading guilt, a bunch of filepath-related tests started > failing Monday on the "g4 osx.4 trunk" buildbot, but the blamelist for > that one is long gone. I didn't realize that. Once the tests are passing again, I can try to take a look. I'm concerned about the negative ref leak in test_contextlib. I wonder if this was a result of PJE's fix for generators? n From nnorwitz at gmail.com Wed Apr 12 07:39:04 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 22:39:04 -0700 Subject: [Python-Dev] PyObject_REPR() In-Reply-To: <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> Message-ID: <ee2a432c0604112239o21377e17v4a282618f61e42a1@mail.gmail.com> Ok, then how about prefixing with _, adding a comment saying in big, bold letters: FOR DEBUGGING PURPOSES ONLY, THIS LEAKS, and only defining in a debug build? n -- On 4/11/06, Jeremy Hylton <jeremy at alum.mit.edu> wrote: > It's intended as an internal debugging API. 
I find it very convenient > for adding with a printf() here and there, which is how it got added > long ago. It should really have a comment mentioning that it leaks > the repr object, and starting with an _ wouldn't be bad either. > > Jeremy > > On 4/11/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > > On 4/11/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote: > > > > > > > It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. > > > > Should removing or renaming it be done in 2.5 or in Py3K? > > > > > > Since it is intrinsically buggy, I would support removal in Py2.5 > > > > +1 on removal. Google only turned up a handleful of uses that I saw. > > > > n > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu > > > From tim.peters at gmail.com Wed Apr 12 08:15:26 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 02:15:26 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> Message-ID: <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> [Neal Norwitz] > ... > I'm concerned about the negative ref leak in test_contextlib. I > wonder if this was a result of PJE's fix for generators? I don't know, but if you do while 1: test_contextlib.test_main() gc.collect() under a debug build it eventually (> 500 iterations) crashes with Fatal Python error: deallocating None That's a pretty good clue ;-) The Py_XDECREF(gen->gi_frame); in gen_dealloc() is on the call stack at the time, and cyclic gc is active. From nnorwitz at gmail.com Wed Apr 12 08:30:07 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 23:30:07 -0700 Subject: [Python-Dev] IMPORTANT 2.5 API changes for C Extension Modules and Embedders Message-ID: <ee2a432c0604112330i7899dc86o3456d7dfcc2c0f71@mail.gmail.com> If you don't write or otherwise maintain Python Extension Modules written in C (or C++) or embed Python in your application, you can stop reading. Python 2.5 alpha 1 was released April 5, 2006. The second alpha should be released in a few weeks. There are several changes which can cause C extension modules or embedded applications to crash the interpreter if not fixed. Periodically, I will send out these reminders with updated information until 2.5 is released. * support for 64-bit sequences (eg, > 2GB strings) * memory allocation modifications 64-bit changes -------------- There are important changes that are in 2.5 to support 64-bit systems. The 64-bit changes can cause Python to crash if your module is not upgraded to support the changes. Python was changed internally to use 64-bit values on 64-bit machines for indices. If you've got a machine with more than 16 GB of RAM, it would be great if you can test Python with large (> 2GB) strings and other sequences. For more details about the Python 2.5 schedule: http://www.python.org/dev/peps/pep-0356/ For more details about the 64-bit change: http://www.python.org/dev/peps/pep-0353/ How to fix your module: http://www.python.org/dev/peps/pep-0353/#conversion-guidelines The effbot wrote a program to check your code and find potential problems with the 64-bit APIs. 
http://svn.effbot.python-hosting.com/stuff/sandbox/python/ssizecheck.py Memory Allocation Modifications ------------------------------- In previous versions of Python, it was possible to use different families of APIs (PyMem_* vs. PyObject_*) to allocate and free the same block of memory. APIs in these families include: PyMem_*: PyMem_Malloc, PyMem_Realloc, PyMem_Free, PyObject_*: PyObject_Malloc, PyObject_Realloc, PyObject_Free There are a few other APIs with similar names and also the macro variants. In 2.5, if allocate a block of memory with one family, you must reallocate or free with the same family. That means: If you allocate with PyMem_Malloc (or MALLOC), you must reallocate with PyMem_Realloc (or REALLOC) and free with PyMem_Free (or FREE). If you allocate with PyObject_Malloc (or MALLOC), you must reallocate with PyObject_Realloc (or REALLOC) and free with PyObject_Free (or FREE). Using inconsistent APIs can cause double frees or otherwise crash the interpreter. It is fine to mix and match functions or macros within the same family. Please test and upgrade your extension modules! Cheers, n From tim.peters at gmail.com Wed Apr 12 09:00:11 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 03:00:11 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> Message-ID: <1f7befae0604120000g117126e7jbbc42cbe7f39bb88@mail.gmail.com> [Neal] >> ... >> I'm concerned about the negative ref leak in test_contextlib. I >> wonder if this was a result of PJE's fix for generators? [Tim] > I don't know, but if you do > > while 1: > test_contextlib.test_main() > gc.collect() > > under a debug build it eventually (> 500 iterations) crashes with > > Fatal Python error: deallocating None > ... OK, I fixed that (incorrect decref in gen_throw()). Neal, that should also repair test_contextlib's ref leak oddities. The reason you didn't see this before is that test_contextlib didn't actually run any tests until recently. Phillip, when eyeballing gen_dealloc(), I didn't understand two things: 1. Why doesn't if (gen->gi_frame->f_stacktop!=NULL) { check first to be sure that gen->gi_frame != Py_None? Is that impossible here for some reason? 2. It _looks_ like "gi_frame != NULL" is an (undocumented) invariant. Right? If so, Py_XDECREF(gen->gi_frame); sends a confusing message (because of the "X", implying that NULL is OK). Regardless, it would be good to add comments to genobject.h explaining the possible values gi_frame can hold. For example, what does it mean when gi_frame is Py_None? Can it ever be NULL? It's very hard to reverse- engineer invariants and "special value" intents from staring at code. 
Not to say that isn't fun ;-) From martin at v.loewis.de Wed Apr 12 09:38:52 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 12 Apr 2006 09:38:52 +0200 Subject: [Python-Dev] PyObject_REPR() In-Reply-To: <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> Message-ID: <443CAE8C.4050908@v.loewis.de> Jeremy Hylton wrote: > It's intended as an internal debugging API. I find it very convenient > for adding with a printf() here and there, which is how it got added > long ago. It should really have a comment mentioning that it leaks > the repr object, and starting with an _ wouldn't be bad either. I'm still in favour of removing it. If you need it in debugging code, writing PyObject_AsString(PyObject_Repr(o)) doesn't seem too hard, IMO. If you write it so often that it hurts, you can put it at the top of your file (and perhaps also give it a shorter name, such as P(o)). Nobody but you seemed to know about its existence, and, given that it is flawed, neither its definition nor its use belong into Python (IMO, of course). Regards, Martin From nnorwitz at gmail.com Wed Apr 12 09:54:44 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 00:54:44 -0700 Subject: [Python-Dev] Checking assigned bugs/patches Message-ID: <ee2a432c0604120054u3a2bbaf0i6bbee45259bfc99f@mail.gmail.com> Can all developers try to logon to SF and take a look at the bugs/patches assigned to you? Sometimes we were assigned items and we may not have been notified (or forgot that we have outstanding items). This link should take you to your page: http://sourceforge.net/my/tracker.php If you aren't going to work on the items assigned to you, try to find someone else to work on it or at least unassign the item. Now would be a great time for volunteers to try to help fix some bugs as we try to stabilize 2.5. I suppose now is the time to admit I've got 12 bugs and 7 patches assigned to me. :-( Thanks, n From martin at v.loewis.de Wed Apr 12 09:55:31 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 12 Apr 2006 09:55:31 +0200 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> Message-ID: <443CB273.5010500@v.loewis.de> Tim Peters wrote: > All the trunk buildbots started failing about 5 hours ago, in > test_parser. There have been enough checkins since then that the > boundary between passing and failing is about to scroll off forever. It's not lost, though; it's just not displayed anymore. It would be possible to lengthen the waterfall from 12 hours to a larger period of time. > While I'm spreading guilt, a bunch of filepath-related tests started > failing Monday on the "g4 osx.4 trunk" buildbot, but the blamelist for > that one is long gone. You can go back in time yourself, by editing the build number in http://www.python.org/dev/buildbot/trunk/g4%20osx.4%20trunk/builds/361 The builds from Monday are around 330. The first failure of test_filecmp seems to have been in build 341, which has Georg (45259) and Anthony (45261) on the blame list. 
The failures might have nothing to with the changes, though: It appears that some files are still left from earlier tests, and now the setUp code fails because /tmp/dir already exists... Regards, Martin From nnorwitz at gmail.com Wed Apr 12 10:11:04 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 01:11:04 -0700 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <443CB273.5010500@v.loewis.de> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <443CB273.5010500@v.loewis.de> Message-ID: <ee2a432c0604120111m621ec1f8tcbe75ab46a09d01@mail.gmail.com> On 4/12/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > The failures might have nothing to with the changes, though: It appears > that some files are still left from earlier tests, and now the setUp > code fails because /tmp/dir already exists... That was my guess. I went in and cleaned up all the stray files I could find. I was able to run the tests manually and they passed. I just forced a build and we'll see if they all pass this time. At least some people will, I'm going to sleep. n From ncoghlan at gmail.com Wed Apr 12 12:39:46 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 12 Apr 2006 20:39:46 +1000 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' In-Reply-To: <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> References: <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> Message-ID: <443CD8F2.6080100@gmail.com> Phillip J. Eby wrote: > At 07:43 AM 4/12/2006 +1000, Nick Coghlan wrote: >> Phillip J. Eby wrote: >> > I propose to create a new API, 'imp.find_loader()' and have it >> return a PEP >> > 302-compatible loader object, even for cases that would normally be >> handled >> > via 'imp.load_module()'. In such cases, the loader returned would >> be an >> > instance of one of a loader class similar to those in runpy, >> > test_importhooks, and setuptools (which also has similar code). >> >> runpy tries to retrieve "imp.get_loader()" before falling back to its own >> emulation - switching that to look for "find_loader()" instead would >> be fine >> (although I'd probably leave the emulation in place, since it allows the >> module to be used on 2.4 as well as 2.5). > > I was proposing to move the emulation features (at least the classes) to > pkgutil, or in the alternative, making them a > published/documented/supported API, rather than private. Either works for me - I managed to momentarily forget that you're planning to write this in Python. >> Would it be worth the effort to switch 'imp' to being a hybrid module? > > I have no idea; what's a hybrid module? In this particular case, it would be a Python module "imp.py" that included the line "from _imp import *" (where _imp is the current C-only module with a different name). The Python module could then provide access to the bits which don't need to be blazingly fast and would be painful to write in C. It's proved to be a convenient approach in a few other places. >> runpy needs a get_filename() method, so it knows what to set __file__ >> too - >> currently its emulation supports that, but it isn't officially part of >> the PEP >> 302 API. > > It sounds like maybe a new PEP is needed to document all the extensions > to the importer/loader protocols. 
:( > > Interestingly, these are all more examples of where adaptation or > generic functions would be handy to have in the stdlib. Duck-typing > protocols based on attribute names are hard to extend and have to wait > for new library releases for upgrades. :( Yep. For example, one limitation of runpy.py is that it doesn't get the __file__ attribute right for module's inside a zip package - the zipimporter objects don't expose the information that run_module() needs in order to set the value correctly. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From jason.orendorff at gmail.com Wed Apr 12 15:42:29 2006 From: jason.orendorff at gmail.com (Jason Orendorff) Date: Wed, 12 Apr 2006 09:42:29 -0400 Subject: [Python-Dev] String initialization (was: The "i" string-prefix: I18n'ed strings) Message-ID: <bb8868b90604120642h3178e66y67eb84ba27cd3123@mail.gmail.com> A compiler hook on string initialization, eh? I have a distantly related story--this isn't important, just another random Python use case for the file. (The i"xyzzy" proposal wouldn't help this case.) In scons, your SConscripts (makefiles, essentially) are Python source code. You typically have SConscripts throughout your source tree. Any SConscript could have something like this: sort_exe = Program('sort', ['main.c', 'timsort.c']) The problem is dealing with relative filenames. The only sane way to resolve "main.c" to an abspath is relative to the source file that physically contains that string literal token.[1] But that's impossible to determine at run time. scons uses some cleverness to guess the directory. It's always right, except when it's wrong. Maddening. So, what does this have to do with string initialization hooks? If scons could "decorate" string constants as part of SConscript compilation/execution, this problem could actually be solved. -j [1] Well, this is my opinion, but it's the right one. From skip at pobox.com Wed Apr 12 17:02:48 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 12 Apr 2006 10:02:48 -0500 Subject: [Python-Dev] building with C++ In-Reply-To: <200604121453.17443.anthony@interlink.com.au> References: <200604121453.17443.anthony@interlink.com.au> Message-ID: <17469.5784.930427.878222@montanaro.dyndns.org> Anthony> I've done a lot of the work to get Python to build with g++ - Anthony> ... I can't spend much more time on this now, but if someone Anthony> wants to take over and finish it off, that'd be great. Is this on a branch or available as a patch somewhere? Skip From martin at v.loewis.de Wed Apr 12 18:38:17 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 12 Apr 2006 18:38:17 +0200 Subject: [Python-Dev] building with C++ In-Reply-To: <17469.5784.930427.878222@montanaro.dyndns.org> References: <200604121453.17443.anthony@interlink.com.au> <17469.5784.930427.878222@montanaro.dyndns.org> Message-ID: <443D2CF9.2020400@v.loewis.de> skip at pobox.com wrote: > Anthony> I've done a lot of the work to get Python to build with g++ - > Anthony> ... I can't spend much more time on this now, but if someone > Anthony> wants to take over and finish it off, that'd be great. > > Is this on a branch or available as a patch somewhere? It's the trunk. Regards, Martin From pje at telecommunity.com Wed Apr 12 18:49:35 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Wed, 12 Apr 2006 12:49:35 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604120000g117126e7jbbc42cbe7f39bb88@mail.gmail.co m> References: <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> Message-ID: <5.1.1.6.0.20060412123802.021d0008@mail.telecommunity.com> At 03:00 AM 4/12/2006 -0400, Tim Peters wrote: >Phillip, when eyeballing gen_dealloc(), I didn't understand two things: > >1. Why doesn't > > if (gen->gi_frame->f_stacktop!=NULL) { > > check first to be sure that gen->gi_frame != Py_None? Apparently, it's because I'm an idiot, and because nobody else realized this during the initial review of the patch. :) >Is that impossible here for some reason? No, I goofed. It's amazing that this doesn't dump core whenever a generator exits. :( >2. It _looks_ like "gi_frame != NULL" is an (undocumented) invariant. >Right? If so, > > Py_XDECREF(gen->gi_frame); > > sends a confusing message (because of the "X", implying that NULL is OK). > Regardless, it would be good to add comments to genobject.h explaining > the possible values gi_frame can hold. For example, what does it mean > when gi_frame is Py_None? Can it ever be NULL? I think what happened is that at one point I thought I was going to set gi_frame=NULL when there's no active frame, in order to speed up reclamation of the frame. However, I think I then thought that it would break the operation of the generator's 'gi_frame' attribute, which is defined as T_OBJECT. Or else I thought that the tp_visit was screwed up in that case. So, it looks to me like what this *should* do is simply allow gi_frame to be NULL instead of Py_None, which would get rid of all the silly casting, and retroactively make the XDECREF correct. :) Does gen_traverse() need to do anything special to visit a null pointer? Should it only conditionally visit it? From pje at telecommunity.com Wed Apr 12 18:58:29 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 12 Apr 2006 12:58:29 -0400 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' In-Reply-To: <443CD8F2.6080100@gmail.com> References: <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060412125727.020d0078@mail.telecommunity.com> At 08:39 PM 4/12/2006 +1000, Nick Coghlan wrote: >In this particular case, it would be a Python module "imp.py" that >included the line "from _imp import *" (where _imp is the current C-only >module with a different name). I don't think that's going to work, since importing won't work until _imp itself is imported... :) From arigo at tunes.org Wed Apr 12 19:04:01 2006 From: arigo at tunes.org (Armin Rigo) Date: Wed, 12 Apr 2006 19:04:01 +0200 Subject: [Python-Dev] refleaks in 2.4 In-Reply-To: <ee2a432c0603262339s1686123jd06e6470b8551057@mail.gmail.com> References: <ee2a432c0603262339s1686123jd06e6470b8551057@mail.gmail.com> Message-ID: <20060412170401.GA31576@code0.codespeak.net> Hi all, On Sun, Mar 26, 2006 at 11:39:50PM -0800, Neal Norwitz wrote: > There are 5 tests that leak references that are present in 2.4.3c1, > but not on HEAD. It would be great if someone can diagnose these and > suggest a fix. 
> > test_doctest leaked [1, 1, 1] references > test_pkg leaked [10, 10, 10] references > test_pkgimport leaked [2, 2, 2] references > test_traceback leaked [11, 11, 11] references > test_unicode leaked [7, 7, 7] references > > test_traceback leaks due to test_bug737473. A follow-up on this: all the tests apart from test_traceback are due to the dummy object in dictionaries. I modified the code to ignore exactly these references and all the 4 other tests no longer leak. I'm about to check in this nice time saver :-) For information, the 2.5 HEAD now reports the following remaining leaks: test_generators leaked [1, 1, 1, 1] references test_threadedtempfile leaked [-85, 85, -85, 85] references test_threading_local leaked [34, 40, 26, 28] references test_urllib2 leaked [-66, 143, -77, -66] references A bientot, Armin From tim.peters at gmail.com Wed Apr 12 19:17:12 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 13:17:12 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <20060412092434.GA4974@xs4all.nl> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <20060412092434.GA4974@xs4all.nl> Message-ID: <1f7befae0604121017k63035194g93cd66063d89a7a9@mail.gmail.com> [Tim ] >> I'd whine about not checking buildbot health after a code change, >> except that it's much more tempting to point out that Thomas couldn't >> have run the test suite successfully on his own box in this case :-) [Thomas] > Not with -uall, yes. Without -uall it ran fine (by chance, I admit.) That shouldn't be: test_parser doesn't require (or check for) any resources. test_tokenize is the one that runs a random sample in the absence of `-u compiler`, and that one was passing even with -uall. > I didn't feel like running the rather lengthy -uall test on my box right > before bedtime; I won't be skipping that again anytime soon ;) It sounds like you're letting your life interfere with Python development. Stop that ;-) From tim.peters at gmail.com Wed Apr 12 19:43:43 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 13:43:43 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <5.1.1.6.0.20060412123802.021d0008@mail.telecommunity.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> <5.1.1.6.0.20060412123802.021d0008@mail.telecommunity.com> Message-ID: <1f7befae0604121043x7396548ehfde3c6064bcc028b@mail.gmail.com> [Tim] >> Phillip, when eyeballing gen_dealloc(), I didn't understand two things: >> >> 1. Why doesn't >> >> if (gen->gi_frame->f_stacktop!=NULL) { >> >> check first to be sure that gen->gi_frame != Py_None? [Phillip] > Apparently, it's because I'm an idiot, and because nobody else realized > this during the initial review of the patch. :) Then you were the bold adventurer, and the reviewers were the idiots ;-) >> Is that impossible here for some reason? > No, I goofed. It's amazing that this doesn't dump core whenever a > generator exits. :( Well, it's extremely likely that &((PyGenObject *)Py_None)->f_stacktop is a legit part of the process address space, so that dereferencing is non-problematic. We just read up nonsense then, test against it, and the conditional gen->ob_type->tp_del(self); is harmless as gen_del() returns at once (because it does check for Py_None right away -- we never try to treat gi_frame->f_stacktop _as_ a frame pointer here, we just check it against NULL). 
If we have to have insane code, harmlessly insane is the best kind :--) >> 2. It _looks_ like "gi_frame != NULL" is an (undocumented) invariant. >> Right? If so, >> >> Py_XDECREF(gen->gi_frame); >> >> sends a confusing message (because of the "X", implying that NULL is OK). >> Regardless, it would be good to add comments to genobject.h explaining >> the possible values gi_frame can hold. For example, what does it mean >> when gi_frame is Py_None? Can it ever be NULL? > I think what happened is that at one point I thought I was going to set > gi_frame=NULL when there's no active frame, in order to speed up > reclamation of the frame. However, I think I then thought that it would > break the operation of the generator's 'gi_frame' attribute, which is > defined as T_OBJECT. That shouldn't be a problem: when PyMember_GetOne() fetches a T_OBJECT that's NULL, it returns (a properly incref'ed) Py_None instead. > Or else I thought that the tp_visit was screwed up in that case. > > So, it looks to me like what this *should* do is simply allow gi_frame to > be NULL instead of Py_None, which would get rid of all the silly casting, > and retroactively make the XDECREF correct. :) That indeed sounds better to me, although you still don't get out of writing gi_frame comments for genobject.h :-) > Does gen_traverse() need to do anything special to visit a null > pointer? Should it only conditionally visit it? The body of gen_traverse() is best written: Py_VISIT(gen->gi_frame); return 0; Py_VISIT is defined in objimpl.h, and takes care of NULLs and exceptional returns. BTW, someone looking for an easy task might enjoy rewriting other tp_traverse slots to use Py_VISIT. We even have cases now (like super_traverse) where modules define their own workalike traverse-visit macros, which has become confusing since a standard macro was defined for this purpose. From tinuviel at sparcs.kaist.ac.kr Wed Apr 12 19:23:14 2006 From: tinuviel at sparcs.kaist.ac.kr (Seo Sanghyeon) Date: Thu, 13 Apr 2006 02:23:14 +0900 Subject: [Python-Dev] Request for review Message-ID: <20060412172314.GA17825@sparcs.kaist.ac.kr> Can someone have a look at #860326? I got bitten by it today, and I can see no reason not to apply suggested patch. Thanks! Seo Sanghyeon From amk at amk.ca Wed Apr 12 20:09:00 2006 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 12 Apr 2006 14:09:00 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604121043x7396548ehfde3c6064bcc028b@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> <5.1.1.6.0.20060412123802.021d0008@mail.telecommunity.com> <1f7befae0604121043x7396548ehfde3c6064bcc028b@mail.gmail.com> Message-ID: <20060412180900.GB12958@localhost.localdomain> On Wed, Apr 12, 2006 at 01:43:43PM -0400, Tim Peters wrote: > BTW, someone looking for an easy task might enjoy rewriting other > tp_traverse slots to use Py_VISIT. We even have cases now (like > super_traverse) where modules define their own workalike > traverse-visit macros, which has become confusing since a standard > macro was defined for this purpose. Do we need a list of CPython cleanup projects? (Compare to the Linux kernel janitors: <http://janitor.kernelnewbies.org/TODO>.) 
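For anyone tempted by that easy task, the before and after for a typical slot look like this (ThingObject and its payload member are invented for illustration):

    /* Hand-rolled visiting, as older tp_traverse slots tend to do it: */
    static int
    thing_traverse_by_hand(ThingObject *self, visitproc visit, void *arg)
    {
        if (self->payload) {
            int err = visit((PyObject *)self->payload, arg);
            if (err)
                return err;
        }
        return 0;
    }

    /* The same slot using the standard macro from objimpl.h.  Py_VISIT
       expects the arguments to be named 'visit' and 'arg', checks for
       NULL itself, and returns early on a nonzero visit result: */
    static int
    thing_traverse(ThingObject *self, visitproc visit, void *arg)
    {
        Py_VISIT(self->payload);
        return 0;
    }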
--amk From nnorwitz at gmail.com Wed Apr 12 20:36:26 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 11:36:26 -0700 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <20060412180900.GB12958@localhost.localdomain> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <ee2a432c0604112229s5d313ffbsf83c29ea738cb871@mail.gmail.com> <1f7befae0604112315r39cc066du9f5967157f05b78@mail.gmail.com> <5.1.1.6.0.20060412123802.021d0008@mail.telecommunity.com> <1f7befae0604121043x7396548ehfde3c6064bcc028b@mail.gmail.com> <20060412180900.GB12958@localhost.localdomain> Message-ID: <ee2a432c0604121136h23206917iecc24fe7a00e6123@mail.gmail.com> On 4/12/06, A.M. Kuchling <amk at amk.ca> wrote: > On Wed, Apr 12, 2006 at 01:43:43PM -0400, Tim Peters wrote: > > BTW, someone looking for an easy task might enjoy rewriting other > > tp_traverse slots to use Py_VISIT. We even have cases now (like > > super_traverse) where modules define their own workalike > > traverse-visit macros, which has become confusing since a standard > > macro was defined for this purpose. > > Do we need a list of CPython cleanup projects? > (Compare to the Linux kernel janitors: > <http://janitor.kernelnewbies.org/TODO>.) +1. Could add a page to the wiki and link to it from the dev faq. n From jimjjewett at gmail.com Wed Apr 12 20:47:37 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Wed, 12 Apr 2006 14:47:37 -0400 Subject: [Python-Dev] cleanup list Message-ID: <fb6fbf560604121147o9acffbbh8d45ffad7e46735a@mail.gmail.com> On Wed, Apr 12, 2006 at 01:43:43PM -0400, Tim Peters wrote: > BTW, someone looking for an easy task might enjoy rewriting other > tp_traverse slots to use Py_VISIT. We even have cases now (like > super_traverse) where modules define their own workalike > traverse-visit macros, which has become confusing since a standard > macro was defined for this purpose. A.M. Kuchling amk asked: > Do we need a list of CPython cleanup projects? > (Compare to the Linux kernel janitors: > <http://janitor.kernelnewbies.org/TODO>.) It would probably be useful, if there were someone ready to verify and commit the cleanups. A similar list for python modules might be even more useful; I doubt I'm the only one often without a working C setup. (Common on windows, I've seen it even on unix boxes that did have python preinstalled.) -jJ From greg at electricrain.com Wed Apr 12 21:10:18 2006 From: greg at electricrain.com (Gregory P. Smith) Date: Wed, 12 Apr 2006 12:10:18 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <20060412191018.GM10641@zot.electricrain.com> On Sat, Apr 08, 2006 at 02:47:28PM -0700, Brett Cannon wrote: > OK, I am going to write the PEP I proposed a week or so ago, listing > all modules and packages within the stdlib that are maintained > externally so we have a central place to go for contact info or where > to report bugs on issues. This should only apply to modules that want > bugs reported outside of the Python tracker and have a separate dev > track. People who just use the Python repository as their mainline > version can just be left out. > > For each package I need the name of the module, the name of the > maintainer, homepage of the module outside of Python, and where to > report bugs. 
Do people think we need to document the version that has > been imported into Python and when that was done as well? > > Anyway, here is a list of the packages that I think have outside > maintenance (or at least have been at some point). Anyone who has > info on them that I need, please let me know the details. Also, if I > missed any, obviously speak up: > > - bsddb (still external?) the python repository code is the mainline version. http://pybsddb.sf.net/ feel free to list me as the maintainer, but i'd prefer pybsddb-users at lists.sf.net be listed as the contact. the pybsddb project exists only to supply a more up to date module to people on older pythons or to supply new versions of BerkeleyDB to people on current python. bug reports sometimes do go to the pybsddb bug+patch tracker and mailing list where I eventually take care of them. i'd be happier with them going to the python tracker (some have) but haven't attempted to make that the only way. using the python tracker means that i'm not on the critical path to look at a bsddb bug report. From g.brandl at gmx.net Wed Apr 12 23:14:41 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 12 Apr 2006 23:14:41 +0200 Subject: [Python-Dev] Request for review In-Reply-To: <20060412172314.GA17825@sparcs.kaist.ac.kr> References: <20060412172314.GA17825@sparcs.kaist.ac.kr> Message-ID: <e1jqk1$378$1@sea.gmane.org> Seo Sanghyeon wrote: > Can someone have a look at #860326? I got bitten by it today, and I can > see no reason not to apply suggested patch. I've reviewed it and checked it in. Cheers, Georg From g.brandl at gmx.net Wed Apr 12 23:56:25 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 12 Apr 2006 23:56:25 +0200 Subject: [Python-Dev] Request for review In-Reply-To: <e1jqk1$378$1@sea.gmane.org> References: <20060412172314.GA17825@sparcs.kaist.ac.kr> <e1jqk1$378$1@sea.gmane.org> Message-ID: <e1jt29$b50$1@sea.gmane.org> Georg Brandl wrote: > Seo Sanghyeon wrote: >> Can someone have a look at #860326? I got bitten by it today, and I can >> see no reason not to apply suggested patch. > > I've reviewed it and checked it in. Hm. This broke a few doctests. I can fix them, but I wonder if doctest should accept a bare exception name if the exception is defined in the current module. Or should it ignore the module name altogether? (Background: In normal exception tracebacks, non-builtin exceptions are printed with their module name prepended: Traceback: [...] decimal.InvalidOperation: ... When formatted by traceback.format_exception_only, the module name was omitted, which the patch mentioned above corrected. Since doctest relies on that behavior, three stdlib doctests broke.) Georg From ncoghlan at gmail.com Thu Apr 13 00:04:07 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 13 Apr 2006 08:04:07 +1000 Subject: [Python-Dev] Proposal: expose PEP 302 facilities via 'imp' and 'pkgutil' In-Reply-To: <5.1.1.6.0.20060412125727.020d0078@mail.telecommunity.com> References: <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411160446.01f98278@mail.telecommunity.com> <5.1.1.6.0.20060411180823.01f8e0a0@mail.telecommunity.com> <5.1.1.6.0.20060412125727.020d0078@mail.telecommunity.com> Message-ID: <443D7957.8000604@gmail.com> Phillip J. 
Eby wrote: > At 08:39 PM 4/12/2006 +1000, Nick Coghlan wrote: >> In this particular case, it would be a Python module "imp.py" that >> included the line "from _imp import *" (where _imp is the current >> C-only module with a different name). > > I don't think that's going to work, since importing won't work until > _imp itself is imported... :) > > Details, details. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From skip at pobox.com Thu Apr 13 02:59:10 2006 From: skip at pobox.com (Skip Montanaro) Date: Thu, 13 Apr 2006 00:59:10 +0000 (UTC) Subject: [Python-Dev] building with C++ References: <200604121453.17443.anthony@interlink.com.au> <17469.5784.930427.878222@montanaro.dyndns.org> <443D2CF9.2020400@v.loewis.de> Message-ID: <loom.20060413T025710-272@post.gmane.org> > > Anthony> I've done a lot of the work to get Python to build with g++ - > > Is this on a branch or available as a patch somewhere? > > It's the trunk. Is there a primer that will get me to where Anthony is? I tried the obvious CC=g++ ./configure --with-cxx=g++ and the build fails trying to compile Objects/genobject.c. From the sounds of Anthony's email he was at the point where it built and was having test problems. Skip From anthony at interlink.com.au Thu Apr 13 03:09:25 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 13 Apr 2006 11:09:25 +1000 Subject: [Python-Dev] building with C++ In-Reply-To: <loom.20060413T025710-272@post.gmane.org> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> Message-ID: <200604131109.29576.anthony@interlink.com.au> On Thursday 13 April 2006 10:59, Skip Montanaro wrote: > > > Anthony> I've done a lot of the work to get Python to build > > > with g++ - > > > > > > Is this on a branch or available as a patch somewhere? > > > > It's the trunk. > > Is there a primer that will get me to where Anthony is? I tried > the obvious > > CC=g++ ./configure --with-cxx=g++ That's what I've been doing. > and the build fails trying to compile Objects/genobject.c. From > the sounds of Anthony's email he was at the point where it built > and was having test problems. The genobject.c error is new - I just fixed it, it was shallow. The code is _nearly_ building fine. there's an issue in _sre.c with some code that either returns a Py_UNICODE* or an SRE_CHAR* (unsigned char*) in a void*. The code probably needs a refactoring to deal with that. There's also Python/compile.c: In function ?int compiler_compare(compiler*, _expr*)?: Python/compile.c:3065: error: invalid cast from type ?void*? to type ?cmpop_ty? Python/compile.c:3075: error: invalid cast from type ?void*? to type ?cmpop_ty? which I haven't looked at yet. Anyone else is welcome to fix these. 
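(A hint about the flavour of those cast errors, for anyone who wants them: C converts void* to any object pointer implicitly, C++ refuses to, e.g.

    #include <stdlib.h>

    void example(void)
    {
        void *p = malloc(sizeof(int));
        int *ip = p;            /* valid C, an error under g++ */
        int *jp = (int *)p;     /* accepted by both */
        free(p);
    }

The compile.c case is messier because the target type is an enum rather than a pointer, so it may need more than just an explicit cast.)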
To get past those two, I've been building just those two files with gcc, with "make CC=gcc Python/compile.o Modules/_sre.o" From tim.peters at gmail.com Thu Apr 13 03:14:56 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 21:14:56 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <20060412211409.000EF1E4003@bag.python.org> References: <20060412211409.000EF1E4003@bag.python.org> Message-ID: <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> [georg.brandl] > Author: georg.brandl > Date: Wed Apr 12 23:14:09 2006 > New Revision: 45321 > > Modified: > python/trunk/Lib/test/test_traceback.py > python/trunk/Lib/traceback.py > python/trunk/Misc/NEWS > Log: > Patch #860326: traceback.format_exception_only() now prepends the > exception's module name to non-builtin exceptions, like the interpreter > itself does. And all the trunk buildbot runs have failed since, in at least test_decimal, test_doctest and test_unpack. Please run tests before checking in. The 2.4 backport of this patch should be reverted, since it changes visible behavior (for example, all the 2.4 branch buildbot runs also fail now). Fine by me if we change the failing tests on the trunk to pass (noting that should have been done before checking in). From pje at telecommunity.com Thu Apr 13 03:26:54 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 12 Apr 2006 21:26:54 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.co m> References: <20060412211409.000EF1E4003@bag.python.org> <20060412211409.000EF1E4003@bag.python.org> Message-ID: <5.1.1.6.0.20060412212225.01fa5260@mail.telecommunity.com> At 09:14 PM 4/12/2006 -0400, Tim Peters wrote: >The 2.4 backport of this patch should be reverted, since it changes >visible behavior (for example, all the 2.4 branch buildbot runs also >fail now). > >Fine by me if we change the failing tests on the trunk to pass (noting >that should have been done before checking in). It's not fine by me if the reason for the failure is that doctests trapping specific exceptions no longer work with the patch. If that's the case, the patch should be reverted entirely, absent any compelling reasoning for breaking *everyone else's* doctests in 2.5. From tim.peters at gmail.com Thu Apr 13 03:26:37 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 21:26:37 -0400 Subject: [Python-Dev] Request for review In-Reply-To: <e1jt29$b50$1@sea.gmane.org> References: <20060412172314.GA17825@sparcs.kaist.ac.kr> <e1jqk1$378$1@sea.gmane.org> <e1jt29$b50$1@sea.gmane.org> Message-ID: <1f7befae0604121826g71f8d0dbhc22d8ed9cc8dfccf@mail.gmail.com> [Georg Brandl] > Hm. This broke a few doctests. I can fix them, but I wonder if > doctest should accept a bare exception name if the exception > is defined in the current module. No. > Or should it ignore the module name altogether? No. doctest strives to be magic-free WYSIWYG. If someone _intends_ the module name to be optional in a doctest, they should explicitly use doctest's ELLIPSIS option. > (Background: > > In normal exception tracebacks, non-builtin exceptions are printed with > their module name prepended: > > Traceback: > [...] > decimal.InvalidOperation: ... > > When formatted by traceback.format_exception_only, the module name was > omitted, which the patch mentioned above corrected. 
Since doctest relies > on that behavior, three stdlib doctests broke.) Changes to visible behavior should not be introduced in bugfix releases, unless that happens as an unavoidable consequence of repairing a critical bug, so this should be yanked from 2.4. I agree the traceback formatting inconsistency was a bug, but it was hardly critical (for example, nobody noticed it for 15 years <0.5 wink>). From anthony at interlink.com.au Thu Apr 13 03:36:07 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 13 Apr 2006 11:36:07 +1000 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> Message-ID: <200604131136.10270.anthony@interlink.com.au> On Thursday 13 April 2006 11:14, Tim Peters wrote: > > Patch #860326: traceback.format_exception_only() now prepends the > > exception's module name to non-builtin exceptions, like the > > interpreter itself does. > > And all the trunk buildbot runs have failed since, in at least > test_decimal, test_doctest and test_unpack. Please run tests > before checking in. > > The 2.4 backport of this patch should be reverted, since it changes > visible behavior (for example, all the 2.4 branch buildbot runs > also fail now). Correct. I have reverted this on the release24-maint branch. > Fine by me if we change the failing tests on the trunk to pass > (noting that should have been done before checking in). I'm reverting on the trunk, too. Per PJE's email as well, I think this needs discussion before committing (and it needs the tests updated, first!) Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From tim.peters at gmail.com Thu Apr 13 04:02:10 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 22:02:10 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <5.1.1.6.0.20060412212225.01fa5260@mail.telecommunity.com> References: <20060412211409.000EF1E4003@bag.python.org> <5.1.1.6.0.20060412212225.01fa5260@mail.telecommunity.com> Message-ID: <1f7befae0604121902r1ab139b7uacc47da56364ed6f@mail.gmail.com> [Tim] >> ... >> Fine by me if we change the failing tests on the trunk to pass (noting >> that should have been done before checking in). [Phillip] > It's not fine by me if the reason for the failure is that doctests trapping > specific exceptions no longer work with the patch.> > If that's the case, the patch should be reverted entirely, absent any > compelling reasoning for breaking *everyone else's* doctests in 2.5. I agree with the patch author's position that "it's a bug" that the traceback module's functions' output doesn't match the interpreter's traceback output. That certainly wasn't intended; it's most likely left over from the days when only strings could be exceptions, and nobody noticed that the old traceback-module code "didn't work exactly right" anymore when the notion of exceptions was generalized. WRT doctests, it's a mixed bag. Yes, some tests will fail. doctests are always vulnerable to changes "like this". OTOH, the change also simplifies writing new doctests. I remember being baffled at doing things like: >>> import decimal >>> 1 / decimal.Decimal(0) Traceback (most recent call last): File "<stdin>", line 1, in ? 
File "C:\Python24\lib\decimal.py", line 1314, in __rdiv__ return other.__div__(self, context=context) File "C:\Python24\lib\decimal.py", line 1140, in __div__ return self._divide(other, context=context) File "C:\Python24\lib\decimal.py", line 1222, in _divide return context._raise_error(DivisionByZero, 'x / 0', sign) File "C:\Python24\lib\decimal.py", line 2267, in _raise_error raise error, explanation decimal.DivisionByZero: x / 0 in an interactive shell, pasting it into a doctest, and then seeing the doctest fail because the exception name "somehow" magically changed to just "DivisionByZero". I got used to that before I found time to understand it, though. Future generations wouldn't suffer this cognitive dissonance. A less-desirable (IMO) alternative is to change the Python core to display bare (non-dotted) exception names when it produces a traceback. The fundamental bug here is that the core and the traceback module produce different tracebacks given the same exception info, and changing either could repair that. From jeremy at alum.mit.edu Thu Apr 13 04:03:20 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 12 Apr 2006 22:03:20 -0400 Subject: [Python-Dev] building with C++ In-Reply-To: <200604131109.29576.anthony@interlink.com.au> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> <200604131109.29576.anthony@interlink.com.au> Message-ID: <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> On 4/12/06, Anthony Baxter <anthony at interlink.com.au> wrote: > The code is _nearly_ building fine. there's an issue in _sre.c with > some code that either returns a Py_UNICODE* or an SRE_CHAR* (unsigned > char*) in a void*. The code probably needs a refactoring to deal with > that. There's also > Python/compile.c: In function 'int compiler_compare(compiler*, > _expr*)': > Python/compile.c:3065: error: invalid cast from type 'void*' to type > 'cmpop_ty' > Python/compile.c:3075: error: invalid cast from type 'void*' to type > 'cmpop_ty' > which I haven't looked at yet. Anyone else is welcome to fix these. The code in compile.c is pretty dodgy. I'd like to think of a better way to represent an array of cmpop_ty objects than casting ints to void* and then back. Jeremy > To get past those two, I've been building just those two files with > gcc, with "make CC=gcc Python/compile.o Modules/_sre.o" > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu > From anthony at interlink.com.au Thu Apr 13 04:07:10 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 13 Apr 2006 12:07:10 +1000 Subject: [Python-Dev] building with C++ In-Reply-To: <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> References: <200604121453.17443.anthony@interlink.com.au> <200604131109.29576.anthony@interlink.com.au> <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> Message-ID: <200604131207.11566.anthony@interlink.com.au> On Thursday 13 April 2006 12:03, Jeremy Hylton wrote: > The code in compile.c is pretty dodgy. I'd like to think of a > better way to represent an array of cmpop_ty objects than casting > ints to void* and then back. Well, it's compiling and working with C++ now, at least. But yeah, that's pretty nasty. 
Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Thu Apr 13 04:10:03 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 13 Apr 2006 12:10:03 +1000 Subject: [Python-Dev] building with C++ In-Reply-To: <200604121453.17443.anthony@interlink.com.au> References: <200604121453.17443.anthony@interlink.com.au> Message-ID: <200604131210.04926.anthony@interlink.com.au> So I lied - I found the time today to spread the magic pixie dust of extern "C" {} around to get the Python core to build and link with g++. (It still builds with gcc). There's still an issue with Modules/_sre.c, and now that we can run setup.py, there's lots and lots of errors from the various code in Modules that's not C++ safe. Anthony From skip at pobox.com Thu Apr 13 04:17:12 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 12 Apr 2006 21:17:12 -0500 Subject: [Python-Dev] building with C++ In-Reply-To: <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> <200604131109.29576.anthony@interlink.com.au> <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> Message-ID: <17469.46248.822513.233118@montanaro.dyndns.org> Jeremy> The code in compile.c is pretty dodgy. I'd like to think of a Jeremy> better way to represent an array of cmpop_ty objects than Jeremy> casting ints to void* and then back. http://python.org/sf/1469594 might be a step in the right direction. I assigned it to Anthony. Feel free to assign it back to me. Skip From jeremy at alum.mit.edu Thu Apr 13 04:19:30 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Wed, 12 Apr 2006 22:19:30 -0400 Subject: [Python-Dev] building with C++ In-Reply-To: <17469.46248.822513.233118@montanaro.dyndns.org> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> <200604131109.29576.anthony@interlink.com.au> <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> <17469.46248.822513.233118@montanaro.dyndns.org> Message-ID: <e8bf7a530604121919l65a2c36av5f51c5c12c48566b@mail.gmail.com> Looks good to me. Why don't you check it in. Jeremy On 4/12/06, skip at pobox.com <skip at pobox.com> wrote: > > Jeremy> The code in compile.c is pretty dodgy. I'd like to think of a > Jeremy> better way to represent an array of cmpop_ty objects than > Jeremy> casting ints to void* and then back. > > http://python.org/sf/1469594 > > might be a step in the right direction. > > I assigned it to Anthony. Feel free to assign it back to me. > > Skip > From greg.ewing at canterbury.ac.nz Thu Apr 13 04:55:30 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 13 Apr 2006 14:55:30 +1200 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604121902r1ab139b7uacc47da56364ed6f@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <5.1.1.6.0.20060412212225.01fa5260@mail.telecommunity.com> <1f7befae0604121902r1ab139b7uacc47da56364ed6f@mail.gmail.com> Message-ID: <443DBDA2.6050707@canterbury.ac.nz> Tim Peters wrote: > A less-desirable (IMO) alternative is to change the Python core to > display bare (non-dotted) exception names when it produces a > traceback. 
Much less desirable, considering that there are modules which define an exception just called "error" on the assumption that you're going to see the module name as well. -- Greg Ewing, Computer Science Dept, +--------------------------------------+ University of Canterbury, | Carpe post meridiam! | Christchurch, New Zealand | (I'm not a morning person.) | greg.ewing at canterbury.ac.nz +--------------------------------------+ From tim.peters at gmail.com Thu Apr 13 05:23:51 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 12 Apr 2006 23:23:51 -0400 Subject: [Python-Dev] TODO Wiki (was: Preserving the blamelist) Message-ID: <1f7befae0604122023r5a34d59el11ac13973706bd1@mail.gmail.com> [A.M. Kuchling] >> Do we need a list of CPython cleanup projects? >> (Compare to the Linux kernel janitors: >> <http://janitor.kernelnewbies.org/TODO>.) [Neal Norwitz] > +1. Could add a page to the wiki and link to it from the dev faq. Not the same thing, but I just added: http://wiki.python.org/moin/SimpleTodo and linked to it from http://wiki.python.org/moin/CodingProjectIdeas and populated it with the Py_VISIT task. Feel free to add MessyTodo and FantasyTodo pages too ;-) From nnorwitz at gmail.com Thu Apr 13 05:47:46 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 20:47:46 -0700 Subject: [Python-Dev] TODO Wiki (was: Preserving the blamelist) In-Reply-To: <1f7befae0604122023r5a34d59el11ac13973706bd1@mail.gmail.com> References: <1f7befae0604122023r5a34d59el11ac13973706bd1@mail.gmail.com> Message-ID: <ee2a432c0604122047o403d1836rff1a7a62df87f0b0@mail.gmail.com> On 4/12/06, Tim Peters <tim.peters at gmail.com> wrote: > [A.M. Kuchling] > >> Do we need a list of CPython cleanup projects? > >> (Compare to the Linux kernel janitors: > >> <http://janitor.kernelnewbies.org/TODO>.) > > [Neal Norwitz] > > +1. Could add a page to the wiki and link to it from the dev faq. > > Not the same thing, but I just added: > > http://wiki.python.org/moin/SimpleTodo > > and linked to it from > > http://wiki.python.org/moin/CodingProjectIdeas > > and populated it with the Py_VISIT task. Feel free to add MessyTodo > and FantasyTodo pages too ;-) Cool, now can we also get a SummerOfCodeTodo? :-) Google should be doing another Summer Of Code soon. We should start thinking about any projects we would like to see done. Also, we will need to find mentors. I hope everyone who was a mentor last year will mentor again. I plan to mentor and I'll find a way to rope Guido in too. :-) Cheers, n From nnorwitz at gmail.com Thu Apr 13 07:13:51 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 22:13:51 -0700 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <442CF0BB.6040901@egenix.com> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> Message-ID: <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> On 3/31/06, M.-A. Lemburg <mal at egenix.com> wrote: > Martin v. L?wis wrote: > > Neal Norwitz wrote: > >> See http://python.org/sf/1454485 for the gory details. Basically if > >> you create a unicode array (array.array('u')) and try to append an > >> 8-bit string (ie, not unicode), you can crash the interpreter. > >> > >> The problem is that the string is converted without question to a > >> unicode buffer. Within unicode, it assumes the data to be valid, but > >> this isn't necessarily the case. We wind up accessing an array with a > >> negative index and boom. 
> > > > There are several problems combined here, which might need discussion: > > > > - why does the 'u#' converter use the buffer interface if available? > > it should just support Unicode objects. The buffer object makes > > no promise that the buffer actually is meaningful UCS-2/UCS-4, so > > u# shouldn't guess that it is. > > (FWIW, it currently truncates the buffer size to the next-smaller > > multiple of sizeof(Py_UNICODE), and silently so) > > > > I think that part should just go: u# should be restricted to unicode > > objects. > > 'u#' is intended to match 's#' which also uses the buffer > interface. It expects the buffer returned by the object > to a be a Py_UNICODE* buffer, hence the calculation of the > length. > > However, we already have 'es#' which is a lot safer to use > in this respect: you can explicity define the encoding you > want to see, e.g. 'unicode-internal' and the associated > codec also takes care of range checks, etc. > > So, I'm +1 on restricting 'u#' to Unicode objects. Note: 2.5 no longer crashes, 2.4 does. Does this mean you would like to see this patch checked in to 2.5? What should we do about 2.4? Index: Python/getargs.c =================================================================== --- Python/getargs.c (revision 45333) +++ Python/getargs.c (working copy) @@ -1042,11 +1042,8 @@ STORE_SIZE(PyUnicode_GET_SIZE(arg)); } else { - char *buf; - Py_ssize_t count = convertbuffer(arg, p, &buf); - if (count < 0) - return converterr(buf, arg, msgbuf, bufsize); - STORE_SIZE(count/(sizeof(Py_UNICODE))); + return converterr("cannot convert raw buffers"", + arg, msgbuf, bufsize); } format++; } else { > > - should Python guarantee that all characters in a Unicode object > > are between 0 and sys.maxunicode? Currently, it is possible to > > create Unicode strings with either negative or very large Py_UNICODE > > elements. > > > > - if the answer to the last question is no (i.e. if it is intentional > > that a unicode object can contain arbitrary Py_UNICODE values): should > > Python then guarantee that Py_UNICODE is an unsigned type? > > Py_UNICODE must always be unsigned. The whole implementation > relies on this and has been designed with this in mind (see > PEP 100). AFAICT, the configure does check that Py_UNICODE > is always unsigned. Martin fixed the crashing problem in 2.5 by making wchar_t unsigned which was a bug. (A configure test was reversed IIRC.) Can this change to wchar_t be made in 2.4? That technically changes all the interfaces even though it was a mistake. What should be done for 2.4? n From nnorwitz at gmail.com Thu Apr 13 07:44:26 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 12 Apr 2006 22:44:26 -0700 Subject: [Python-Dev] int vs ssize_t in unicode Message-ID: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> Martin, In Include/ucnhash.h I notice some integers and wonder if those should be Py_ssize_t. It looks like they are just names so they should be pretty short. 
But in Objects/unicodeobject.c, I notice a bunch of ints and casts to int and wonder if they should be changed to Py_ssize_t/removed: 235: assert(length<INT_MAX); unicode->length = (int)length; 376, 404: int i; 1366: (seems like this could be a 64-bit value) int nneeded; (i stopped at this point, there are probably more) Modules/unicodedata.c (lots of ints, not sure if they are a problem) 494: isize = PyUnicode_GET_SIZE(input); n From martin at v.loewis.de Thu Apr 13 08:37:49 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 08:37:49 +0200 Subject: [Python-Dev] int vs ssize_t in unicode In-Reply-To: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> References: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> Message-ID: <443DF1BD.30200@v.loewis.de> Neal Norwitz wrote: > In Include/ucnhash.h I notice some integers and wonder if those should > be Py_ssize_t. It looks like they are just names so they should be > pretty short. Right: int size is below 100; the name length of a Unicode character is below 40 (I believe). OTOH, changing them to Py_ssize_t wouldn't hurt, either. > 235: > assert(length<INT_MAX); > unicode->length = (int)length; Right: I just changed it. It may date back to a version of the patch where I only changed the signatures of the functions, but not the object layout. > 376, 404: > int i; Right, changed. > 1366: (seems like this could be a 64-bit value) > int nneeded; Right; also, the safe downcast can go away. > (i stopped at this point, there are probably more) I looked through the entire file, and fixed all I could find. > Modules/unicodedata.c (lots of ints, not sure if they are a problem) Most of them are not, since most functions deal only with Unicode characters, character names, and character numbers (which are below 2**21). > 494: > isize = PyUnicode_GET_SIZE(input); This is a problem, though; I fixed it. Regards, Martin From nnorwitz at gmail.com Thu Apr 13 09:00:43 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 13 Apr 2006 00:00:43 -0700 Subject: [Python-Dev] int vs ssize_t in unicode In-Reply-To: <443DF1BD.30200@v.loewis.de> References: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> <443DF1BD.30200@v.loewis.de> Message-ID: <ee2a432c0604130000v405ffcc4rc7b50ade6015f379@mail.gmail.com> On 4/12/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > > 235: > > assert(length<INT_MAX); > > unicode->length = (int)length; > > Right: I just changed it. It may date back to a version of the patch > where I only changed the signatures of the functions, but not the > object layout. I just grepped for INT_MAX and there's a ton of them still (well 83 in */*.c). Some aren't an issue like posixmodule.c, those are _SC_INT_MAX. marshal is probably ok, but all uses should be verified. Really all uses of {INT,LONG}_{MIN,MAX} should be verified and converted to PY_SSIZE_T_{MIN,MAX} as appropriate. There are only a few {INT,LONG}_MIN and 22 LONG_MAX. I'll try to review these soon unless someone beats me to it. 
n From g.brandl at gmx.net Thu Apr 13 09:17:44 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 13 Apr 2006 09:17:44 +0200 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> Message-ID: <e1ktuo$o1$1@sea.gmane.org> Tim Peters wrote: > [georg.brandl] >> Author: georg.brandl >> Date: Wed Apr 12 23:14:09 2006 >> New Revision: 45321 >> >> Modified: >> python/trunk/Lib/test/test_traceback.py >> python/trunk/Lib/traceback.py >> python/trunk/Misc/NEWS >> Log: >> Patch #860326: traceback.format_exception_only() now prepends the >> exception's module name to non-builtin exceptions, like the interpreter >> itself does. > > And all the trunk buildbot runs have failed since, in at least > test_decimal, test_doctest and test_unpack. Please run tests before > checking in. Well, it's tempting to let the buildbots run the tests for you <wink> Honestly, I didn't realize that doctest relies on traceback. Running the test suite takes over half an hour on this box, so I decided to take a chance. > The 2.4 backport of this patch should be reverted, since it changes > visible behavior (for example, all the 2.4 branch buildbot runs also > fail now). Right. That's too much of a behavior change. > Fine by me if we change the failing tests on the trunk to pass (noting > that should have been done before checking in). I'm not the one to decide, but at some time the traceback module should be rewritten to match the interpreter behavior. Cheers, Georg From bbman at mail.ru Thu Apr 13 09:43:59 2006 From: bbman at mail.ru (Mikhail Glushenkov) Date: Thu, 13 Apr 2006 07:43:59 +0000 (UTC) Subject: [Python-Dev] Any reason that any()/all() do not take a predicate argument? Message-ID: <loom.20060413T093154-130@post.gmane.org> Hi, sorry if this came up before, but I tried searching the archives and found nothing. It would be really nice if new builtin truth functions in 2.5 took a predicate argument(defaults to bool), so one could write, for example: seq = [1,2,3,4,5] if any(seq, lambda x: x==5): ... which is clearly more readable than reduce(seq, lambda x,y: x or y==5, False) IIRC, something like that is called junctions in Perl 6. From martin at v.loewis.de Thu Apr 13 10:06:21 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 10:06:21 +0200 Subject: [Python-Dev] int vs ssize_t in unicode In-Reply-To: <ee2a432c0604130000v405ffcc4rc7b50ade6015f379@mail.gmail.com> References: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> <443DF1BD.30200@v.loewis.de> <ee2a432c0604130000v405ffcc4rc7b50ade6015f379@mail.gmail.com> Message-ID: <443E067D.5060901@v.loewis.de> Neal Norwitz wrote: > I just grepped for INT_MAX and there's a ton of them still (well 83 in > */*.c). Some aren't an issue like posixmodule.c, those are > _SC_INT_MAX. marshal is probably ok, but all uses should be verified. > Really all uses of {INT,LONG}_{MIN,MAX} should be verified and > converted to PY_SSIZE_T_{MIN,MAX} as appropriate. I replaced all the trivial ones; the remaining ones are (IMO) more involved, or correct. 
In particular: - collectionsmodule: deque is still restricted to 2GiB elements - cPickle: pickling does not support huge strings (and probably shouldn't); likewise marshal - _sre is still limited to INT_MAX completely - not sure why the mbcs codec is restricted to INT_MAX; somebody should check the Win64 API whether the restriction can be removed (most likely, it can) - PyObject_CallFunction must be duplicated for PY_SSIZE_T_CLEAN, then exceptions.c can be extended. Regards, Martin From g.brandl at gmx.net Thu Apr 13 10:10:42 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 13 Apr 2006 10:10:42 +0200 Subject: [Python-Dev] Any reason that any()/all() do not take a predicate argument? In-Reply-To: <loom.20060413T093154-130@post.gmane.org> References: <loom.20060413T093154-130@post.gmane.org> Message-ID: <e1l122$a2p$1@sea.gmane.org> Mikhail Glushenkov wrote: > Hi, > > sorry if this came up before, but I tried searching the archives and > found nothing. It would be really nice if new builtin truth functions > in 2.5 took a predicate argument(defaults to bool), so one could > write, for example: > > seq = [1,2,3,4,5] > if any(seq, lambda x: x==5): > ... Well, does if any(x == 5 for x in seq) meet your needs? Georg From ncoghlan at gmail.com Thu Apr 13 10:32:19 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 13 Apr 2006 18:32:19 +1000 Subject: [Python-Dev] Request for review In-Reply-To: <1f7befae0604121826g71f8d0dbhc22d8ed9cc8dfccf@mail.gmail.com> References: <20060412172314.GA17825@sparcs.kaist.ac.kr> <e1jqk1$378$1@sea.gmane.org> <e1jt29$b50$1@sea.gmane.org> <1f7befae0604121826g71f8d0dbhc22d8ed9cc8dfccf@mail.gmail.com> Message-ID: <443E0C93.4090700@gmail.com> Tim Peters wrote: > [Georg Brandl] >> Hm. This broke a few doctests. I can fix them, but I wonder if >> doctest should accept a bare exception name if the exception >> is defined in the current module. > > No. > >> Or should it ignore the module name altogether? > > No. doctest strives to be magic-free WYSIWYG. If someone _intends_ > the module name to be optional in a doctest, they should explicitly > use doctest's ELLIPSIS option. There's already a little bit of magic in the way doctest handles exceptions. I always use doctest.IGNORE_EXCEPTION_DETAIL so that everything after the first colon on the last line gets ignored when matching exceptions. That allows me to include the text of the exception in the documentation so its clear what the associated problem is, without worrying about spurious failures due to variations in the exact wording of the error message. Using an ellipsis instead would get essentially the same effect from a test point of view, but would result in less effective documentation. Given that the option exists, I assume I'm not the only who thinks that way. To get back to the original question of ignoring module names, I wonder if doctext should actually try to be a bit cleverer about the way it matches exceptions when the ignore exception detail flag is in effect. At the moment, documenting something as raising ZeroDivisionError in a doctest, but then actually raising a differently named subclass (such as decimal.DivisionByZero) in the code will result in a failed test. 
IMO, this is a spurious failure if I've told the doctest engine that I don't care about the details of exceptions raised - there's nothing wrong with the documentation, as the following relationship holds: Py> issubclass(decimal.DivisionByZero, ZeroDivisionError) True The fact that what the call raises is technically an instance of a subclass rather than of the listed class is really an implementation detail, just like the text in the associated error message. The important point is that an except clause covering the documented exception type will catch the raised exception. The doctest machinery has access to both the expected exception name, the actual exception raised, and the dictionary used to execute the test, so something like "issubclass(exc_type, eval(expected_exc_name, test_globs))" should do the trick (with a bit of work to extract the expected exception name, naturally). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From mal at egenix.com Thu Apr 13 12:20:49 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 13 Apr 2006 12:20:49 +0200 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> Message-ID: <443E2601.3060101@egenix.com> Neal Norwitz wrote: > On 3/31/06, M.-A. Lemburg <mal at egenix.com> wrote: >> Martin v. L?wis wrote: >>> Neal Norwitz wrote: >>>> See http://python.org/sf/1454485 for the gory details. Basically if >>>> you create a unicode array (array.array('u')) and try to append an >>>> 8-bit string (ie, not unicode), you can crash the interpreter. >>>> >>>> The problem is that the string is converted without question to a >>>> unicode buffer. Within unicode, it assumes the data to be valid, but >>>> this isn't necessarily the case. We wind up accessing an array with a >>>> negative index and boom. >>> There are several problems combined here, which might need discussion: >>> >>> - why does the 'u#' converter use the buffer interface if available? >>> it should just support Unicode objects. The buffer object makes >>> no promise that the buffer actually is meaningful UCS-2/UCS-4, so >>> u# shouldn't guess that it is. >>> (FWIW, it currently truncates the buffer size to the next-smaller >>> multiple of sizeof(Py_UNICODE), and silently so) >>> >>> I think that part should just go: u# should be restricted to unicode >>> objects. >> 'u#' is intended to match 's#' which also uses the buffer >> interface. It expects the buffer returned by the object >> to a be a Py_UNICODE* buffer, hence the calculation of the >> length. >> >> However, we already have 'es#' which is a lot safer to use >> in this respect: you can explicity define the encoding you >> want to see, e.g. 'unicode-internal' and the associated >> codec also takes care of range checks, etc. >> >> So, I'm +1 on restricting 'u#' to Unicode objects. > > Note: 2.5 no longer crashes, 2.4 does. > > Does this mean you would like to see this patch checked in to 2.5? Yes. > What should we do about 2.4? Perhaps you could add a warning that is displayed when using u# with non-Unicode objects ?! 
> Index: Python/getargs.c > =================================================================== > --- Python/getargs.c (revision 45333) > +++ Python/getargs.c (working copy) > @@ -1042,11 +1042,8 @@ > STORE_SIZE(PyUnicode_GET_SIZE(arg)); > } > else { > - char *buf; > - Py_ssize_t count = convertbuffer(arg, p, &buf); > - if (count < 0) > - return converterr(buf, arg, msgbuf, bufsize); > - STORE_SIZE(count/(sizeof(Py_UNICODE))); > + return converterr("cannot convert raw buffers"", > + arg, msgbuf, bufsize); > } > format++; > } else { > >>> - should Python guarantee that all characters in a Unicode object >>> are between 0 and sys.maxunicode? Currently, it is possible to >>> create Unicode strings with either negative or very large Py_UNICODE >>> elements. >>> >>> - if the answer to the last question is no (i.e. if it is intentional >>> that a unicode object can contain arbitrary Py_UNICODE values): should >>> Python then guarantee that Py_UNICODE is an unsigned type? >> Py_UNICODE must always be unsigned. The whole implementation >> relies on this and has been designed with this in mind (see >> PEP 100). AFAICT, the configure does check that Py_UNICODE >> is always unsigned. > > Martin fixed the crashing problem in 2.5 by making wchar_t unsigned > which was a bug. (A configure test was reversed IIRC.) Can this > change to wchar_t be made in 2.4? That technically changes all the > interfaces even though it was a mistake. What should be done for 2.4? If users want to interface from wchar_t to Python's Unicode type they have to go through the PyUnicode_FromWideChar() and PyUnicode_AsWideChar() interfaces. Assuming that Py_UNICODE is the same as wchar_t is simply wrong (and always was). I also think that changing the type from signed to unsigned by backporting the configure fix will only make things safer for the user, since extensions will probably not even be aware of the fact that Py_UNICODE could be signed (it has always been documented to be unsigned). So +1 on backporting the configure test fix to 2.4. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 13 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From skip at pobox.com Thu Apr 13 13:54:24 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 13 Apr 2006 06:54:24 -0500 Subject: [Python-Dev] building with C++ In-Reply-To: <e8bf7a530604121919l65a2c36av5f51c5c12c48566b@mail.gmail.com> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> <200604131109.29576.anthony@interlink.com.au> <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> <17469.46248.822513.233118@montanaro.dyndns.org> <e8bf7a530604121919l65a2c36av5f51c5c12c48566b@mail.gmail.com> Message-ID: <17470.15344.239283.888381@montanaro.dyndns.org> >>>>> "Jeremy" == Jeremy Hylton <jeremy at alum.mit.edu> writes: Jeremy> Looks good to me. Why don't you check it in. I did, but it broke the C build, so I reverted it and reopened the patch. I'll try to take a look at it more later today. 
Skip From martin at v.loewis.de Thu Apr 13 14:34:28 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 14:34:28 +0200 Subject: [Python-Dev] building with C++ In-Reply-To: <17470.15344.239283.888381@montanaro.dyndns.org> References: <200604121453.17443.anthony@interlink.com.au> <443D2CF9.2020400@v.loewis.de> <loom.20060413T025710-272@post.gmane.org> <200604131109.29576.anthony@interlink.com.au> <e8bf7a530604121903y4a1f7f61xb1333d127b8f2d1d@mail.gmail.com> <17469.46248.822513.233118@montanaro.dyndns.org> <e8bf7a530604121919l65a2c36av5f51c5c12c48566b@mail.gmail.com> <17470.15344.239283.888381@montanaro.dyndns.org> Message-ID: <443E4554.4050506@v.loewis.de> skip at pobox.com wrote: > Jeremy> Looks good to me. Why don't you check it in. > > I did, but it broke the C build, so I reverted it and reopened the patch. > I'll try to take a look at it more later today. I took a different approach, introducing a second kind of sequence. Since most accesses go through macros, they are already typed correctly as long as the field have the same names (templates in C :-). Regards, Martin From amk at amk.ca Thu Apr 13 16:47:26 2006 From: amk at amk.ca (A.M. Kuchling) Date: Thu, 13 Apr 2006 10:47:26 -0400 Subject: [Python-Dev] TODO Wiki (was: Preserving the blamelist) In-Reply-To: <1f7befae0604122023r5a34d59el11ac13973706bd1@mail.gmail.com> References: <1f7befae0604122023r5a34d59el11ac13973706bd1@mail.gmail.com> Message-ID: <20060413144726.GA23140@rogue.amk.ca> On Wed, Apr 12, 2006 at 11:23:51PM -0400, Tim Peters wrote: > Not the same thing, but I just added: > http://wiki.python.org/moin/SimpleTodo I added a list of the Demo directories so that people know which ones need updating. --amk From ark at acm.org Thu Apr 13 16:39:23 2006 From: ark at acm.org (Andrew Koenig) Date: Thu, 13 Apr 2006 10:39:23 -0400 Subject: [Python-Dev] Any reason that any()/all() do not take a predicateargument? In-Reply-To: <loom.20060413T093154-130@post.gmane.org> Message-ID: <000401c65f08$11fd0710$6402a8c0@arkdesktop> > sorry if this came up before, but I tried searching the archives and > found nothing. It would be really nice if new builtin truth functions > in 2.5 took a predicate argument(defaults to bool), so one could > write, for example: > > seq = [1,2,3,4,5] > if any(seq, lambda x: x==5): > ... > > which is clearly more readable than > > reduce(seq, lambda x,y: x or y==5, False) How about this? if any(x==5 for x in seq): From tim.peters at gmail.com Thu Apr 13 17:08:03 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Apr 2006 11:08:03 -0400 Subject: [Python-Dev] Exceptions doctest Re: Request for review Message-ID: <1f7befae0604130808q309004f6w34be2356d4a36a43@mail.gmail.com> [Tim] >> ... >> doctest strives to be magic-free WYSIWYG. If someone _intends_ >> the module name to be optional in a doctest, they should explicitly >> use doctest's ELLIPSIS option. [Nick Coghlan] > There's already a little bit of magic in the way doctest handles exceptions. I > always use doctest.IGNORE_EXCEPTION_DETAIL so that everything after the first > colon on the last line gets ignored when matching exceptions. Explicitly specifiying IGNORE_EXCEPTION_DETAIL isn't "magic" -- it's explicitly asking for something. > That allows me to include the text of the exception in the documentation so > its clear what the associated problem is, without worrying about spurious > failures due to variations in the exact wording of the error message. 
Which is the documented purpose of IGNORE_EXCEPTION_DETAIL, for tests that want it. I rarely use it myself, but not never ;-) > Using an ellipsis instead would get essentially the same effect from a test > point of view, but would result in less effective documentation. Given that > the option exists, I assume I'm not the only who thinks that way. The reason ELLIPSIS isn't always suitable for this purpose is explained in the IGNORE_EXCEPTION_DETAIL docs: Note that a similar effect can be obtained using ELLIPSIS, and IGNORE_EXCEPTION_DETAIL may go away when Python releases prior to 2.4 become uninteresting. Until then, IGNORE_EXCEPTION_DETAIL is the only clear way to write a doctest that doesn't care about the exception detail yet continues to pass under Python releases prior to 2.4 (doctest directives appear to be comments to them). For example, [& see the docs for the example] > To get back to the original question of ignoring module names, I wonder if > doctext should actually try to be a bit cleverer about the way it matches > exceptions when the ignore exception detail flag is in effect. No. > At the moment, documenting something as raising ZeroDivisionError in a > doctest, but then actually raising a differently named subclass (such as > decimal.DivisionByZero) in the code will result in a failed test. Perfect: the doctest is lying in such a case, and should fail. WYSIWYG rules for doctest. > IMO, this is a spurious failure if I've told the doctest engine that I don't > care about the details of exceptions raised This is a doc problem that's my fault: you're reading "exception detail" as a vague catch-all description, which is reasonable enough. I intended to use it as a technical term, referring specifically to str() of the exception object (the thing bound to an `except` statement's second parameter). For example, as in (among other places) Lib/ctypes/test/test_bitfields.py's except Exception, detail: Maybe I'm hallucinating, but I believe that was standard usage of the term way back when. If so, terminology probably changed when exceptions no longer needed to be strings, and I didn't notice. Regardless, the doctest docs are almost <0.5 wink> clear enough about what it intends here. > - there's nothing wrong with the documentation, as the following relationship holds: > > Py> issubclass(decimal.DivisionByZero, ZeroDivisionError) > True > > The fact that what the call raises is technically an instance of a subclass > rather than of the listed class is really an implementation detail, just like > the text in the associated error message. The important point is that an > except clause covering the documented exception type will catch the raised > exception. > > The doctest machinery has access to both the expected exception name, the > actual exception raised, and the dictionary used to execute the test, so > something like "issubclass(exc_type, eval(expected_exc_name, test_globs))" > should do the trick (with a bit of work to extract the expected exception > name, naturally). Sorry, but it's horridly un-doctest-like to make inferences by magic. If you want to add an explicit ACCEPT_EXCEPTION_SUBCLASS doctest option, that's a different story. I would question the need, since I've never got close to needing it, and never heard anyone else bring it up. Is this something that happens to you often enough to be an irritation, or is it just that your brain tells you it's a cool idea? One reason I have to ask is that I'm pretty sure the example you gave is one that never bit you in real life. 
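To make the distinction above concrete, here is a minimal self-contained sketch of how doctest behaved at the time of this discussion. The parse_age function is invented purely for illustration, only options doctest already provides are used (IGNORE_EXCEPTION_DETAIL via a directive comment), and the ACCEPT_EXCEPTION_SUBCLASS option mentioned as a possibility does not exist:

    import doctest

    def parse_age(text):
        """Convert a string to an int.

        IGNORE_EXCEPTION_DETAIL ignores only the text after the first colon,
        so changes in the error message's wording do not break the test:

        >>> parse_age("forty-two")    # doctest: +IGNORE_EXCEPTION_DETAIL
        Traceback (most recent call last):
            ...
        ValueError: any detail text is accepted here

        The (possibly module-qualified) name before the colon is still
        compared verbatim, which is why a doctest naming ZeroDivisionError
        would still fail if the code actually raised a subclass that doctest
        formats under a different name.
        """
        return int(text)

    if __name__ == "__main__":
        doctest.testmod()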
From p.f.moore at gmail.com Thu Apr 13 17:14:54 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 13 Apr 2006 16:14:54 +0100 Subject: [Python-Dev] Is test_sundry really expected to succeed on Windows? Message-ID: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> I've just managed to get Python built using the free MS compiler and tools (yay! full instructions to follow somewhere - probably the wiki and maybe as a patch to PCBuild\readme.txt) There's one thing that puzzled me - test_sundry is marked as an "unexpected skip". As it imports tty, which imports termios, which doesn't exist on Windows, I'd imagine that this is an expected skip. Is this just something people are used to ignoring, or have I got a build issue I missed? Thanks, Paul. From tim.peters at gmail.com Thu Apr 13 17:34:15 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Apr 2006 11:34:15 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <e1ktuo$o1$1@sea.gmane.org> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> Message-ID: <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> [Georg Brandl] > Well, it's tempting to let the buildbots run the tests for you <wink> > Honestly, I didn't realize that doctest relies on traceback. Running > the test suite takes over half an hour on this box, so I decided to > take a chance. Nobody ever expects that a checkin will break tests, so merely expecting that a checkin won't break tests is not sufficient reason to skip running tests. You made a checkin that broke every buildbot slave we have, and I suggest you're taking a wrong lesson from that ;-) Do release-build tests without -uall take over half an hour on your box? Running that much is "good enough" precaution. Even (on boxes with makefiles) running "make quicktest" is mounds better than doing no testing. All the cases of massive buildbot test breakage we've seen this week would have been caught by doing just that much before checkin. When previously-passing buildbots start failing, it at least stops me cold, and (I hope) stops others too. Sometimes it's unavoidable. For example, I spent almost all my Python time Monday repairing a variety of new test failures unique to the OpenBSD buildbot (that platform is apparently unique in assigning addresses with "the sign bit" set, which broke all sorts of tests after someone changed id() to always return a positive value). That's fine -- it happens. It's the seemingly routine practice this week of checking in changes that break the tests everywhere that destroys productivity without good cause. Relatedly, if someone makes a checkin and sees that it breaks lots of buildbot runs, they should revert the patch themself instead of waiting for someone else to get so frustrated that they do it. Reverting is very easy with svn, but people are reluctant to revert someone else's checkin. The buildbot system is useless so long as the tests fail, and having the tests pass isn't optional. > ... > I'm not the one to decide, but at some time the traceback module should be > rewritten to match the interpreter behavior. No argument from me about that. From tim.peters at gmail.com Thu Apr 13 17:44:07 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Apr 2006 11:44:07 -0400 Subject: [Python-Dev] Is test_sundry really expected to succeed on Windows? 
In-Reply-To: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> References: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> Message-ID: <1f7befae0604130844s6a9bc9f6q9c207d20baf1b1a5@mail.gmail.com> [Paul Moore] > I've just managed to get Python built using the free MS compiler and > tools (yay! full instructions to follow somewhere - probably the wiki > and maybe as a patch to PCBuild\readme.txt) > > There's one thing that puzzled me - test_sundry is marked as an > "unexpected skip". As it imports tty, which imports termios, which > doesn't exist on Windows, I'd imagine that this is an expected skip. > > Is this just something people are used to ignoring, or have I got a > build issue I missed? You didn't say from where, or when, you got the Python source code. Someone recently added a bare "import tty" to test_sundry on the trunk, without realizing that would cause test_sundry to get skipped on Windows. I repaired that on the trunk about 12 hours ago, in revision 45332. From p.f.moore at gmail.com Thu Apr 13 17:56:24 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 13 Apr 2006 16:56:24 +0100 Subject: [Python-Dev] Is test_sundry really expected to succeed on Windows? In-Reply-To: <1f7befae0604130844s6a9bc9f6q9c207d20baf1b1a5@mail.gmail.com> References: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> <1f7befae0604130844s6a9bc9f6q9c207d20baf1b1a5@mail.gmail.com> Message-ID: <79990c6b0604130856v3d235c78s83a5272235a3c3ee@mail.gmail.com> On 4/13/06, Tim Peters <tim.peters at gmail.com> wrote: > [Paul Moore] > You didn't say from where, or when, you got the Python source code. > Someone recently added a bare "import tty" to test_sundry on the > trunk, without realizing that would cause test_sundry to get skipped > on Windows. I repaired that on the trunk about 12 hours ago, in > revision 45332. Sorry, it's a trunk checkout, but I'm currently working on a machine without svn access, so my checkout is from last night (before your checkin). So that's the issue. Apologies - I should have checked the svn logs first. Paul. From steven.bethard at gmail.com Thu Apr 13 17:58:14 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 09:58:14 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement Message-ID: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> I know 2.5's not out yet, but since I now have a PEP number, I'm going to go ahead and post this for discussion. Currently, the target version is Python 2.6. You can also see the PEP at: http://www.python.org/dev/peps/pep-0359/ Thanks in advance for the feedback! PEP: 359 Title: The "make" Statement Version: $Revision: 45366 $ Last-Modified: $Date: 2006-04-13 07:36:24 -0600 (Thu, 13 Apr 2006) $ Author: Steven Bethard <steven.bethard at gmail.com> Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 05-Apr-2006 Python-Version: 2.6 Post-History: 05-Apr-2006, 06-Apr-2006 Abstract ======== This PEP proposes a generalization of the class-declaration syntax, the ``make`` statement. The proposed syntax and semantics parallel the syntax for class definition, and so:: make <callable> <name> <tuple>: <block> is translated into the assignment:: <name> = <callable>("<name>", <tuple>, <namespace>) where ``<namespace>`` is the dict created by executing ``<block>``. The PEP is based on a suggestion [1]_ from Michele Simionato on the python-dev list. 
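As an aside that is not part of the PEP text: the closest existing approximation of this "name, tuple, namespace" calling convention is the class statement's ``__metaclass__`` hook, which likewise passes a callable the name, a tuple of bases, and the dict built by executing the block. A rough sketch, with an invented ``namespace`` callable::

    def namespace(name, bases, ns):
        # Receives the class statement's name, bases and body dict; strip the
        # bookkeeping keys the class machinery adds, then wrap what is left.
        ns = dict(ns)
        ns.pop('__module__', None)
        ns.pop('__metaclass__', None)
        return type(name, (object,), ns)

    class ns:                       # roughly what "make namespace ns:" would spell
        __metaclass__ = namespace
        badger = 42
        def spam():
            return 'spam'

    assert ns.badger == 42          # the block's bindings became ns's attributes

This is essentially the class-statement reuse that the PEP proposes to replace with a dedicated statement.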
Motivation ========== Class statements provide two nice facilities to Python: (1) They are the standard Python means of creating a namespace. All statements within a class body are executed, and the resulting local name bindings are passed as a dict to the metaclass. (2) They encourage DRY (don't repeat yourself) by allowing the class being created to know the name it is being assigned. Thus in a simple class statement like:: class C(object): x = 1 def foo(self): return 'bar' the metaclass (``type``) gets called with something like:: C = type('C', (object,), {'x':1, 'foo':<function foo at ...>}) The class statement is just syntactic sugar for the above assignment statement, but clearly a very useful sort of syntactic sugar. It avoids not only the repetition of ``C``, but also simplifies the creation of the dict by allowing it to be expressed as a series of statements. Historically, type instances (a.k.a. class objects) have been the only objects blessed with this sort of syntactic support. But other sorts of objects could benefit from such support. For example, property objects take three function arguments, but because the property type cannot be passed a namespace, these functions, though relevant only to the property, must be declared before it and then passed as arguments to the property call, e.g.:: class C(object): ... def get_x(self): ... def set_x(self): ... x = property(get_x, set_x, ...) There have been a few recipes [2]_ trying to work around this behavior, but with the new make statement (and an appropriate definition of property), the getter and setter functions can be defined in the property's namespace like:: class C(object): ... make property x: def get(self): ... def set(self): ... The definition of such a property callable could be as simple as:: def property(name, args, namespace): fget = namespace.get('get') fset = namespace.get('set') fdel = namespace.get('delete') doc = namespace.get('__doc__') return __builtin__.property(fget, fset, fdel, doc) Of course, properties are only one of the many possible uses of the make statement. The make statement is useful in essentially any situation where a name is associated with a namespace. So, for example, namespaces could be created as simply as:: make namespace ns: """This creates a namespace named ns with a badger attribute and a spam function""" badger = 42 def spam(): ... And if Python acquires interfaces, given an appropriately defined ``interface`` callable, the make statement can support interface creation through the syntax:: make interface C(...): ... This would mean that interface systems like that of Zope would no longer have to abuse the class syntax to create proper interface instances. Specification ============= Python will translate a make statement:: make <callable> <name> <tuple>: <block> into the assignment:: <name> = <callable>("<name>", <tuple>, <namespace>) where ``<namespace>`` is the dict created by executing ``<block>``. The ``<tuple>`` expression is optional; if not present, an empty tuple will be assumed. A patch is available implementing these semantics [3]_. The make statement introduces a new keyword, ``make``. Thus in Python 2.6, the make statement will have to be enabled using ``from __future__ import make_statement``. Open Issues =========== Does the ``make`` keyword break too much code? Originally, the make statement used the keyword ``create`` (a suggestion due to Nick Coghlan). 
However, investigations into the standard library [4]_ and Zope+Plone code [5]_ revealed that ``create`` would break a lot more code, so ``make`` was adopted as the keyword instead. However, there are still a few instances where ``make`` would break code. Is there a better keyword for the statement? ********** Currently, there are not many functions which have the signature ``(name, args, kwargs)``. That means that something like:: make dict params: x = 1 y = 2 is currently impossible because the dict constructor has a different signature. Does this sort of thing need to be supported? One suggestion, by Carl Banks, would be to add a ``__make__`` magic method that would be called before ``__call__``. For types, the ``__make__`` method would be identical to ``__call__`` (and thus unnecessary), but dicts could support the make statement by defining a ``__make__`` method on the dict type that looks something like:: def __make__(cls, name, args, kwargs): return cls(**kwargs) Of course, rather than adding another magic method, the dict type could just grow a classmethod something like ``dict.fromblock`` that could be used like:: make dict.fromblock params: x = 1 y = 2 Optional Extensions =================== Remove the make keyword ------------------------- It might be possible to remove the make keyword so that such statements would begin with the callable being called, e.g.:: namespace ns: badger = 42 def spam(): ... interface C(...): ... However, almost all other Python statements begin with a keyword, and removing the keyword would make it harder to look up this construct in the documentation. Additionally, this would add some complexity in the grammar and so far I (Steven Bethard) have not been able to implement the feature without the keyword. Removing __metaclass__ in Python 3000 ------------------------------------- As a side-effect of its generality, the make statement mostly eliminates the need for the ``__metaclass__`` attribute in class objects. Thus in Python 3000, instead of:: class <name> <bases-tuple>: __metaclass__ = <metaclass> <block> metaclasses could be supported by using the metaclass as the callable in a make statement:: make <metaclass> <name> <bases-tuple>: <block> Removing the ``__metaclass__`` hook would simplify the BUILD_CLASS opcode a bit. Removing class statements in Python 3000 ---------------------------------------- In the most extreme application of make statements, the class statement itself could be deprecated in favor of ``make type`` statements. References ========== .. [1] Michele Simionato's original suggestion (http://mail.python.org/pipermail/python-dev/2005-October/057435.html) .. [2] Namespace-based property recipe (http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418) .. [3] Make Statement patch (http://ucsu.colorado.edu/~bethard/py/make_statement.patch) .. [4] Instances of create in the stdlib (http://mail.python.org/pipermail/python-list/2006-April/335159.html) .. [5] Instances of create in Zope+Plone (http://mail.python.org/pipermail/python-list/2006-April/335284.html) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From martin at v.loewis.de Thu Apr 13 18:28:31 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 18:28:31 +0200 Subject: [Python-Dev] Is test_sundry really expected to succeed on Windows? 
In-Reply-To: <79990c6b0604130856v3d235c78s83a5272235a3c3ee@mail.gmail.com> References: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> <1f7befae0604130844s6a9bc9f6q9c207d20baf1b1a5@mail.gmail.com> <79990c6b0604130856v3d235c78s83a5272235a3c3ee@mail.gmail.com> Message-ID: <443E7C2F.6060101@v.loewis.de> Paul Moore wrote: > Sorry, it's a trunk checkout, but I'm currently working on a machine > without svn access, so my checkout is from last night (before your > checkin). So that's the issue. > > Apologies - I should have checked the svn logs first. I find this kind of mistake (working on old sources) quite natural, so no need to apologize (of course, apologizing can't hurt except yourself...) Regards, Martin From g.brandl at gmx.net Thu Apr 13 18:28:20 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 13 Apr 2006 18:28:20 +0200 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> Message-ID: <e1lu75$fu8$1@sea.gmane.org> Tim Peters wrote: > [Georg Brandl] >> Well, it's tempting to let the buildbots run the tests for you <wink> >> Honestly, I didn't realize that doctest relies on traceback. Running >> the test suite takes over half an hour on this box, so I decided to >> take a chance. > > Nobody ever expects that a checkin will break tests, so merely > expecting that a checkin won't break tests is not sufficient reason to > skip running tests. You made a checkin that broke every buildbot > slave we have, and I suggest you're taking a wrong lesson from that > ;-) No objection, mylord ;) > Do release-build tests without -uall take over half an hour on your > box? Running that much is "good enough" precaution. Even (on boxes > with makefiles) running "make quicktest" is mounds better than doing > no testing. All the cases of massive buildbot test breakage we've > seen this week would have been caught by doing just that much before > checkin. Until now, I've always run with -uall. Running "make quicktest" is fine, but if the buildbots starts failing with -uall afterwards, no one will accept that as an excuse ;) > When previously-passing buildbots start failing, it at least stops me > cold, and (I hope) stops others too. Sometimes it's unavoidable. For > example, I spent almost all my Python time Monday repairing a variety > of new test failures unique to the OpenBSD buildbot (that platform is > apparently unique in assigning addresses with "the sign bit" set, > which broke all sorts of tests after someone changed id() to always > return a positive value). That's fine -- it happens. It's the > seemingly routine practice this week of checking in changes that break > the tests everywhere that destroys productivity without good cause. I see, and I'm sorry I was part of it. > Relatedly, if someone makes a checkin and sees that it breaks lots of > buildbot runs, they should revert the patch themself instead of > waiting for someone else to get so frustrated that they do it. > Reverting is very easy with svn, but people are reluctant to revert > someone else's checkin. The buildbot system is useless so long as the > tests fail, and having the tests pass isn't optional. 
For excuse, I posted here immediately after I saw that the tests failed, asking whether to change doctest or revert my change. >> I'm not the one to decide, but at some time the traceback module should be >> rewritten to match the interpreter behavior. > > No argument from me about that. Fine. There's another one, python.org/sf/1326077, which looks suspicious to me too. Georg From martin at v.loewis.de Thu Apr 13 18:31:52 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 18:31:52 +0200 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> Message-ID: <443E7CF8.3020102@v.loewis.de> Tim Peters wrote: >> I'm not the one to decide, but at some time the traceback module should be >> rewritten to match the interpreter behavior. > > No argument from me about that. I also think the traceback module should be corrected, and the test cases updated, despite the objections that it may break other people's doctest test cases. Regards, Martin From oliphant.travis at ieee.org Thu Apr 13 18:31:48 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Thu, 13 Apr 2006 10:31:48 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <e1ludl$go1$1@sea.gmane.org> Steven Bethard wrote: > I know 2.5's not out yet, but since I now have a PEP number, I'm going > to go ahead and post this for discussion. Currently, the target > version is Python 2.6. You can also see the PEP at: > http://www.python.org/dev/peps/pep-0359/ > > Thanks in advance for the feedback! I generally like the idea. A different name would be better. Here's a list of approximate synonyms that might work (ordered by my preference...) induce compose realize furnish produce And others I found in no particular order... invent originate organize build author generate construct erect concoct coin establish instigate trigger offer From tim.peters at gmail.com Thu Apr 13 18:38:52 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Apr 2006 12:38:52 -0400 Subject: [Python-Dev] Is test_sundry really expected to succeed on Windows? In-Reply-To: <443E7C2F.6060101@v.loewis.de> References: <79990c6b0604130814h3cad8a5by99b509c5b1d2e4fc@mail.gmail.com> <1f7befae0604130844s6a9bc9f6q9c207d20baf1b1a5@mail.gmail.com> <79990c6b0604130856v3d235c78s83a5272235a3c3ee@mail.gmail.com> <443E7C2F.6060101@v.loewis.de> Message-ID: <1f7befae0604130938jd9c6d0dgfe10b89a5a11bbbe@mail.gmail.com> [Paul Moore] >> Sorry, it's a trunk checkout, but I'm currently working on a machine >> without svn access, so my checkout is from last night (before your >> checkin). So that's the issue. >> >> Apologies - I should have checked the svn logs first. [Martin v. L?wis] > I find this kind of mistake (working on old sources) quite natural, > so no need to apologize (of course, apologizing can't hurt except > yourself...) 
Yup, I sure didn't take offense -- and I'd think there's something wrong with Paul if he took time to scour SVN logs for obscure Windows bugs ;-) From martin at v.loewis.de Thu Apr 13 18:40:34 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 13 Apr 2006 18:40:34 +0200 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <443E7F02.6090407@v.loewis.de> Steven Bethard wrote: > I know 2.5's not out yet, but since I now have a PEP number, I'm going > to go ahead and post this for discussion. Currently, the target > version is Python 2.6. You can also see the PEP at: > http://www.python.org/dev/peps/pep-0359/ > > Thanks in advance for the feedback! Interesting! Sometimes, people want the ability to refer to the namespace already in the block, or have the namespace "control" execution of the block. For example, sometimes people what to get the class object while the class is being defined. It's not there yet, of course. In other cases (also properties) they want the name to be bound not the one of the get method, but instead the name of the property (of course, the make statement solves this problem already in a different way). Would it be possible/useful to have a pre-block hook to the callable, which would provide the dictionary; this dictionary might not be a proper dictionary (but only some mapping), or it might be pre-initialized. Regards, Martin From s.percivall at chello.se Thu Apr 13 18:43:09 2006 From: s.percivall at chello.se (Simon Percivall) Date: Thu, 13 Apr 2006 18:43:09 +0200 Subject: [Python-Dev] Checkin 45232: Patch #1429775 Message-ID: <9CF73E91-C4D3-4B3C-A814-DA3E6308C116@chello.se> Building SVN trunk with --enable-shared has been broken on Mac OS X Intel since rev. 45232 a couple of days ago. I can't say if this is the case anywhere else as well. What happens is simply that ld can't find the file to link the shared mods against. //Simon From tim.peters at gmail.com Thu Apr 13 18:57:55 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 13 Apr 2006 12:57:55 -0400 Subject: [Python-Dev] [Python-checkins] r45334 - python/trunk/Lib/test/leakers/test_gen1.py python/trunk/Lib/test/leakers/test_generator_cycle.py python/trunk/Lib/test/leakers/test_tee.py In-Reply-To: <fb6fbf560604130945x6f56a541h4c47a0fb2520e91f@mail.gmail.com> References: <20060413043540.51D6F1E404D@bag.python.org> <fb6fbf560604130945x6f56a541h4c47a0fb2520e91f@mail.gmail.com> Message-ID: <1f7befae0604130957p36a15ba5l26160dba0635bd9c@mail.gmail.com> [neal.norwitz] >> Author: neal.norwitz >> Date: Thu Apr 13 06:35:36 2006 >> New Revision: 45334 >> >> Added: >> python/trunk/Lib/test/leakers/test_gen1.py (contents, props changed) >> Removed: >> python/trunk/Lib/test/leakers/test_generator_cycle.py >> python/trunk/Lib/test/leakers/test_tee.py >> Log: >> Remove tests that no longer leak. There is still one leaking generator test [Jim Jewett] > Should these really be removed, or just added to the regular tests -- > to ensure that the leakage doesn't get worse. I was going to ask much the same: bugs that have been fixed have a delightful way of reappearing a release or two (or ...) later, perhaps because they traversed unusually delicate code paths to begin with. Rather than delete a leak test, perhaps we could simply move it into a new old-leaking-tests subdirectory? Likewise for crash tests. 
When the bug reappears, it's helfpul to have the focussed (whittled-down) old test that provoked it handy. From steven.bethard at gmail.com Thu Apr 13 19:29:48 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 11:29:48 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <e1ludl$go1$1@sea.gmane.org> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <e1ludl$go1$1@sea.gmane.org> Message-ID: <d11dcfba0604131029v3c324cf6j97e906d80cdbc46@mail.gmail.com> On 4/13/06, Travis Oliphant <oliphant.travis at ieee.org> wrote: > Steven Bethard wrote: > > I know 2.5's not out yet, but since I now have a PEP number, I'm going > > to go ahead and post this for discussion. Currently, the target > > version is Python 2.6. You can also see the PEP at: > > http://www.python.org/dev/peps/pep-0359/ > > > > Thanks in advance for the feedback! > > I generally like the idea. A different name would be better. I'm going to sidestep the keyword issue for the moment. It's basically a style decision, and I try to leave those to Guido. Once (if) people have agreed upon the statement semantics, then we can discuss the keyword. STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From theller at python.net Thu Apr 13 19:42:08 2006 From: theller at python.net (Thomas Heller) Date: Thu, 13 Apr 2006 19:42:08 +0200 Subject: [Python-Dev] ia64 debian buildbot Message-ID: <e1m2hg$u05$1@sea.gmane.org> [Please let me know if this should go to the machine owner instead] Why does the ia64 debian buildbot now complain about unaligned accesses, and how can we find out where they occur? http://www.python.org/dev/buildbot/trunk/ia64%20Debian%20unstable%20trunk/builds/123/step-test/0 Thanks, Thomas From ncoghlan at gmail.com Thu Apr 13 19:50:11 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 14 Apr 2006 03:50:11 +1000 Subject: [Python-Dev] Exceptions doctest Re: Request for review In-Reply-To: <1f7befae0604130808q309004f6w34be2356d4a36a43@mail.gmail.com> References: <1f7befae0604130808q309004f6w34be2356d4a36a43@mail.gmail.com> Message-ID: <443E8F53.5060609@gmail.com> Tim Peters wrote: > Sorry, but it's horridly un-doctest-like to make inferences by magic. > If you want to add an explicit ACCEPT_EXCEPTION_SUBCLASS doctest > option, that's a different story. I would question the need, since > I've never got close to needing it, and never heard anyone else bring > it up. Is this something that happens to you often enough to be an > irritation, or is it just that your brain tells you it's a cool idea? > One reason I have to ask is that I'm pretty sure the example you gave > is one that never bit you in real life. > You'd be guessing right - and the explanation of how you meant the term makes a lot of sense, too. Consider my crazy idea withdrawn :) A real limitation I *have* encountered is that doctest doesn't like the idea of a single statement that triggers normal output followed by an exception (it expects one or the other). However, the case where I wanted that was rather esoteric and easy enough to deal with by using a custom result checker. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From steven.bethard at gmail.com Thu Apr 13 20:05:26 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 12:05:26 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <443E7F02.6090407@v.loewis.de> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> Message-ID: <d11dcfba0604131105q576acd53h180d10dbfb89de8@mail.gmail.com> On 4/13/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Steven Bethard wrote: > > I know 2.5's not out yet, but since I now have a PEP number, I'm going > > to go ahead and post this for discussion. Currently, the target > > version is Python 2.6. You can also see the PEP at: > > http://www.python.org/dev/peps/pep-0359/ > > > > Thanks in advance for the feedback! > [snip] > > Would it be possible/useful to have a pre-block hook to the callable, > which would provide the dictionary; this dictionary might not be > a proper dictionary (but only some mapping), or it might be pre-initialized. Yeah, something along these lines came up in dicussing using the make statement for XML generation. You might want to write something like: make Element html: make Element head: ... make Element body: ... however, this doesn't work with the current semantics because: (1) dict's are unordered (2) dict's can't have the same name (key) twice and so you can only generate XML/HTML where the order of elements doesn't matter and you never have repeated elements. That's not really XML/HTML anymore. You could probably solve this if you could supply a different type of dict-like object for the block to be executed in. Then we'd have to have a translation from something like:: make <callable> <name> <tuple> in <mapping>: <block> to something like:: <name> = <callable>("<name>", <tuple>, <namespace>) where <namespace> is created by executing the statements of <block> in the <mapping> object. Skipping the syntax discussion for the moment, I guess I have two problems with this: (1) It complicates the statement semantics pretty substantially (2) It breaks the parallel with the class statement since you can't supply an alternate mapping type for class bodies to be executed in (3) It adds some degree of coupling between the mapping type and the callable. For the example above, I expect I'd have to do something like:: make Element html in ElementDict(): make Element head in ElementDict(): ... make Element body in ElementDict(): ... where I'd expect that ElementDict was really only useful if you were making Elements. It also seems a little inelegant to have to create a new ElementDict each time. To avoid this latter problem, I guess we could allow Element to supply a __getmapping__ hook which would be called to get the mapping to execute the block in, but then simple functions would not have access to the full functionality of the make statement. I guess I'm willing to consider the idea, but only if someone else presents a concrete proposal for the additional semantics (that's better than the one above) and a few use-cases showing how it would improve some code. STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From pje at telecommunity.com Thu Apr 13 20:30:07 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Thu, 13 Apr 2006 14:30:07 -0400 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604131105q576acd53h180d10dbfb89de8@mail.gmail.com > References: <443E7F02.6090407@v.loewis.de> <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> Message-ID: <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> At 12:05 PM 4/13/2006 -0600, Steven Bethard wrote: >On 4/13/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > Steven Bethard wrote: > > > I know 2.5's not out yet, but since I now have a PEP number, I'm going > > > to go ahead and post this for discussion. Currently, the target > > > version is Python 2.6. You can also see the PEP at: > > > http://www.python.org/dev/peps/pep-0359/ > > > > > > Thanks in advance for the feedback! > > >[snip] > > > > Would it be possible/useful to have a pre-block hook to the callable, > > which would provide the dictionary; this dictionary might not be > > a proper dictionary (but only some mapping), or it might be > pre-initialized. > >Yeah, something along these lines came up in dicussing using the make >statement for XML generation. You might want to write something like: > > make Element html: > make Element head: > ... > make Element body: > ... > >however, this doesn't work with the current semantics because: > >(1) dict's are unordered >(2) dict's can't have the same name (key) twice Why not just use 'with' statements for this? The context gets invoked before and after, so it can track any local bindings that occur -- including the 'as' binding. with maker(...) as foo: blah blah While it's true that frame-locals hacking isn't especially portable, you can at least use it to develop your actual use cases for this stuff, and then show why the syntax is awkward or doesn't support something. From thomas at xs4all.net Wed Apr 12 11:24:34 2006 From: thomas at xs4all.net (Thomas Wouters) Date: Wed, 12 Apr 2006 11:24:34 +0200 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> Message-ID: <20060412092434.GA4974@xs4all.nl> On Wed, Apr 12, 2006 at 01:16:04AM -0400, Tim Peters wrote: > I'd whine about not checking buildbot health after a code change, > except that it's much more tempting to point out that Thomas couldn't > have run the test suite successfully on his own box in this case :-) Not with -uall, yes. Without -uall it ran fine (by chance, I admit.) I didn't feel like running the rather lengthy -uall test on my box right before bedtime; I won't be skipping that again anytime soon ;) -- Thomas Wouters <thomas at xs4all.net> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! From jimjjewett at gmail.com Thu Apr 13 20:54:11 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Thu, 13 Apr 2006 14:54:11 -0400 Subject: [Python-Dev] [Python-3000] Removing 'self' from method definitions In-Reply-To: <443E9193.7000908@colorstudy.com> References: <443E7F47.9060308@colorstudy.com> <20060413171301.GA9869@localhost.localdomain> <443E9193.7000908@colorstudy.com> Message-ID: <fb6fbf560604131154o1b5ef33fg19d8c01a7e887118@mail.gmail.com> (Cc to python dev, as my question is really about 2.x) On 4/13/06, Ian Bicking <ianb at colorstudy.com> wrote: > ... the self in the signature (and the miscount of arguments > in TypeError exceptions) ... Even in the 2.x line, the TypeError messages should be improved. 
>>> # No syntax errors when creating m()
>>> class C:
        def m(): pass

but the method can't actually be called, and won't quite say why.

>>> # This message is good
>>> C.m()

Traceback (most recent call last):
  File "<pyshell#101>", line 1, in -toplevel-
    C.m()
TypeError: unbound method m() must be called with C instance as first argument (got nothing instead)

but the obvious fix of using an instance is just confusing

>>> C().m()

Traceback (most recent call last):
  File "<pyshell#102>", line 1, in -toplevel-
    C().m()
TypeError: m() takes no arguments (1 given)

Could it at least say something like "(1 given, including self)"?

I suppose the argument might be named something other than self, particularly in C code, but ... that strikes me as a smaller problem than the mysteriously appearing invisible argument.

-jJ

From ian.bollinger at gmail.com Thu Apr 13 21:14:13 2006
From: ian.bollinger at gmail.com (Ian D. Bollinger)
Date: Thu, 13 Apr 2006 15:14:13 -0400
Subject: [Python-Dev] PEP 359: The "make" Statement
In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com>
References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com>
Message-ID: <443EA305.70805@gmail.com>

I guess I fail to see how this syntax is a significant improvement over metaclasses (though __metaclass__ = xyz may not be the most aesthetic construct.)

-- Ian D. Bollinger

From martin at v.loewis.de Thu Apr 13 21:32:15 2006
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Thu, 13 Apr 2006 21:32:15 +0200
Subject: [Python-Dev] ia64 debian buildbot
In-Reply-To: <e1m2hg$u05$1@sea.gmane.org>
References: <e1m2hg$u05$1@sea.gmane.org>
Message-ID: <443EA73F.90301@v.loewis.de>

Thomas Heller wrote:
> Why does the ia64 debian buildbot now complain about unaligned accesses,
> and how can we find out where they occur?

I don't know why they started to show up suddenly; on Debian-Itanium, it is a configuration option (even per process, through prctl(1)) whether they just produce a log message, or a signal, or nothing; if they produce a log message, this also goes to the process' terminal.

These unaligned accesses must have been there for some time now; perhaps something on the machine has changed. Matthias?

Finding where they originate from is really hard. You need to set the process into the "signal unaligned access" mode, and then run it under a debugger, so you know where it crashes. This requires shell access to the machine.

I have tried to reproduce it on my Itanium machine, and found that they come from the AST compiler's arena (I /knew/ it was wrong to implement your own memory management algorithms :-): it was returning memory blocks that were only 4-aligned, so the pointers in the ASDL sequences were all unaligned; hence the errors.

I changed that to always guarantee 8-alignment for all blocks returned from the arena; that seems to help.

Regards,
Martin

From steven.bethard at gmail.com Thu Apr 13 21:47:41 2006
From: steven.bethard at gmail.com (Steven Bethard)
Date: Thu, 13 Apr 2006 13:47:41 -0600
Subject: [Python-Dev] PEP 359: The "make" Statement
In-Reply-To: <443EA305.70805@gmail.com>
References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443EA305.70805@gmail.com>
Message-ID: <d11dcfba0604131247s713588f7u357140d7b71a9a85@mail.gmail.com>

On 4/13/06, Ian D. Bollinger <ian.bollinger at gmail.com> wrote:
> I guess I fail to see how this syntax is a significant improvement over
> metaclasses (though __metaclass__ = xyz may not be the most aesthetic
> construct.)

It doesn't seem strange to you to have to use a *class* statement and a __meta*class*__ hook to create something that's not a class at all? Consider
It doesn't seem strange to you to have to use a *class* statement and a __meta*class*__ hook to create something that's not a class at all? Consider >>> def get_dict(name, args, kwargs): ... return kwargs ... >>> class C(object): ... __metaclass__ = get_dict ... x = 1 ... y = 2 ... >>> C {'y': 2, 'x': 1, '__module__': '__main__', '__metaclass__': <function get_dict at 0x00DB9F70>} When I read a class statement, even if it specifies __metaclass__, I assume that it will create a class object. I believe the average reader of Python code will make similar assumptions. Sure, we can abuse class/__metaclass__ to do something similar[1], but is that really a good idea? [1] Minor issue - you have to be okay with the class statement adding __module__ and __metaclass__ to your dict. Steve -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From steven.bethard at gmail.com Thu Apr 13 21:51:50 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 13:51:50 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> Message-ID: <d11dcfba0604131251h5aadd390q81930eeda258cf1b@mail.gmail.com> On 4/13/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 12:05 PM 4/13/2006 -0600, Steven Bethard wrote: > >On 4/13/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > > Steven Bethard wrote: > > > > I know 2.5's not out yet, but since I now have a PEP number, I'm going > > > > to go ahead and post this for discussion. Currently, the target > > > > version is Python 2.6. You can also see the PEP at: > > > > http://www.python.org/dev/peps/pep-0359/ > > > > > > > > Thanks in advance for the feedback! > > > > >[snip] > > > > > > Would it be possible/useful to have a pre-block hook to the callable, > > > which would provide the dictionary; this dictionary might not be > > > a proper dictionary (but only some mapping), or it might be > > pre-initialized. > > > >Yeah, something along these lines came up in dicussing using the make > >statement for XML generation. You might want to write something like: > > > > make Element html: > > make Element head: > > ... > > make Element body: > > ... > > > >however, this doesn't work with the current semantics because: > > > >(1) dict's are unordered > >(2) dict's can't have the same name (key) twice > > Why not just use 'with' statements for this? The context gets invoked > before and after, so it can track any local bindings that occur -- > including the 'as' binding. > > with maker(...) as foo: > blah blah > > While it's true that frame-locals hacking isn't especially portable, you > can at least use it to develop your actual use cases for this stuff, and > then show why the syntax is awkward or doesn't support something. Sorry, I'm not clear on exactly what you're suggesting. Are you suggesting I try to implement the make-statement using context managers? Or that I use a context manager to address Martin's problem? STeVe -- Grammar am for people who can't think for myself. 
--- Bucky Katt, Get Fuzzy From theller at python.net Thu Apr 13 22:02:19 2006 From: theller at python.net (Thomas Heller) Date: Thu, 13 Apr 2006 22:02:19 +0200 Subject: [Python-Dev] ia64 debian buildbot In-Reply-To: <443EA73F.90301@v.loewis.de> References: <e1m2hg$u05$1@sea.gmane.org> <443EA73F.90301@v.loewis.de> Message-ID: <e1maob$pt0$1@sea.gmane.org> Martin v. L?wis wrote: > Thomas Heller wrote: >> Why does the ia64 debian buildbot now complain about unaligned accesses, >> and how can we find out where they occur? > > I don't know why they started show up suddenly; on Debian-Itanium, it > is a configuration option (even per process, through prctl(1)) whether > they just produce a log message, or a signal, or nothing; if they > produce a log message, this also goes to the process' terminal. > > These unaligned access must have been there for some time now; > perhaps something on the machine has changed. Matthias? > > Finding where they originate from is really hard. You need to > set the process into the "signal unaligned access" mode, and > then run it under a debugger, so you know where it crashes. > This requires shell access to the machine. > > I have tried to reproduce it on my Itanium machine, and found that > that they come from the AST compiler's arena (I /knew/ it was > wrong to implement your own memory management algorithms :-): it > was returning memory blocks that were only 4-aligned, so the > the pointers in the ASDL sequences were all unaligned; hence the > errors. > > I changed that to always guarantee 8-alignment for all blocks > returned from the arena; that seems to help. Cool - thanks. I'm anxiously waiting until the buildbot runs the ctypes-test ;-). Thomas From pje at telecommunity.com Thu Apr 13 22:05:16 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 13 Apr 2006 16:05:16 -0400 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604131251h5aadd390q81930eeda258cf1b@mail.gmail.co m> References: <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> At 01:51 PM 4/13/2006 -0600, Steven Bethard wrote: >Sorry, I'm not clear on exactly what you're suggesting. Are you >suggesting I try to implement the make-statement using context >managers? Or that I use a context manager to address Martin's >problem? Yes. :) Both. Or neither. What I'm suggesting is that you implement the *use cases* for the make statement using 'with' and a bit of getframe hackery. Then, your PEP can be clearer as to whether there's actually any significant advantage to having a "make" statement. IOW, if "make" isn't anything more than yet another way to spell class decorators, metaclasses, or "with" statements, it's probably not a good idea to add it to the language. 
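(For concreteness, a rough sketch of the 'with' + getframe approach being suggested here -- the ``maker`` helper and ``namespace`` factory below are hypothetical names invented for illustration, not anything that exists, and rebinding the result through ``f_locals`` is only reliable at module scope:)

    import sys

    _missing = object()

    class maker(object):
        """Hypothetical sketch (not Phillip's code): collect the names bound
        inside the 'with' block by diffing the caller's namespace before and
        after the block, then pass them to a callable, make-statement style."""

        def __init__(self, callable_, name, args=()):
            self.callable_, self.name, self.args = callable_, name, args

        def __enter__(self):
            self._frame = sys._getframe(1)           # frame running the 'with' block
            self._before = dict(self._frame.f_locals)
            return None                              # the 'as' name is rebound in __exit__

        def __exit__(self, exc_type, exc_val, exc_tb):
            if exc_type is not None:
                return False                         # don't swallow exceptions
            current = self._frame.f_locals
            namespace = dict((k, v) for k, v in current.items()
                             if k != self.name
                             and self._before.get(k, _missing) is not v)
            # Works at module scope, where f_locals is the real globals dict;
            # inside a function this write-back is silently lost.
            current[self.name] = self.callable_(self.name, self.args, namespace)
            return False

    def namespace(name, args, ns):
        return type(name, (object,), ns)             # simplest possible consumer

    with maker(namespace, 'Config') as Config:
        badger = 42
        def spam():
            return 'spam'

    print(Config.badger)                             # 42

Note that the block's bindings (``badger``, ``spam``) also leak into the enclosing namespace, which is exactly the kind of wart the frame hackery can't hide -- but it may be good enough for prototyping the use cases.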
From steven.bethard at gmail.com Thu Apr 13 22:21:15 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 14:21:15 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> Message-ID: <d11dcfba0604131321w18371e5bm3d9bac9cd396662c@mail.gmail.com> On 4/13/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 01:51 PM 4/13/2006 -0600, Steven Bethard wrote: > >Sorry, I'm not clear on exactly what you're suggesting. Are you > >suggesting I try to implement the make-statement using context > >managers? Or that I use a context manager to address Martin's > >problem? > > Yes. :) Both. Or neither. What I'm suggesting is that you implement the > *use cases* for the make statement using 'with' and a bit of getframe > hackery. Then, your PEP can be clearer as to whether there's actually any > significant advantage to having a "make" statement. > > IOW, if "make" isn't anything more than yet another way to spell class > decorators, metaclasses, or "with" statements, it's probably not a good > idea to add it to the language. I'm against using anything with getframe hackery but here are the use cases written with the class/__metaclass__ abuse: class C(object): ... class x: __metaclass__ = property def get(self): ... def set(self): class ns: """This creates a namespace named ns with a badger attribute and a spam function""" __metaclass__ = namespace badger = 42 def spam(): ... class C(...): __metaclass__ = iterface ... Those should be mostly equivalent[1], except that all of the namespaces created will have additional __metaclass__ and __module__ attributes. The question is, is the intent still clear? When reading these, especially for the "namespace" example, I expect the result to be a class. The fact that it's not will be thoroughly confusing for anyone who doesn't know that metaclasses don't have to create class objects, and at least mildly misleading even for those who do understand metaclasses. Generally, the make statement has the advantage of not totally confusing your reader when your "class" statement creates something which is not a class at all. ;-) [1] So you don't have to check the PEP, here's what the make-statement versions look like: class C(object): ... make property x: def get(self): ... def set(self): ... make namespace ns: """This creates a namespace named ns with a badger attribute and a spam function""" badger = 42 def spam(): ... make interface C(...): ... STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From tjreedy at udel.edu Thu Apr 13 22:55:19 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 13 Apr 2006 16:55:19 -0400 Subject: [Python-Dev] [Python-3000] Removing 'self' from methoddefinitions References: <443E7F47.9060308@colorstudy.com><20060413171301.GA9869@localhost.localdomain><443E9193.7000908@colorstudy.com> <fb6fbf560604131154o1b5ef33fg19d8c01a7e887118@mail.gmail.com> Message-ID: <e1mdt1$52t$1@sea.gmane.org> "Jim Jewett" <jimjjewett at gmail.com> wrote in message news:fb6fbf560604131154o1b5ef33fg19d8c01a7e887118 at mail.gmail.com... > >>> # No syntax errors when creating m() > >>> class C: > def m(): pass > > but the method can't actually be called Unless it is wrapped as a staticmethod ;-) ... 
> >>> C().m() > > Traceback (most recent call last): > File "<pyshell#102>", line 1, in -toplevel- > C().m() > TypeError: m() takes no arguments (1 given) > > Could it at least say something like "(1 given, including self)"? or perhaps '(self + 0 given' or '(instance + 0 more given)' Terry Jan Reedy From pje at telecommunity.com Thu Apr 13 23:04:22 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 13 Apr 2006 17:04:22 -0400 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604131321w18371e5bm3d9bac9cd396662c@mail.gmail.co m> References: <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060413164949.01e193c8@mail.telecommunity.com> At 02:21 PM 4/13/2006 -0600, Steven Bethard wrote: >On 4/13/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 01:51 PM 4/13/2006 -0600, Steven Bethard wrote: > > >Sorry, I'm not clear on exactly what you're suggesting. Are you > > >suggesting I try to implement the make-statement using context > > >managers? Or that I use a context manager to address Martin's > > >problem? > > > > Yes. :) Both. Or neither. What I'm suggesting is that you implement the > > *use cases* for the make statement using 'with' and a bit of getframe > > hackery. Then, your PEP can be clearer as to whether there's actually any > > significant advantage to having a "make" statement. > > > > IOW, if "make" isn't anything more than yet another way to spell class > > decorators, metaclasses, or "with" statements, it's probably not a good > > idea to add it to the language. > >I'm against using anything with getframe hackery That's irrelevant; the point is that your PEP should show why a statement is *better*, either by showing the hackery isn't possible, isn't practical, or at least is hacky. > but here are the use >cases written with the class/__metaclass__ abuse: How is it *abuse* to use a language feature as it's designed? >The question is, is the intent still clear? Depends on your use case. I'm just saying that the PEP would be tons clearer if it started out by saying right up front that its goal is to provide a prominent form of syntax sugar for __metaclass__ uses in cases where the thing being create isn't particularly class-like. At the moment, that's not particularly clear from the PEP, which talks about generalizing the class declaration syntax -- something that's already sufficiently general, for classes. The PEP also doesn't explain why, for example, it shouldn't simply allow 'callable' to be an arbitrary expression, invoked with 'callable(name,namespace)'. I'd presume this is to allow backward compatibility with metaclasses... except the statement is for making things that *aren't* classes, so why is compatibility needed? The PEP also doesn't explain why the particular syntax order is chosen. Why is it "make callable name(tuple):" and not, say, "make name(tuple) from callable:" or "name = callable(tuple):" (which doesn't require a new keyword). 
From p.f.moore at gmail.com Thu Apr 13 23:11:04 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 13 Apr 2006 22:11:04 +0100 Subject: [Python-Dev] Building Python with the free MS Toolkit compiler Message-ID: <79990c6b0604131411j32a53793g5845107d3a648863@mail.gmail.com> I've just added some instructions on how to build Python on Windows with the free MS Toolkit C++ compiler. They are at http://wiki.python.org/moin/Building_Python_with_the_free_MS_C_Toolkit. Most of the credit for this goes to David Murmann, whose posting on the subject to python-list pointed out the existence of Nant, a free tool which will read Visual Studio solution files, and inspired me to see if I could get the process to work. If anyone finds any problems with the instructions given there, feel free to let me know (or better still, update the page with your correction!) Paul. From ianb at colorstudy.com Thu Apr 13 23:12:28 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 13 Apr 2006 16:12:28 -0500 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604131105q576acd53h180d10dbfb89de8@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <d11dcfba0604131105q576acd53h180d10dbfb89de8@mail.gmail.com> Message-ID: <443EBEBC.5000506@colorstudy.com> Steven Bethard wrote: > On 4/13/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > >>Steven Bethard wrote: >> >>>I know 2.5's not out yet, but since I now have a PEP number, I'm going >>>to go ahead and post this for discussion. Currently, the target >>>version is Python 2.6. You can also see the PEP at: >>> http://www.python.org/dev/peps/pep-0359/ >>> >>>Thanks in advance for the feedback! >> > [snip] > >>Would it be possible/useful to have a pre-block hook to the callable, >>which would provide the dictionary; this dictionary might not be >>a proper dictionary (but only some mapping), or it might be pre-initialized. > > > Yeah, something along these lines came up in dicussing using the make > statement for XML generation. You might want to write something like: > > make Element html: > make Element head: > ... > make Element body: > ... > > however, this doesn't work with the current semantics because: > > (1) dict's are unordered > (2) dict's can't have the same name (key) twice Is the body of the make statement going to work like the body of a class statement? I would assume so, in which case (2) would be a given. That is, if you can do: make Element html: title_text = 'foo' make Element title: content = title_text del title_text Then you really can't have multiple keys with the same name unless you give up the ability to refer in the body of the make statement to things defined earlier in that same body. Unless items that were rebound were hidden, but still somehow accessible to Element. > and so you can only generate XML/HTML where the order of elements > doesn't matter and you never have repeated elements. That's not > really XML/HTML anymore. > > You could probably solve this if you could supply a different type of > dict-like object for the block to be executed in. Then we'd have to > have a translation from something like:: > > make <callable> <name> <tuple> in <mapping>: > <block> > > to something like:: > > <name> = <callable>("<name>", <tuple>, <namespace>) > > where <namespace> is created by executing the statements of <block> in > the <mapping> object. 
Skipping the syntax discussion for the moment, > I guess I have two problems with this: > > (1) It complicates the statement semantics pretty substantially > (2) It breaks the parallel with the class statement since you can't > supply an alternate mapping type for class bodies to be executed in > (3) It adds some degree of coupling between the mapping type and the > callable. For the example above, I expect I'd have to do something > like:: > > make Element html in ElementDict(): > make Element head in ElementDict(): > ... > make Element body in ElementDict(): > ... Maybe Element.__make_dict__ could be ElementDict. This doesn't feel that unclean if you are also using Element.__make__ instead of Element.__call__; though there is a hidden cleverness factor (maybe in a bad way). -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From steven.bethard at gmail.com Thu Apr 13 23:23:08 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Thu, 13 Apr 2006 15:23:08 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <5.1.1.6.0.20060413164949.01e193c8@mail.telecommunity.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <443E7F02.6090407@v.loewis.de> <5.1.1.6.0.20060413142520.01fe80d8@mail.telecommunity.com> <5.1.1.6.0.20060413155735.01fff518@mail.telecommunity.com> <5.1.1.6.0.20060413164949.01e193c8@mail.telecommunity.com> Message-ID: <d11dcfba0604131423r24b1bdd2h7ff49183b1e201fe@mail.gmail.com> On 4/13/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 02:21 PM 4/13/2006 -0600, Steven Bethard wrote: [snip examples using class/__metaclass__ statements to create non-types] > >The question is, is the intent still clear? > > Depends on your use case. I'm just saying that the PEP would be tons > clearer if it started out by saying right up front that its goal is to > provide a prominent form of syntax sugar for __metaclass__ uses in cases > where the thing being create isn't particularly class-like. Good point. I'll try to elaborate on that a bunch. > The PEP also doesn't explain why, for example, it shouldn't simply allow > 'callable' to be an arbitrary expression, invoked with > 'callable(name,namespace)'. I'd presume this is to allow backward > compatibility with metaclasses... except the statement is for making > things that *aren't* classes, so why is compatibility needed? > > The PEP also doesn't explain why the particular syntax order is > chosen. Why is it "make callable name(tuple):" and not, say, "make > name(tuple) from callable:" or "name = callable(tuple):" (which doesn't > require a new keyword). Thanks for the feedback. I'll put that into the PEP too, but in short the answer is that the particular syntax follows the class syntax exactly except that "class" is replaced by "make <callable>". There have been a lot of other proposals that try to introduce dramatically new syntax for these sorts of things, e.g. `Nick Coghlan's suggestion`_ to allow ** to indicate that keyword arguments would be supplied on the following block, e.g.:: options = dict(**): def option1(*args, **kwds): # Do option 1 def option2(*args, **kwds): # Do option 2 (Nick's doesn't solve the DRY issue when you need to know the name of the thing being created, but I'll ignore that for the moment.) Basically, I was trying to be as un-creative syntactically as possible, hoping that Python programmers would be able to translate their knowledge of how the class statement works into knowledge of how the make statement works. .. 
_Nick Coghlan's suggestion: http://mail.python.org/pipermail/python-3000/2006-April/000689.html STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From kbk at shore.net Fri Apr 14 04:42:58 2006 From: kbk at shore.net (Kurt B. Kaiser) Date: Thu, 13 Apr 2006 22:42:58 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200604140242.k3E2gw9n005550@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 383 open ( -8) / 3156 closed (+14) / 3539 total ( +6) Bugs : 886 open (-12) / 5759 closed (+28) / 6645 total (+16) RFE : 210 open ( -5) / 212 closed ( +5) / 422 total ( +0) New / Reopened Patches ______________________ socket.py: remove misleading comment (2006-04-08) CLOSED http://python.org/sf/1466993 opened by John J Lee Many functions in socket module are not thread safe (2006-04-09) http://python.org/sf/1467080 opened by Maxim Sobolev fix double free with tqs in standard build (2006-04-09) CLOSED http://python.org/sf/1467512 opened by Neal Norwitz add Popen objects to _active only in __del__, (2006-04-10) CLOSED http://python.org/sf/1467770 opened by cheops tkFont: simple fix to error at shutdown (2006-04-11) CLOSED http://python.org/sf/1468808 opened by Russell Owen Make -tt the default (2006-04-12) http://python.org/sf/1469190 opened by Thomas Wouters PyErrr_Display and traceback.format_exception_only behaviour (2003-12-15) CLOSED http://python.org/sf/860326 reopened by tim_one Alternative to rev 45325 (2006-04-12) http://python.org/sf/1469594 opened by Skip Montanaro Alternative to rev 45325 (2006-04-12) http://python.org/sf/1469594 reopened by montanaro Patches Closed ______________ socket.py: remove misleading comment (2006-04-08) http://python.org/sf/1466993 closed by gbrandl Use dlopen() to load extensions on Darwin, where possible (2006-03-21) http://python.org/sf/1454844 closed by anthonybaxter Improved generator finalization (2006-04-03) http://python.org/sf/1463867 closed by pje "const int" was truncated to "char" (2006-03-31) http://python.org/sf/1461822 closed by ocean-city Mutable Iterators PEP (03/26/06) http://python.org/sf/1459011 closed by sf-robot fix double free with tqs in standard build (2006-04-09) http://python.org/sf/1467512 closed by nnorwitz clean up new calendar locale usage (2006-04-03) http://python.org/sf/1463288 closed by doerwalter Functioning Tix.Grid (2006-03-31) http://python.org/sf/1462222 closed by loewis Link Python modules to libpython on linux if --enable-shared (2006-02-11) http://python.org/sf/1429775 closed by loewis add Popen objects to _active only in __del__, (2006-04-10) http://python.org/sf/1467770 closed by loewis subprocess: optional auto-reaping fixing os.wait() lossage (2005-04-21) http://python.org/sf/1187312 closed by loewis tkFont: simple fix to error at shutdown (2006-04-11) http://python.org/sf/1468808 closed by gbrandl PEP 343 draft documentation (2005-07-07) http://python.org/sf/1234057 closed by ncoghlan PyErrr_Display and traceback.format_exception_only behaviour (2003-12-15) http://python.org/sf/860326 closed by gbrandl Alternative to rev 45325 (2006-04-12) http://python.org/sf/1469594 closed by montanaro 832799 proposed changes (2003-11-25) http://python.org/sf/849262 closed by loewis Updated Demo\parser\unpack.py (2006-03-02) http://python.org/sf/1441452 closed by loewis New / Reopened Bugs ___________________ size_t warnings on OSX 10.3 (2006-04-09) http://python.org/sf/1467201 opened by Anthony Baxter open should default to binary mode on windows 
(2006-04-09) CLOSED http://python.org/sf/1467309 opened by Adam Hupp test_ctypes fails on OSX 10.3 (2006-04-10) http://python.org/sf/1467450 opened by Anthony Baxter Header.decode_header eats up spaces (2006-04-10) http://python.org/sf/1467619 opened by Mathieu Goutelle %-formatting and dicts (2006-04-10) http://python.org/sf/1467929 opened by M.-A. Lemburg os.listdir on Linux returns empty list on some errors (2006-04-10) CLOSED http://python.org/sf/1467952 opened by Gary Stiehr re incompatibility in sre (2000-09-11) http://python.org/sf/214033 reopened by tmick Hitting CTRL-C while in a loop closes IDLE on cygwin (2006-04-11) http://python.org/sf/1468223 opened by Miki Tebeka test_grp test_pwd fail on Linux (2006-04-11) CLOSED http://python.org/sf/1468231 opened by Miki Tebeka Possible Integer overflow (2006-04-11) CLOSED http://python.org/sf/1468727 opened by ekellinis SimpleXMLRPCServer doesn't work anymore on Windows (2006-04-12) CLOSED http://python.org/sf/1469163 opened by kadeko Exception when handling http response (2006-04-12) CLOSED http://python.org/sf/1469498 opened by Richard Kasperski Build requires already built Python (2006-04-12) CLOSED http://python.org/sf/1469548 opened by Stephan R.A. Deibel FTP modue functions are not re-entrant,give odd exceptions (2006-04-12) http://python.org/sf/1469557 opened by Bruce __dict__ (2006-04-13) http://python.org/sf/1469629 opened by Dobes V distutils.core: link to list of Trove classifiers (2006-04-13) http://python.org/sf/1470026 opened by Ansel Halliburton Bugs Closed ___________ socket.ssl won't work together with socket.settimeout on Win (2006-03-31) http://python.org/sf/1462352 closed by loewis Race condition in popen2 (02/08/04) http://python.org/sf/892939 closed by sf-robot Popen3.poll race condition (07/26/04) http://python.org/sf/998246 closed by sf-robot Bogus SyntaxError in listcomp (2006-04-08) http://python.org/sf/1466641 closed by twouters open should default to binary mode on windows (2006-04-09) http://python.org/sf/1467309 closed by twouters Tix.Grid widgets not implemented yet (2004-09-28) http://python.org/sf/1036406 closed by loewis ctypes should be able to link with installed libffi (2006-04-04) http://python.org/sf/1464444 closed by loewis id() for large ptr should return a long (2003-11-06) http://python.org/sf/837242 closed by loewis Please link modules with shared lib (2003-10-30) http://python.org/sf/832799 closed by loewis Why not drop the _active list? 
(2006-03-29) http://python.org/sf/1460493 closed by loewis subprocess auto-reaps children (2005-06-04) http://python.org/sf/1214859 closed by loewis os.listdir on Linux returns empty list on some errors (2006-04-10) http://python.org/sf/1467952 closed by gbrandl test_grp test_pwd fail on Linux (2006-04-11) http://python.org/sf/1468231 closed by loewis Possible Integer overflow (2006-04-11) http://python.org/sf/1468727 closed by mwh logging.StreamHandler ignores argument if it compares False (2006-04-03) http://python.org/sf/1463840 closed by vsajip infrequent memory leak in pyexpat (2001-04-15) http://python.org/sf/416288 closed by gbrandl Python 2.3.3 make test core dump on HP-UX11i (2003-12-22) http://python.org/sf/864379 closed by loewis 2.3.5 SRPM fails to build without tkinter installed (2005-07-31) http://python.org/sf/1248997 closed by gbrandl SimpleXMLRPCServer doesn't work anymore on Windows (2006-04-12) http://python.org/sf/1469163 closed by anthonybaxter inspect.getdoc fails on objs that use property for __doc__ (2005-11-26) http://python.org/sf/1367183 closed by gbrandl Old bsddb version picked by setup.py (2003-07-02) http://python.org/sf/764839 closed by greg bsddb: bug with 'n' flag (2005-02-22) http://python.org/sf/1149413 closed by greg BSDDB openhash (2005-02-07) http://python.org/sf/1117761 closed by greg Exception when handling http response (2006-04-12) http://python.org/sf/1469498 closed by gbrandl Build requires already built Python (2006-04-13) http://python.org/sf/1469548 closed by anthonybaxter Windows _bsddb link warnings (2003-08-01) http://python.org/sf/781614 closed by loewis embedding Python causes memory leaks (2006-03-08) http://python.org/sf/1445210 closed by loewis Changes to generator object's gi_frame attr (2006-04-04) http://python.org/sf/1464571 closed by akuchling whatsnew 2.5: pep 353, python -m (2006-03-10) http://python.org/sf/1447031 closed by akuchling BSD DB test failures for BSD DB 3.2 (2005-10-19) http://python.org/sf/1332852 closed by greg ImportError: Module _subprocess not found (2006-04-07) http://python.org/sf/1466301 closed by loewis RFE Closed __________ Scripts invoked by -m should trim exceptions (2006-04-01) http://python.org/sf/1462486 closed by ncoghlan sys.setatomicexecution - for critical sections (2006-03-19) http://python.org/sf/1453341 closed by ncoghlan win32 silent installation: allow MAINDIR on command line (2004-10-04) http://python.org/sf/1039857 closed by sigmud Support for MSVC 7 and MSVC8 in msvccompiler (2006-02-06) http://python.org/sf/1425256 closed by loewis old/new class documentation (2004-05-25) http://python.org/sf/960454 closed by jimjjewett From anthony at interlink.com.au Fri Apr 14 04:58:29 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Fri, 14 Apr 2006 12:58:29 +1000 Subject: [Python-Dev] =?iso-8859-1?q?=5BPython-checkins=5D_r45321_-_in_pyt?= =?iso-8859-1?q?hon/trunk=3A=09Lib/test/test=5Ftraceback=2Epy_Lib/tracebac?= =?iso-8859-1?q?k=2Epy_Misc/NEWS?= In-Reply-To: <443E7CF8.3020102@v.loewis.de> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> Message-ID: <200604141258.31553.anthony@interlink.com.au> On Friday 14 April 2006 02:31, Martin v. L?wis wrote: > Tim Peters wrote: > >> I'm not the one to decide, but at some time the traceback module > >> should be rewritten to match the interpreter behavior. > > > > No argument from me about that. 
> > I also think the traceback module should be corrected, and the test > cases updated, despite the objections that it may break other > people's doctest test cases. I don't mind one way or the other, but with the number of people working actively on the code at the moment, I think reverting broken patches that don't have trivial test fixes is the way to go. The buildbot system is useless, otherwise. And yes, I'm working on the existing broken buildslaves trying to fix them. For instance - on ia64, sqlite is failing because of a bug in gcc - compiled with -O2 or higher, sqlite itself is broken. Yay! Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From pje at telecommunity.com Fri Apr 14 05:33:36 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 13 Apr 2006 23:33:36 -0400 Subject: [Python-Dev] PEP 302 support for pydoc, runpy, etc. Message-ID: <5.1.1.6.0.20060413231545.01e0b628@mail.telecommunity.com> Okay, so I've spent a bit more time thinking this through, and I think I have a plan that can work, and deals with all the weird little corner issues, including overlapping code with pkg_resources (a new module that's to-be-added by setuptools). My basic idea is as follows: 1. Move the support code from runpy to pkgutil and expose most of it as actual APIs. 2. Move additional code from pkg_resources to pkgutil and merge with the stuff brought over from runpy, fleshing it out enough so that pydoc can use it to find modules, subpackages, etc., and so that pkg_resources' other code uses pkgutil instead of having that stuff "in-house". 3. Write up docs for the expanded pkgutil 4. Add find_loader([path]), get_loader(module), and get_importer(string[,path_hooks]) functions to the 'imp' module, with pure-Python versions in pkgutil that kick in if the C ones are missing. (See next item for why.) 5. Distribute the Python 2.5 'pkgutil' module with setuptools releases for older Python versions, so that setuptools won't have to keep duplicates of the code I pulled out in item #2 above. Part of what would be being copied to pkgutil from pkg_resources is the "resource provider API", that allows you to treat a module, package, or sys.path entry as a kind of virtual filesystem. pydoc needs this in order to be able to document with zipped packages, and to find modules or packages on sys.path that are in zipfiles. (E.g. to document a zipped standard library package or module.) pkg_resources has been out there for almost a year now, and is widely used as part of easy_install and the egg runtime facilities, but it's a very large module and contains lots of other code besides the virtual filesystem stuff. So, this seems like the simplest way to share the virtual filesystem code between pydoc and setuptools, without making the former depend on the latter. It does also mean creating an addition to PEP 302: an optional method on importers and loaders that would allow obtaining a virtual filesystem for that path location or package. I believe this and the setuptools checkin can be completed by Monday evening, although the documentation might not be fancy at that point. I don't know when the documentation cutoff for alpha 2 is, but I assume I'd have time to finish fleshing it out, at least for the pkgutil bit and the PEP update. Anybody have any thoughts, comments, or questions? From pje at telecommunity.com Fri Apr 14 05:45:25 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Thu, 13 Apr 2006 23:45:25 -0400 Subject: [Python-Dev] Procedure for sandbox branches in SVN? Message-ID: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> I'd like to create a branch for maintaining the setuptools 0.6 line through its beta and final release, while new work proceeds on the trunk for integration with Python 2.5 and beginning the 0.7 line. Is there any special procedure I should follow, or can I just make a 'setuptools-0.6' branch under sandbox/branches? From tim.peters at gmail.com Fri Apr 14 06:37:03 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 14 Apr 2006 00:37:03 -0400 Subject: [Python-Dev] Procedure for sandbox branches in SVN? In-Reply-To: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> References: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> Message-ID: <1f7befae0604132137u2d785104y6c35e7b99f408c0c@mail.gmail.com> [Phillip J. Eby] > I'd like to create a branch for maintaining the setuptools 0.6 line through > its beta and final release, while new work proceeds on the trunk for > integration with Python 2.5 and beginning the 0.7 line. Is there any > special procedure I should follow, or can I just make a 'setuptools-0.6' > branch under sandbox/branches? Just do it. Branches under SVN are cheap, go fast, and are pretty easy to work with. Even better, because a branch is just another named directory in SVN's virtual file system, you can "svn remove" it when you're done with it (just like any other directory). BTW, if you're not already familiar with it, read about the `--stop-on-copy` option to SVN's `log` command. From nnorwitz at gmail.com Fri Apr 14 07:20:39 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 13 Apr 2006 22:20:39 -0700 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <443E2601.3060101@egenix.com> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> <443E2601.3060101@egenix.com> Message-ID: <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> On 4/13/06, M.-A. Lemburg <mal at egenix.com> wrote: > > > > Does this mean you would like to see this patch checked in to 2.5? > > Yes. Ok, I checked this in to 2.5 (minus the syntax error). > I also think that changing the type from signed to unsigned > by backporting the configure fix will only make things safer > for the user, since extensions will probably not even be aware > of the fact that Py_UNICODE could be signed (it has always been > documented to be unsigned). > > So +1 on backporting the configure test fix to 2.4. I'll leave this decision to Martin or someone else, since I'm not familiar with the ramifications. Since it was documented as unsigned, I think it's reasonable to consider changing. Though it could create signed-ness warnings in other modules. I'm not sure but it's possible it could create problems for C++ compilers since they are pickier. 
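(A quick aside for anyone trying to follow the UCS2/UCS4 part of this thread: you can tell from Python itself which flavour a given interpreter was built with, without digging into pyconfig.h. This is only a reading aid; it says nothing about the signedness question itself.)

    import sys
    if sys.maxunicode == 0xFFFF:
        print "narrow (UCS-2) unicode build"   # Py_UNICODE is 2 bytes here
    else:
        print "wide (UCS-4) unicode build"     # sys.maxunicode == 0x10FFFF

On Linux distributions that ship a UCS-4 build you will see the second branch, which is exactly the configuration discussed later in this thread.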
n From nnorwitz at gmail.com Fri Apr 14 08:40:33 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 13 Apr 2006 23:40:33 -0700 Subject: [Python-Dev] [Python-checkins] r45334 - python/trunk/Lib/test/leakers/test_gen1.py python/trunk/Lib/test/leakers/test_generator_cycle.py python/trunk/Lib/test/leakers/test_tee.py In-Reply-To: <1f7befae0604130957p36a15ba5l26160dba0635bd9c@mail.gmail.com> References: <20060413043540.51D6F1E404D@bag.python.org> <fb6fbf560604130945x6f56a541h4c47a0fb2520e91f@mail.gmail.com> <1f7befae0604130957p36a15ba5l26160dba0635bd9c@mail.gmail.com> Message-ID: <ee2a432c0604132340p614ea0sb69caacc430032d4@mail.gmail.com> On 4/13/06, Tim Peters <tim.peters at gmail.com> wrote: > > Rather than delete a leak test, perhaps we could simply move it into a > new old-leaking-tests subdirectory? Likewise for crash tests. When > the bug reappears, it's helfpul to have the focussed (whittled-down) > old test that provoked it handy. Good point. I updated both READMEs. Hell, one of the tests I removed still leaked. Who knows what I was smoking. The removed test that really doesn't leak was moved into test generators. I think it was different than all the other existing tests. Feel free to tweak the READMEs or policy. I agree with the intent, however it may be implemented. n From steve at holdenweb.com Fri Apr 14 08:52:25 2006 From: steve at holdenweb.com (Steve Holden) Date: Fri, 14 Apr 2006 07:52:25 +0100 Subject: [Python-Dev] Calling Thomas Heller and Richard Jones Message-ID: <e1ngre$l3r$1@sea.gmane.org> If Thomas Heller and Richard Jones haven't recently received email from me (and are reading this list, naturally) I'd appreciate it if they'd get in touch with me in the next day or so. regards Steve -- Steve Holden +44 150 684 7255 +1 800 494 3119 Holden Web LLC/Ltd www.holdenweb.com Love me, love my blog holdenweb.blogspot.com From ncoghlan at gmail.com Fri Apr 14 09:01:06 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 14 Apr 2006 17:01:06 +1000 Subject: [Python-Dev] PEP 302 support for pydoc, runpy, etc. In-Reply-To: <5.1.1.6.0.20060413231545.01e0b628@mail.telecommunity.com> References: <5.1.1.6.0.20060413231545.01e0b628@mail.telecommunity.com> Message-ID: <443F48B2.10907@gmail.com> Phillip J. Eby wrote: > Anybody have any thoughts, comments, or questions? This all sounds good to me. Will it also eliminate the current quirk in the runpy emulation where it searches the metapath in a slightly different order than the actual import machinery does? (From what you wrote, I believe it will, but I'm not sure). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From nnorwitz at gmail.com Fri Apr 14 09:10:30 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 14 Apr 2006 00:10:30 -0700 Subject: [Python-Dev] int vs ssize_t in unicode In-Reply-To: <443E067D.5060901@v.loewis.de> References: <ee2a432c0604122244o36d9f0c5u622759143ae39624@mail.gmail.com> <443DF1BD.30200@v.loewis.de> <ee2a432c0604130000v405ffcc4rc7b50ade6015f379@mail.gmail.com> <443E067D.5060901@v.loewis.de> Message-ID: <ee2a432c0604140010q20e32ef6q91035d4083a9e4f@mail.gmail.com> On 4/13/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Neal Norwitz wrote: > > I just grepped for INT_MAX and there's a ton of them still (well 83 in > > */*.c). Some aren't an issue like posixmodule.c, those are > > _SC_INT_MAX. marshal is probably ok, but all uses should be verified. 
> > Really all uses of {INT,LONG}_{MIN,MAX} should be verified and > > converted to PY_SSIZE_T_{MIN,MAX} as appropriate. > BTW, it would be great if someone could try to put together some tests for bigmem machines. I'll add it to the todo wiki. The tests should be broken up by those that require 2+ GB of memory, those that take 4+, etc. Many people won't have boxes with that much memory. The test cases should test all methods (don't forget slicing operations) at boundary points, particularly just before and after 2GB. Strings are probably the easiest. There's unicode too. lists, dicts are good but will take more than 16 GB of RAM, so those can be pushed out some. I have some machines available for testing. > I replaced all the trivial ones; the remaining ones are (IMO) more > involved, or correct. In particular: > > - collectionsmodule: deque is still restricted to 2GiB elements > - cPickle: pickling does not support huge strings (and probably > shouldn't); likewise marshal > - _sre is still limited to INT_MAX completely I've got outstanding changes somewhere to clean up _sre. > - not sure why the mbcs codec is restricted to INT_MAX; somebody > should check the Win64 API whether the restriction can be > removed (most likely, it can) > - PyObject_CallFunction must be duplicated for PY_SSIZE_T_CLEAN, > then exceptions.c can be extended. My new favorite static analysis tool is grep: grep '(int)' */*.c | egrep -v 'sizeof(int)' | wc -l 418 I know a bunch of those aren't problematic, but a bunch are. Same with long casts. n From guido at python.org Fri Apr 14 09:46:31 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 14 Apr 2006 08:46:31 +0100 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <200604141258.31553.anthony@interlink.com.au> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <200604141258.31553.anthony@interlink.com.au> Message-ID: <ca471dc20604140046h26f1ab9dt845347c66ee0f9f7@mail.gmail.com> On 4/14/06, Anthony Baxter <anthony at interlink.com.au> wrote: > On Friday 14 April 2006 02:31, Martin v. L?wis wrote: > > Tim Peters wrote: > > >> I'm not the one to decide, but at some time the traceback module > > >> should be rewritten to match the interpreter behavior. > > > > > > No argument from me about that. > > > > I also think the traceback module should be corrected, and the test > > cases updated, despite the objections that it may break other > > people's doctest test cases. Let me chime in with agreement (2.5 only of course). > I don't mind one way or the other, but with the number of people > working actively on the code at the moment, I think reverting broken > patches that don't have trivial test fixes is the way to go. The > buildbot system is useless, otherwise. That was the right thing to do (short of fixing all the missing tests, which requires actual thinking :-). 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Fri Apr 14 10:00:11 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 14 Apr 2006 10:00:11 +0200 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> <443E2601.3060101@egenix.com> <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> Message-ID: <443F568B.9090208@v.loewis.de> Neal Norwitz wrote: > I'll leave this decision to Martin or someone else, since I'm not > familiar with the ramifications. Since it was documented as unsigned, > I think it's reasonable to consider changing. Though it could create > signed-ness warnings in other modules. I'm not sure but it's possible > it could create problems for C++ compilers since they are pickier. My concern is not so much that it becomes unsigned in 2.4.4, but that it stops being a typedef for wchar_t on Linux. C++ code that uses that assumption might stop compiling. Regards, Martin From martin at v.loewis.de Fri Apr 14 10:06:51 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 14 Apr 2006 10:06:51 +0200 Subject: [Python-Dev] Procedure for sandbox branches in SVN? In-Reply-To: <1f7befae0604132137u2d785104y6c35e7b99f408c0c@mail.gmail.com> References: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> <1f7befae0604132137u2d785104y6c35e7b99f408c0c@mail.gmail.com> Message-ID: <443F581B.8070500@v.loewis.de> Tim Peters wrote: > Just do it. Branches under SVN are cheap, go fast, and are pretty > easy to work with. Even better, because a branch is just another > named directory in SVN's virtual file system, you can "svn remove" it > when you're done with it (just like any other directory). That's all true. Of course, removing the branch won't free up any disk space - it will just remove the branch from the view (IOW, it is also easy to bring it back if it was removed mistakenly). Regards, Martn From mal at egenix.com Fri Apr 14 11:08:55 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 14 Apr 2006 11:08:55 +0200 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <443F568B.9090208@v.loewis.de> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> <443E2601.3060101@egenix.com> <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> <443F568B.9090208@v.loewis.de> Message-ID: <443F66A7.6060404@egenix.com> Martin v. L?wis wrote: > Neal Norwitz wrote: >> I'll leave this decision to Martin or someone else, since I'm not >> familiar with the ramifications. Since it was documented as unsigned, >> I think it's reasonable to consider changing. Though it could create >> signed-ness warnings in other modules. I'm not sure but it's possible >> it could create problems for C++ compilers since they are pickier. > > My concern is not so much that it becomes unsigned in 2.4.4, but that > it stops being a typedef for wchar_t on Linux. C++ code that uses that > assumption might stop compiling. 
I'd argue that such code is broken anyway: 3rd party code simply cannot make any assumptions on the typedef behind Py_UNICODE. Note that you'd only see this change when compiling Python in the non-standard UCS4 setting on Linux. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 14 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From martin at v.loewis.de Fri Apr 14 11:18:44 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 14 Apr 2006 11:18:44 +0200 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <443F66A7.6060404@egenix.com> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> <443E2601.3060101@egenix.com> <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> <443F568B.9090208@v.loewis.de> <443F66A7.6060404@egenix.com> Message-ID: <443F68F4.8050500@v.loewis.de> M.-A. Lemburg wrote: > I'd argue that such code is broken anyway: 3rd party code simply > cannot make any assumptions on the typedef behind Py_UNICODE. The code would work just fine now, but will stop working with the change. Users of the code might not be open to arguments. In any case, it's up to the release manager to decide. > Note that you'd only see this change when compiling Python in the > non-standard UCS4 setting on Linux. Sure. That happens to be the default on many Linux distributions, though. Regards, Martin From fredrik at pythonware.com Fri Apr 14 11:42:01 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Fri, 14 Apr 2006 11:42:01 +0200 Subject: [Python-Dev] Procedure for sandbox branches in SVN? References: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> <1f7befae0604132137u2d785104y6c35e7b99f408c0c@mail.gmail.com> Message-ID: <e1nqpa$f81$1@sea.gmane.org> Tim Peters wrote: > Just do it. Branches under SVN are cheap, go fast, and are pretty > easy to work with. Even better, because a branch is just another > named directory in SVN's virtual file system, you can "svn remove" it > when you're done with it (just like any other directory). footnote: if you want to make huge amounts of branches (e.g. a branch for each bug or optimization idea you're working on), I recommend using the SVK frontend instead of SVN: http://svk.elixus.org/ </F> From and-dev at doxdesk.com Fri Apr 14 12:35:53 2006 From: and-dev at doxdesk.com (Andrew Clover) Date: Fri, 14 Apr 2006 11:35:53 +0100 Subject: [Python-Dev] New-style icons, .desktop file Message-ID: <443F7B09.50605@doxdesk.com> Morning! I've done some tweaks to the previously-posted-about icon set, taking note of some of the comments here and on -list. In particular, amongst more minor changes: - added egg icon (based on zip) - flipped pycon to work better with shortcut arrow - emphasised borders of 32x32 version of pycon, and changed text colour, in order to distinguish more from pyc. 
In balance I think this is enough of a change, though Nick Coghlan's idea of just using the plus in this case also has merit - indeed this is what happens at 16x16 - built .ico files without the Windows Vista enormo-icons. If the icons *were* to go in the win32 distribution, it probably wouldn't make sense to spend the considerably larger filesize on Vista icons until such time as people are actually using Vista. - included PNG and SVG version of icons. The SVG unfortunately doesn't preserve all the effects of the original Xara files, partly because a few effects (feathering, bevels) can't be done in SVG 1.1, and partly because the current conversion process is bobbins (but we're working on that!). So the nice gradients and things are gone, but it should still be of use as a base for anyone wanting to hack on the icons. - excised file() in favour of open() ;-) Files and preview here: http://doxdesk.com/img/software/py/icons2.zip http://doxdesk.com/img/software/py/icons2.png Oh, and is the intention to deprecate the purple/green horizontal snake logo previously used for 'Python for Windows' (as well as PSF)? If so, Erik van B's installer graphic could probably do with a quick refresh. cheers, -- And Clover mailto:and at doxdesk.com http://www.doxdesk.com/ From mal at egenix.com Fri Apr 14 12:14:34 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 14 Apr 2006 12:14:34 +0200 Subject: [Python-Dev] unicode vs buffer (array) design issue can crash interpreter In-Reply-To: <443F68F4.8050500@v.loewis.de> References: <ee2a432c0603292256v38fca9bdn21c67e6c75b780f6@mail.gmail.com> <442C1095.9070905@v.loewis.de> <442CF0BB.6040901@egenix.com> <ee2a432c0604122213t24cf8c71y75c98620d35d376@mail.gmail.com> <443E2601.3060101@egenix.com> <ee2a432c0604132220s4fd42fe3ga0a4389ea9c83ebd@mail.gmail.com> <443F568B.9090208@v.loewis.de> <443F66A7.6060404@egenix.com> <443F68F4.8050500@v.loewis.de> Message-ID: <443F760A.7070203@egenix.com> Martin v. L?wis wrote: > M.-A. Lemburg wrote: >> I'd argue that such code is broken anyway: 3rd party code simply >> cannot make any assumptions on the typedef behind Py_UNICODE. > > The code would work just fine now, but will stop working with the > change. Users of the code might not be open to arguments. Fair enough. Let's leave things as they are for 2.4, then. > In any case, it's up to the release manager to decide. > >> Note that you'd only see this change when compiling Python in the >> non-standard UCS4 setting on Linux. > > Sure. That happens to be the default on many Linux distributions, > though. Unfortunately, that's true. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 14 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From ncoghlan at gmail.com Fri Apr 14 13:49:08 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 14 Apr 2006 21:49:08 +1000 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <443F7B09.50605@doxdesk.com> References: <443F7B09.50605@doxdesk.com> Message-ID: <443F8C34.9070906@gmail.com> Andrew Clover wrote: > Morning! > > I've done some tweaks to the previously-posted-about icon set, taking > note of some of the comments here and on -list. 
And I still think they're pretty :) > - emphasised borders of 32x32 version of pycon, and changed text > colour, in order to distinguish more from pyc. In balance I think > this is enough of a change, though Nick Coghlan's idea of just using > the plus in this case also has merit - indeed this is what happens > at 16x16 A small problem with including the interactive prompt as part of that icon is that it leaves us without an appropriate icon for pythonw (the Windows .exe to start Python GUI applications without displaying a console window). The plain Plus-Python icon would work in both cases (the normal python executable and the pythonw variant). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From nnorwitz at gmail.com Wed Apr 12 08:30:07 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 11 Apr 2006 23:30:07 -0700 Subject: [Python-Dev] IMPORTANT 2.5 API changes for C Extension Modules and Embedders Message-ID: <mailman.4453.1144849954.27775.python-announce-list@python.org> If you don't write or otherwise maintain Python Extension Modules written in C (or C++) or embed Python in your application, you can stop reading. Python 2.5 alpha 1 was released April 5, 2006. The second alpha should be released in a few weeks. There are several changes which can cause C extension modules or embedded applications to crash the interpreter if not fixed. Periodically, I will send out these reminders with updated information until 2.5 is released. * support for 64-bit sequences (eg, > 2GB strings) * memory allocation modifications 64-bit changes -------------- There are important changes that are in 2.5 to support 64-bit systems. The 64-bit changes can cause Python to crash if your module is not upgraded to support the changes. Python was changed internally to use 64-bit values on 64-bit machines for indices. If you've got a machine with more than 16 GB of RAM, it would be great if you can test Python with large (> 2GB) strings and other sequences. For more details about the Python 2.5 schedule: http://www.python.org/dev/peps/pep-0356/ For more details about the 64-bit change: http://www.python.org/dev/peps/pep-0353/ How to fix your module: http://www.python.org/dev/peps/pep-0353/#conversion-guidelines The effbot wrote a program to check your code and find potential problems with the 64-bit APIs. http://svn.effbot.python-hosting.com/stuff/sandbox/python/ssizecheck.py Memory Allocation Modifications ------------------------------- In previous versions of Python, it was possible to use different families of APIs (PyMem_* vs. PyObject_*) to allocate and free the same block of memory. APIs in these families include: PyMem_*: PyMem_Malloc, PyMem_Realloc, PyMem_Free, PyObject_*: PyObject_Malloc, PyObject_Realloc, PyObject_Free There are a few other APIs with similar names and also the macro variants. In 2.5, if allocate a block of memory with one family, you must reallocate or free with the same family. That means: If you allocate with PyMem_Malloc (or MALLOC), you must reallocate with PyMem_Realloc (or REALLOC) and free with PyMem_Free (or FREE). If you allocate with PyObject_Malloc (or MALLOC), you must reallocate with PyObject_Realloc (or REALLOC) and free with PyObject_Free (or FREE). Using inconsistent APIs can cause double frees or otherwise crash the interpreter. It is fine to mix and match functions or macros within the same family. 
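Coming back to the 64-bit testing request above, here is a rough sketch of the kind of boundary check meant. It is purely illustrative -- it needs a 64-bit build and several GB of free RAM, and the sizes and methods chosen are my own, not part of the announcement:

    # Exercise a str just past the old 2 GiB limit (INT_MAX is 2**31 - 1).
    BOUNDARY = 2 ** 31
    s = 'x' * (BOUNDARY + 10)                    # ~2 GiB string, 64-bit build only
    assert len(s) == BOUNDARY + 10
    assert s[BOUNDARY:BOUNDARY + 5] == 'xxxxx'   # slicing across the boundary
    assert s.count('x') == BOUNDARY + 10         # a method that walks the whole thing
    del s                                        # give the memory back promptly

Real bigmem tests would of course want to cover the other sequence types and many more methods, as discussed elsewhere in this batch of mail.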
Please test and upgrade your extension modules! Cheers, n From skip at pobox.com Fri Apr 14 15:39:00 2006 From: skip at pobox.com (skip at pobox.com) Date: Fri, 14 Apr 2006 08:39:00 -0500 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <e1ludl$go1$1@sea.gmane.org> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <e1ludl$go1$1@sea.gmane.org> Message-ID: <17471.42484.36337.641236@montanaro.dyndns.org> Travis> I generally like the idea. A different name would be better. Travis> Here's a list of approximate synonyms that might work (ordered Travis> by my preference...) ... lots of suggestions elided ... None of the alternatives seem better to me than "make" or "create". In fact, many don't seem to me (as a native English speaker) to be fairly obscure synonyms. "construct" or "build" are maybe the best of the lot if neither "make" nor "create" are deemed worthy. I'm willing to put up with the pain of some breakage to have a better keyword. That's what __future__ imports are for. We need to consider that Python exists outside the English-speaking world. Making a strong connection between the keyword and its action is important. For that reason I'd much prefer simple, common keywords. Yes, I realize that some keywords in Python are more symbolic than that: "def", "lambda", "del", "elif", "exec". But most keywords in Python are common English, understood by almost anyone having any facility with the language. Skip From fdrake at acm.org Fri Apr 14 16:19:21 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Fri, 14 Apr 2006 10:19:21 -0400 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <443F7B09.50605@doxdesk.com> References: <443F7B09.50605@doxdesk.com> Message-ID: <200604141019.21215.fdrake@acm.org> On Friday 14 April 2006 06:35, Andrew Clover wrote: > Files and preview here: > > http://doxdesk.com/img/software/py/icons2.zip > http://doxdesk.com/img/software/py/icons2.png Very nice! -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From tim.peters at gmail.com Fri Apr 14 17:19:53 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 14 Apr 2006 11:19:53 -0400 Subject: [Python-Dev] Procedure for sandbox branches in SVN? In-Reply-To: <443F581B.8070500@v.loewis.de> References: <5.1.1.6.0.20060413234202.01e0c358@mail.telecommunity.com> <1f7befae0604132137u2d785104y6c35e7b99f408c0c@mail.gmail.com> <443F581B.8070500@v.loewis.de> Message-ID: <1f7befae0604140819y7631463bx26c908698ccb93bb@mail.gmail.com> [Martin v. L?wis] > That's all true. Of course, removing the branch won't free up > any disk space - it will just remove the branch from the view > (IOW, it is also easy to bring it back if it was removed mistakenly). Right! I'm implicitly addressing a different issue, namely that two reasons for people disliking branches in CVS can "go away by magic" under SVN: 1. There's no way to attach a comment to a branch, explaining why it exists. In SVN, that's the checkin comment on the `copy` command that creates the branch. 2. Branches are in your face forever after, because the CVS docs strongly warn against trying to use the relevant commands even for tags (let alone branches). The CVS moral equivalent of "svn list repo/branches" keeps growing as a result, and shows up in each `cvs log`. The ever-growing clutter of long-obsolete info gets depressing <0.5 wink>. The easy possibility in SVN for "out of sight, out of mind" is a huge improvement. 
From bjourne at gmail.com Fri Apr 14 19:38:08 2006 From: bjourne at gmail.com (=?ISO-8859-1?Q?BJ=F6rn_Lindqvist?=) Date: Fri, 14 Apr 2006 19:38:08 +0200 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <740c3aec0604141038n610dc76dpe82ccfcc94358dab@mail.gmail.com> [nice way to declare properties with "make"] > Of course, properties are only one of the many possible uses of the > make statement. The make statement is useful in essentially any > situation where a name is associated with a namespace. So, for So far, in this thread that is the only useful use of the make statement that has been presented. I'd like to see more examples. It would be really cool if you could go through the standard library, and replace code there with code using the make statement. I think a patch showing how much nicer good Python code would be with the make statement would be a very convincing argument. -- mvh Bj?rn From tim.peters at gmail.com Fri Apr 14 21:49:35 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 14 Apr 2006 15:49:35 -0400 Subject: [Python-Dev] Debugging opportunity :-) Message-ID: <1f7befae0604141249p7db53989qda326e7e39929ec4@mail.gmail.com> When "possible finalizers" were added to generators, the implementation at first said that all generators need finalizing. As a result, some old tests in test_generators.py started leaking cyclic trash. As a result of that, someone(s) added a number of explicit "close" gimmicks to those tests, to break the cycles and stop the leaks. Recently smarter gc code was added, so that most old generators should no longer say they need finalization. But the new close-it cruft added to test_generators wasn't removed, so I thought I'd try that. Alas, they still leaked without the close-it cruft. That eventually pointed to an off-by-one error in the new PyGen_NeedsFinalizing(), where reading up trash made it very likely that old generators containing an active loop would falsely claim they need finalization. Alas, fix that, and Python segfaults when running test_generators. I'm out of time for this now, so wrote up what I know, and attached a patch sufficient to reproduce it: http://www.python.org/sf/1470508 It's nominally assigned to Phillip, but for anyone who enjoys a significant debugging challenge, it's likely to be much more fun than skinning the Easter Bunny :-) From jjl at pobox.com Fri Apr 14 21:32:27 2006 From: jjl at pobox.com (John J Lee) Date: Fri, 14 Apr 2006 19:32:27 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <443E7CF8.3020102@v.loewis.de> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> Message-ID: <Pine.LNX.4.64.0604141926150.8363@alice> On Thu, 13 Apr 2006, "Martin v. L?wis" wrote: > Tim Peters wrote: >>> I'm not the one to decide, but at some time the traceback module should be >>> rewritten to match the interpreter behavior. >> >> No argument from me about that. > > I also think the traceback module should be corrected, and the test > cases updated, despite the objections that it may break other people's > doctest test cases. 
Assuming this is fixed in 2.5 final, is there some way to write doctests that work on both 2.4 and 2.5? If not, should something like doctest.IGNORE_EXCEPTION_DETAIL be added -- say IGNORE_EXCEPTION_MODULE? John From jjl at pobox.com Fri Apr 14 21:39:59 2006 From: jjl at pobox.com (John J Lee) Date: Fri, 14 Apr 2006 19:39:59 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <Pine.LNX.4.64.0604141926150.8363@alice> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> Message-ID: <Pine.LNX.4.64.0604141938180.8363@alice> Sorry, please ignore the post of mine I'm replying to here. I missed part of the thread, and Tim has already answered my question... On Fri, 14 Apr 2006, John J Lee wrote: [...] > Assuming this is fixed in 2.5 final, is there some way to write doctests that > work on both 2.4 and 2.5? If not, should something like > doctest.IGNORE_EXCEPTION_DETAIL be added -- say IGNORE_EXCEPTION_MODULE? [...] From pje at telecommunity.com Fri Apr 14 22:38:37 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 14 Apr 2006 16:38:37 -0400 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <1f7befae0604141249p7db53989qda326e7e39929ec4@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> At 03:49 PM 4/14/2006 -0400, Tim Peters wrote: > That eventually >pointed to an off-by-one error in the new PyGen_NeedsFinalizing(), >where reading up trash made it very likely that old generators >containing an active loop would falsely claim they need finalization. >Alas, fix that, and Python segfaults when running test_generators. Even with the close() gimmicks still in place? If so, it sounds like the safest thing for a2 might be to claim that generators always need finalizing. :( From ianb at colorstudy.com Fri Apr 14 23:16:29 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 14 Apr 2006 16:16:29 -0500 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <740c3aec0604141038n610dc76dpe82ccfcc94358dab@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <740c3aec0604141038n610dc76dpe82ccfcc94358dab@mail.gmail.com> Message-ID: <4440112D.1060500@colorstudy.com> BJ?rn Lindqvist wrote: > [nice way to declare properties with "make"] > > >>Of course, properties are only one of the many possible uses of the >>make statement. The make statement is useful in essentially any >>situation where a name is associated with a namespace. So, for > > > So far, in this thread that is the only useful use of the make > statement that has been presented. I'd like to see more examples. In SQLObject I would prefer: class Foo(SQLObject): make IntCol bar: notNull = True In FormEncode I would prefer: make Schema registration: make String name: max_length = 100 not_empty = True make PostalCode postal_code: not_empty = True make Int age: min = 18 In another thread on the python-3000 list I suggested (using : class Point(object): make setonce x: "x coordinate" make setonce y: "y coordinate" For a read-only x and y property (setonce because they have to be set to *something*, but just never re-set). 
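For anyone skimming the archive who hasn't read the draft PEP: as I read it, each of the examples above is sugar for an ordinary call that receives the new name as a string plus the namespace built by the indented block. A minimal sketch of what the very first SQLObject example would turn into -- the exact ('bar', (), {...}) call signature is my reading of the draft, not something stated in this thread, and IntCol here is a stand-in, not SQLObject's real class:

    class IntCol(object):
        def __init__(self, name, args, namespace):
            self.name = name                    # the column learns its own name
            self.__dict__.update(namespace)     # e.g. notNull=True

    namespace = {'notNull': True}               # built by executing the block
    bar = IntCol('bar', (), namespace)          # roughly what "make IntCol bar: ..." means

The attraction is that the object gets told its own attribute name, something these frameworks otherwise have to recover behind the scenes.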
Interfaces are nice: make interface IValidator: make attribute if_empty: """If this attribute is not NoDefault, then this value will be used in lieue of an empty value""" default = NoDefault def to_python(value, state): """...""" Another descriptor, stricttype (http://svn.colorstudy.com/home/ianb/recipes/stricttype.py): class Pixel(object): make stricttype x: type = int make stricttype y: type = int (Both this descriptor and setonce need to know their name if they are going to store their value in the object in a stable location) > It would be really cool if you could go through the standard library, > and replace code there with code using the make statement. I think a > patch showing how much nicer good Python code would be with the make > statement would be a very convincing argument. I don't know if the standard library will have a whole lot; "make" is really only useful when frameworks are written to use it, and there's just not a lot of framework in the standard library. Maybe: make OptionParser myparser: make Option verbose: short = '-v' help = """..."" -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From thomas at python.org Fri Apr 14 23:51:39 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 14 Apr 2006 23:51:39 +0200 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> References: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> Message-ID: <9e804ac0604141451m1fdd7942mf7afc090297557e7@mail.gmail.com> On 4/14/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 03:49 PM 4/14/2006 -0400, Tim Peters wrote: > > That eventually > >pointed to an off-by-one error in the new PyGen_NeedsFinalizing(), > >where reading up trash made it very likely that old generators > >containing an active loop would falsely claim they need finalization. > >Alas, fix that, and Python segfaults when running test_generators. > > Even with the close() gimmicks still in place? If so, it sounds like the > safest thing for a2 might be to claim that generators always need > finalizing. :( No, it sounds like there's a bug in generator cleanup :-) From Tim's description, it isn't obvious to me that the bug is in the hack to clean up simple generator frames. I had "finding generator-cycle leak in test_generators" on my TODO list (as I'd noticed it was still leaking, too), I've now replaced that with "finding the generator-cleanup crash Tim encountered" :-) I don't know if I'll get to it this weekend, though, so if anyone else wants to hunt for it, don't let me stop you. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060414/798fb9b2/attachment.htm From pje at telecommunity.com Sat Apr 15 00:26:21 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 14 Apr 2006 18:26:21 -0400 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <9e804ac0604141451m1fdd7942mf7afc090297557e7@mail.gmail.com > References: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> At 11:51 PM 4/14/2006 +0200, Thomas Wouters wrote: >On 4/14/06, Phillip J. 
Eby ><<mailto:pje at telecommunity.com>pje at telecommunity.com> wrote: >>At 03:49 PM 4/14/2006 -0400, Tim Peters wrote: >> > That eventually >> >pointed to an off-by-one error in the new PyGen_NeedsFinalizing(), >> >where reading up trash made it very likely that old generators >> >containing an active loop would falsely claim they need finalization. >> >Alas, fix that, and Python segfaults when running test_generators. >> >>Even with the close() gimmicks still in place? If so, it sounds like the >>safest thing for a2 might be to claim that generators always need >>finalizing. :( > >No, it sounds like there's a bug in generator cleanup :-) From Tim's >description, it isn't obvious to me that the bug is in the hack to clean >up simple generator frames. I had "finding generator-cycle leak in >test_generators" on my TODO list (as I'd noticed it was still leaking, >too), I've now replaced that with "finding the generator-cleanup crash Tim >encountered" :-) I don't know if I'll get to it this weekend, though, so >if anyone else wants to hunt for it, don't let me stop you. FYI, the smallest code I've found so far that reproduces the crash is: fun_tests = """ >>> class LazyList: ... def __init__(self, g): ... self.v = None ... self.g = g ... ... def __iter__(self): ... yield 1 ... if self.v is None: ... self.v = self.g.next() ... yield self.v >>> def loop(): ... for i in ll: ... yield i >>> ll = LazyList(loop()) >>> g=iter(ll) >>> g.next() 1 >>> g.next() 1 """ If you replace the fun_tests in test_generators with this, and remove all the other tests (e.g. pep_tests, tut_tests, etc.) you can still get it to crash. The code above is much shorter, but has the disadvantage that I don't really know what it's supposed to do, other than cause a crash. :) Interestingly, this code does *not* crash the interpreter on its own, only when run as a doctest (with or without regrtest). I still haven't figured out what the actual problem is, but at least this cuts out all the merge() and times() crud in the case this was derived from, reducing it to a pure zen essence of non-meaning. :) From thomas at python.org Sat Apr 15 00:59:47 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 15 Apr 2006 00:59:47 +0200 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> References: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> Message-ID: <9e804ac0604141559r52b4e706qb7207e603fb8375@mail.gmail.com> On 4/15/06, Phillip J. Eby <pje at telecommunity.com> wrote: > Interestingly, this code does *not* crash the interpreter on its own, only > when run as a doctest (with or without regrtest). Actually, it does, you just have to force GC to clean up stuff (rather than wait for the interpreter-exit to just toss the whole lot out without giving it a second glance.) You do that by making it unreachable and calling gc.collect(). 
centurion:~/python/python/trunk > cat test_gencrash.py import gc class LazyList: def __init__(self, g): self.v = None self.g = g def __iter__(self): yield 1 if self.v is None: self.v = self.g.next() yield self.v def loop(): for i in ll: yield i ll = LazyList(loop()) g=iter(ll) g.next() g.next() del g, ll print gc.collect() centurion:~/python/python/trunk > ./python test_gencrash.py Segmentation fault (core dumped) I still haven't figured > out what the actual problem is, but at least this cuts out all the merge() > and times() crud in the case this was derived from, reducing it to a pure > zen essence of non-meaning. :) I'm sure I know where the crash is coming from: one of the generator objects is being dealloced twice. The GC module tp_clears one of the frame objects, which deallocs a generator the frame referenced. That first dealloc calls _Py_ForgetReference and then calls gen_dealloc, which calls its tp_del (as it's a running generator, albeit a simple one), which raises an exception in the frame, which causes it to be cleaned up, which causes another generator object to be cleaned up and its tp_del method to be called, which - for some reason - tries to dealloc the first generator again. The second dealloc calls _Py_ForgetReference again, and _Py_ForgetReference doesn't like that (the ob_prev/ob_next debug ptrs are already wiped when it's called the second time, so it crashes.) The thing I don't understand is how the cleaning up of the second generator triggers a dealloc of the first generator. The refcount is 0 when the dealloc is called the first time (or _Py_ForgetReference, as well as a few other places, would have bombed out), and it is also 0 when it gets dealloced the second time. Since deallocation is triggered by DECREF'ing, that implies something is INCREF'ing the first generator somewhere -- but without having a valid reference, since the generator has been cleaned up by actually DECREF'ing the reference that a frame object held. Or, somehow, deallocation is triggered by something other than a DECREF. Hmm. No, the second dealloc is triggered by the Py_XDECREF in ceval.c, when clearing the stack after an exception, which is the right thing to do. Time to set some breakpoints and step through the code, I guess. (I first thought the problem was caused by gen_dealloc doing 'Py_DECREF(gen->gen_frame)' instead of 'frame = gen->gen_frame; gen->gen_frame = NULL; Py_DECREF(frame)', but that isn't the case. It should do it that way, I believe, but it's not the cause of this crash.) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060415/04d86a94/attachment.htm From pje at telecommunity.com Sat Apr 15 01:05:38 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 14 Apr 2006 19:05:38 -0400 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> References: <9e804ac0604141451m1fdd7942mf7afc090297557e7@mail.gmail.com > <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060414183453.04b3a0c0@mail.telecommunity.com> At 06:26 PM 4/14/2006 -0400, Phillip J. Eby wrote: >If you replace the fun_tests in test_generators with this, and remove all >the other tests (e.g. pep_tests, tut_tests, etc.) 
you can still get it to >crash. The code above is much shorter, but has the disadvantage that I >don't really know what it's supposed to do, other than cause a crash. :) > >Interestingly, this code does *not* crash the interpreter on its own, only >when run as a doctest (with or without regrtest). I still haven't figured >out what the actual problem is, but at least this cuts out all the merge() >and times() crud in the case this was derived from, reducing it to a pure >zen essence of non-meaning. :) I believe this is now the irreducible minimum code necessary to produce the crash: fun_tests = """ >>> def cycle(g): ... yield 1 ... yield 1 >>> def root(): ... for i in c: ... yield i >>> r = root() >>> c = cycle(r) >>> c.next() 1 >>> r.next() 1 """ I have not been able to think of any way to further simplify this, that doesn't make the crash go away. For example, if I replace the "for" loop inside of "root()" with iter() and next() operations (either with or without a "while" loop), the crash does NOT occur. I don't know if that's a red herring, or if it means the crash is inherently about there being a "for" loop on the stack. I also know that the object 'r' has to be created before 'c' -- in previous iterations I found that just creating a cycle between two generators isn't enough; the one that has the paused "for" loop has to be created before the thing it's looping over. I'm guessing that this means the order they're listed in the GC tracking lists makes a difference. From thomas at python.org Sat Apr 15 01:11:42 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 15 Apr 2006 01:11:42 +0200 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <9e804ac0604141559r52b4e706qb7207e603fb8375@mail.gmail.com> References: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> <9e804ac0604141559r52b4e706qb7207e603fb8375@mail.gmail.com> Message-ID: <9e804ac0604141611p47ead00dodba512523701c8ec@mail.gmail.com> On 4/15/06, Thomas Wouters <thomas at python.org> wrote: > (I first thought the problem was caused by gen_dealloc doing > 'Py_DECREF(gen->gen_frame)' instead of 'frame = gen->gen_frame; > gen->gen_frame = NULL; Py_DECREF(frame)', but that isn't the case. It should > do it that way, I believe, but it's not the cause of this crash.) > fixes the crash.Ah, found the problem. After I hit 'send', I realized I hadn't checked frameobject's tp_clear, and sure enough, it calls Py_XDECREF in-place. That explains why the first generator object gets dealloced twice, although it doesn't explain why it doesn't blow up when it reaches a negative refcount. Fixing frameobject and genobject to both use Py_CLEAR() makes both the 'minimal' testcase and test_generators work. Your testcase also stops leaking, but alas, test_generators still leaks 255 references..... WTF-time for me, meaning I hit the sack and not think about it until next week :) I'll upload a patch to Tim's bugreport after I remove my debugging cruft. All tp_clear/tp_traverse methods should really always use Py_CLEAR/Py_VISIT. I guess revisiting all tp_clear and tp_traverse methods is well worth putting on the TODO list, Bad-pun'ly y'rs, -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060415/f8c5ecf0/attachment.html From pje at telecommunity.com Sat Apr 15 01:30:01 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 14 Apr 2006 19:30:01 -0400 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <9e804ac0604141611p47ead00dodba512523701c8ec@mail.gmail.com > References: <9e804ac0604141559r52b4e706qb7207e603fb8375@mail.gmail.com> <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> <5.1.1.6.0.20060414182039.036f79e8@mail.telecommunity.com> <9e804ac0604141559r52b4e706qb7207e603fb8375@mail.gmail.com> Message-ID: <5.1.1.6.0.20060414192031.047d7728@mail.telecommunity.com> At 01:11 AM 4/15/2006 +0200, Thomas Wouters wrote: >On 4/15/06, Thomas Wouters <<mailto:thomas at python.org>thomas at python.org> >wrote: >>(I first thought the problem was caused by gen_dealloc doing >>'Py_DECREF(gen->gen_frame)' instead of 'frame = gen->gen_frame; >>gen->gen_frame = NULL; Py_DECREF(frame)', but that isn't the case. It >>should do it that way, I believe, but it's not the cause of this crash.) > >fixes the crash.Ah, found the problem. After I hit 'send', I realized I >hadn't checked frameobject's tp_clear, and sure enough, it calls >Py_XDECREF in-place. That explains why the first generator object gets >dealloced twice, although it doesn't explain why it doesn't blow up when >it reaches a negative refcount. You're missing another piece of the puzzle: the problem is that since generators don't have a tp_clear, they don't know they're pointing to an invalid frame; it appears to still be running. So when the frame object releases items off the stack, it releases its reference to the generator, which is released normally, dropping out the local variable reference back to the generator whose frame is being cleared. That generator doesn't know it's holding a garbage frame, and thus proceeds to finalize it by resuming it... which then pops the blockstack, and f_stacktop is still valid, so popping the blockstack decrefs the frame a second time. > Fixing frameobject and genobject to both use Py_CLEAR() makes both the > 'minimal' testcase and test_generators work. It seems to me that frame_clear() should also set f_stacktop to NULL before doing any clearing; otherwise it's possible for a generator to think that the frame is still executable, and the double-decref could thus be replaced by a null pointer dereference. From tim.peters at gmail.com Sat Apr 15 05:02:37 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 14 Apr 2006 23:02:37 -0400 Subject: [Python-Dev] Debugging opportunity :-) In-Reply-To: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> References: <5.1.1.6.0.20060414163652.01e3f8a0@mail.telecommunity.com> Message-ID: <1f7befae0604142002g47bca0efh407e10f2746def0f@mail.gmail.com> [Phillip J. Eby] > Even with the close() gimmicks still in place? If so, it sounds like the > safest thing for a2 might be to claim that generators always need > finalizing. :( LOL! Been there many times before: Python's gc really does work, but it's as unforgivingly delicate as any dance you're likely to learn while still mortal :-) This one was actually typical, ending with an unlikely comedy of errors involving microscopic details of how multiple objects' cleanup routines could barely manage to fool each other into tearing down an object more than once. 
Until you get to that point of clarity, it's also typical to fear that it can never be fixed ;-) You and Thomas did a wonderful of job of tracking this one down quickly. Good show! Take Easter off -- but only if you have to :-) From tim.peters at gmail.com Sat Apr 15 06:42:28 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 15 Apr 2006 00:42:28 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <Pine.LNX.4.64.0604141938180.8363@alice> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> Message-ID: <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> [John J Lee] >> Assuming this is fixed in 2.5 final, is there some way to write doctests that >> work on both 2.4 and 2.5? If not, should something like >> doctest.IGNORE_EXCEPTION_DETAIL be added -- say >> IGNORE_EXCEPTION_MODULE? [also John] > Sorry, please ignore the post of mine I'm replying to here. > > I missed part of the thread, and Tim has already answered my question... That's news to Tim ;-) It's not possible to add a new doctest option in 2.5 that would allow a doctest to work under both 2.4 and 2.5: the test would blow up under 2.4, because 2.4's doctest wouldn't recognize the (new in 2.5) option, and would raise a ValueError of its own griping about the unknown option name. It would be possible to add a new doctest option in 2.5 that would (just) allow a doctest running _under_ 2.5 to accept a bare "name" in an exception despite that the traceback module changed in 2.5 to produce "some.dotted.name" instead. That doesn't seem very useful, though (as above, it would force the test to fail when run under 2.4). Of course a doctest can contain any Python code, so there are many ways to write a doctest that passes under all versions of Python, without active help from doctest. For example, using the running example in this thread, this way works under all versions of Python with a decimal module: """ >>> import decimal >>> try: >>> 1 / decimal.Decimal(0) >>> except decimal.DivisionByZero: >>> print "good" good """ Oddly enough, ELLIPSIS doesn't actually work for this purpose: """ >>> import decimal >>> 1 / decimal.Decimal(0) #doctest: +ELLIPSIS Traceback (most recent call last): [etc] ...DivisionByZero: x / 0 """ That test fails under 2.4, and would also fail under the presumably changed 2.5. The problem is that, as the docs say, when trying to guess where an exception message starts, doctest skips down to the first line after the "Traceback" line indented the same as the "Traceback" line and starting with an _alphanumeric_ character. There is no such line in this example (the exception line starts with a period), so doctest believes the example is showing expected stdout, not that it's showing an exception. The delightfully baffling result is that doctest complains that the example raises an exception when an exception wasn't expected. Stinking magic ;-) From bbman at mail.ru Sat Apr 15 09:30:26 2006 From: bbman at mail.ru (Mikhail Glushenkov) Date: Sat, 15 Apr 2006 07:30:26 +0000 (UTC) Subject: [Python-Dev] Any reason that any()/all() do not take a predicate argument? 
References: <loom.20060413T093154-130@post.gmane.org> <000401c65f08$11fd0710$6402a8c0@arkdesktop> Message-ID: <loom.20060415T092803-479@post.gmane.org> Andrew Koenig <ark <at> acm.org> writes: > How about this? > > if any(x==5 for x in seq): Cool! Thank you. From brian at sweetapp.com Sat Apr 15 10:53:48 2006 From: brian at sweetapp.com (Brian Quinlan) Date: Sat, 15 Apr 2006 10:53:48 +0200 Subject: [Python-Dev] Any reason that any()/all() do not take a predicateargument? In-Reply-To: <000401c65f08$11fd0710$6402a8c0@arkdesktop> References: <000401c65f08$11fd0710$6402a8c0@arkdesktop> Message-ID: <4440B49C.7040902@sweetapp.com> >> seq = [1,2,3,4,5] >> if any(seq, lambda x: x==5): >> ... >> >> which is clearly more readable than >> >> reduce(seq, lambda x,y: x or y==5, False) > > How about this? > > if any(x==5 for x in seq): Aren't all of these equivalent to: if 5 in seq: ... ? Cheers, Brian From martin at v.loewis.de Sat Apr 15 11:10:32 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 11:10:32 +0200 Subject: [Python-Dev] Any reason that any()/all() do not take a predicateargument? In-Reply-To: <4440B49C.7040902@sweetapp.com> References: <000401c65f08$11fd0710$6402a8c0@arkdesktop> <4440B49C.7040902@sweetapp.com> Message-ID: <4440B888.9020702@v.loewis.de> Brian Quinlan wrote: >>> if any(seq, lambda x: x==5): >> if any(x==5 for x in seq): > > Aren't all of these equivalent to: > > if 5 in seq: > ... There should be one-- and preferably only one --obvious way to do it. Regards, Martin From arigo at tunes.org Sat Apr 15 11:20:39 2006 From: arigo at tunes.org (Armin Rigo) Date: Sat, 15 Apr 2006 11:20:39 +0200 Subject: [Python-Dev] Checkin 45232: Patch #1429775 In-Reply-To: <9CF73E91-C4D3-4B3C-A814-DA3E6308C116@chello.se> References: <9CF73E91-C4D3-4B3C-A814-DA3E6308C116@chello.se> Message-ID: <20060415092039.GA15302@code0.codespeak.net> Hi Simon, On Thu, Apr 13, 2006 at 06:43:09PM +0200, Simon Percivall wrote: > Building SVN trunk with --enable-shared has been broken on Mac OS X > Intel > since rev. 45232 a couple of days ago. I can't say if this is the case > anywhere else as well. What happens is simply that ld can't find the > file > to link the shared mods against. For what it's worth, it still works on Linux (Gentoo/i386), insofar as it always worked -- which is that we need either to "make install" or to tweak /etc/ld.so.conf to let the executable find libpython2.5.so. A bientot, Armin From martin at v.loewis.de Sat Apr 15 11:28:27 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 11:28:27 +0200 Subject: [Python-Dev] Building Python with the free MS Toolkit compiler In-Reply-To: <79990c6b0604131411j32a53793g5845107d3a648863@mail.gmail.com> References: <79990c6b0604131411j32a53793g5845107d3a648863@mail.gmail.com> Message-ID: <4440BCBB.5070907@v.loewis.de> Paul Moore wrote: > I've just added some instructions on how to build Python on Windows > with the free MS Toolkit C++ compiler. They are at > http://wiki.python.org/moin/Building_Python_with_the_free_MS_C_Toolkit. Cool! If you think that adding/changing files in Python would simplify the process, don't hesitate to produce patches. 
Regards, Martin From martin at v.loewis.de Sat Apr 15 11:30:07 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 11:30:07 +0200 Subject: [Python-Dev] Checkin 45232: Patch #1429775 In-Reply-To: <20060415092039.GA15302@code0.codespeak.net> References: <9CF73E91-C4D3-4B3C-A814-DA3E6308C116@chello.se> <20060415092039.GA15302@code0.codespeak.net> Message-ID: <4440BD1F.6070907@v.loewis.de> Armin Rigo wrote: > For what it's worth, it still works on Linux (Gentoo/i386), insofar as > it always worked -- which is that we need either to "make install" or to > tweak /etc/ld.so.conf to let the executable find libpython2.5.so. I usually set LD_LIBRARY_PATH in the shell where I want to use an --enable-share'd binary. Regards, Martin From arigo at tunes.org Sat Apr 15 12:20:44 2006 From: arigo at tunes.org (Armin Rigo) Date: Sat, 15 Apr 2006 12:20:44 +0200 Subject: [Python-Dev] Checkin 45232: Patch #1429775 In-Reply-To: <4440BD1F.6070907@v.loewis.de> References: <9CF73E91-C4D3-4B3C-A814-DA3E6308C116@chello.se> <20060415092039.GA15302@code0.codespeak.net> <4440BD1F.6070907@v.loewis.de> Message-ID: <20060415102044.GA21305@code0.codespeak.net> Hi Martin, On Sat, Apr 15, 2006 at 11:30:07AM +0200, "Martin v. L?wis" wrote: > Armin Rigo wrote: > > For what it's worth, it still works on Linux (Gentoo/i386), insofar as > > it always worked -- which is that we need either to "make install" or to > > tweak /etc/ld.so.conf to let the executable find libpython2.5.so. > > I usually set LD_LIBRARY_PATH in the shell where I want to use an > --enable-share'd binary. Thanks for reminding me of that trick! A bientot, Armin From martin at v.loewis.de Sat Apr 15 14:57:25 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 14:57:25 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> Message-ID: <4440EDB5.7080402@v.loewis.de> Tim Peters wrote: > Well, there may well be a bug (or multiple bugs) underlying that one > too. It's one thing for Py_Finalize() not to release all memory (it > doesn't and probably never will), but it's not necessarily the same > thing if running Py_Initialize() ... Py_Finalize() repeatedly keeps > leaking more and more memory. Running Py_Initialize/Py_Finalize once leaves 2150 objects behind (on Linux). The second run adds 180 additional objects; each subsequent run appears to add 156 more. > Not unless the module has a finalization function called by > Py_Finalize() that frees such things (like PyString_Fini and > PyInt_Fini). How should the module install such a function? PyString_Fini and PyInt_Fini are invoked explicitly in pythonrun.c. That doesn't scale to extension modules. > I'm not clear on whether, e.g., init_socket() may get called more than > once if socket-slinging code appears in a Py_Initialize() ... > Py_Finalize(). Module initialization functions are called each time. Py_Finalize "forgets" which modules had been loaded, and reloads them all. 
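An aside on measuring this kind of leakage from the Python side: in a --with-pydebug build, sys.gettotalrefcount() reports the interpreter-wide reference total, which is essentially what regrtest's -R option watches between repeated test runs. A rough sketch of that technique (a sketch only, and within a single interpreter session rather than across Py_Initialize/Py_Finalize cycles):

    import gc
    import sys

    def ref_deltas(func, warmups=2, runs=5):
        # sys.gettotalrefcount only exists in --with-pydebug builds.
        totals = []
        for i in range(warmups + runs):
            func()
            gc.collect()
            totals.append(sys.gettotalrefcount())
        # Skip the warm-up rounds (imports, caches being filled); a steadily
        # positive tail suggests func leaks references on every call.
        return [totals[i + 1] - totals[i]
                for i in range(warmups, warmups + runs - 1)]

A repeated positive delta, e.g. [34, 34, 34, 34], is the same signal the refleak reports quoted elsewhere in this digest are based on.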
Regards, Martin From martin at v.loewis.de Sat Apr 15 16:42:45 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 16:42:45 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <4440EDB5.7080402@v.loewis.de> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> <4440EDB5.7080402@v.loewis.de> Message-ID: <44410665.7000203@v.loewis.de> Martin v. L?wis wrote: > Tim Peters wrote: >> Well, there may well be a bug (or multiple bugs) underlying that one >> too. It's one thing for Py_Finalize() not to release all memory (it >> doesn't and probably never will), but it's not necessarily the same >> thing if running Py_Initialize() ... Py_Finalize() repeatedly keeps >> leaking more and more memory. > > Running Py_Initialize/Py_Finalize once leaves 2150 objects behind (on > Linux). The second run adds 180 additional objects; each subsequent > run appears to add 156 more. With COUNT_ALLOCS, I get the following results: Ignoring the two initial rounds of init/fini, each subsequent init/fini pair puts this number of objects into garbage: builtin_function_or_method 9 cell 1 code 12 dict 23 function 12 getset_descriptor 9 instancemethod 7 int 9 list 6 member_descriptor 23 method_descriptor 2 staticmethod 1 str 86 tuple 78 type 14 weakref 38 wrapper_descriptor 30 This totals to 360, which is for some reason higher than the numbers I get when counting the objects on the global list of objects. Is it not right to obtain the number of live object by computing tp->tp_allocs-tp->tp_frees? Regards, Martin From p.f.moore at gmail.com Sat Apr 15 18:36:30 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 15 Apr 2006 17:36:30 +0100 Subject: [Python-Dev] Building Python with the free MS Toolkit compiler In-Reply-To: <4440BCBB.5070907@v.loewis.de> References: <79990c6b0604131411j32a53793g5845107d3a648863@mail.gmail.com> <4440BCBB.5070907@v.loewis.de> Message-ID: <79990c6b0604150936k5896020aj84c5a72ac9b70f32@mail.gmail.com> On 4/15/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Paul Moore wrote: > > I've just added some instructions on how to build Python on Windows > > with the free MS Toolkit C++ compiler. They are at > > http://wiki.python.org/moin/Building_Python_with_the_free_MS_C_Toolkit. > > Cool! If you think that adding/changing files in Python would simplify > the process, don't hesitate to produce patches. I've just submitted http://www.python.org/sf/1470875 and assigned it to you. I hope that's OK. It's basically a documentation patch plus 2 supporting build files. Paul From janssen at parc.com Sat Apr 15 21:51:59 2006 From: janssen at parc.com (Bill Janssen) Date: Sat, 15 Apr 2006 12:51:59 PDT Subject: [Python-Dev] Any reason that any()/all() do not take a predicateargument? In-Reply-To: Your message of "Sat, 15 Apr 2006 01:53:48 PDT." <4440B49C.7040902@sweetapp.com> Message-ID: <06Apr15.125208pdt."58633"@synergy1.parc.xerox.com> > >> seq = [1,2,3,4,5] > >> if any(seq, lambda x: x==5): > >> ... > >> > >> which is clearly more readable than > >> > >> reduce(seq, lambda x,y: x or y==5, False) > > > > How about this? > > > > if any(x==5 for x in seq): > > Aren't all of these equivalent to: > > if 5 in seq: > ... Yeah, but you can't do more complicated expressions that way, like any(lambda x: x[3] == "thiskey") I think it makes a lot of sense for any and all to take optional predicate function arguments. 
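As a concrete illustration of the trade-off being debated here (a sketch, not code from the thread; any() and all() are new in 2.5): a generator expression already expresses an arbitrary predicate, and unlike the len()-based workaround it stops at the first match.

    def pred(x):
        print "checking", x
        return x == 5

    seq = [1, 2, 5, 7, 9]

    # Generator expression: pred() runs for 1, 2, 5 and then any() stops.
    print any(pred(x) for x in seq)

    # len()-based workaround: pred() runs for every element of seq.
    print len([x for x in seq if pred(x)]) > 0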
But perhaps the syntax should be: X in SEQ If X is a predicate function, it gets called to determine "equals"; if an expression or other object, the normal rules apply. Of course, then you couldn't look for a function in a set of functions... I suppose (len([x for x in SEQ if PRED(x)]) > 0) will suffice for now. Obvious enough, Martin? Bill From martin at v.loewis.de Sat Apr 15 22:16:14 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 15 Apr 2006 22:16:14 +0200 Subject: [Python-Dev] Any reason that any()/all() do not take a predicateargument? In-Reply-To: <06Apr15.125208pdt."58633"@synergy1.parc.xerox.com> References: <06Apr15.125208pdt."58633"@synergy1.parc.xerox.com> Message-ID: <4441548E.9090000@v.loewis.de> Bill Janssen wrote: > Yeah, but you can't do more complicated expressions that way, like > > any(lambda x: x[3] == "thiskey") Not /quite/ sure what this is intended to mean, but most likely, you meant any(x[3]=="thiskey" for x in seq) > I think it makes a lot of sense for any and all to take optional > predicate function arguments. I don't believe that adds expressiveness: you can always formulate this with a generator expression - apparently, those are of the "read-only" nature, i.e. difficult to formulate (assuming you have no difficulties to read above term). > I suppose > > (len([x for x in SEQ if PRED(x)]) > 0) > > will suffice for now. Obvious enough, Martin? It's simpler written as any(PRED(x) for x in SEQ) or any(True for x in SEQ if PRED(x)) if you want Using any() has the advantage over len() that the any() code stops at the first value that becomes true, whereas the len code ill compute them all. Regards, Martin From jjl at pobox.com Sat Apr 15 21:58:25 2006 From: jjl at pobox.com (John J Lee) Date: Sat, 15 Apr 2006 19:58:25 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> Message-ID: <Pine.LNX.4.64.0604151945100.8453@alice> On Sat, 15 Apr 2006, Tim Peters wrote: [...] > [also John] >> Sorry, please ignore the post of mine I'm replying to here. >> >> I missed part of the thread, and Tim has already answered my question... > > That's news to Tim ;-) You mentioned use of '...' / ELLIPSIS, IIRC, so I assumed that would work -- but it seems not, from your latest post (that I'm replying to here). > It's not possible to add a new doctest option > in 2.5 that would allow a doctest to work under both 2.4 and 2.5: the > test would blow up under 2.4, because 2.4's doctest wouldn't recognize > the (new in 2.5) option, and would raise a ValueError of its own > griping about the unknown option name. <slaps forehead> [...] > """ >>>> import decimal >>>> try: >>>> 1 / decimal.Decimal(0) >>>> except decimal.DivisionByZero: >>>> print "good" > good > """ Yes, that works, I see. Kind of annoying of course, but can't be helped. Hmm, will 2.5's doctest work under Python 2.4? I guess that's not guaranteed, since I don't see any comment in doctest.py implying it needs to be compatible with old Pythons. 
> Oddly enough, ELLIPSIS doesn't actually work for this purpose: [...] John From tim.peters at gmail.com Sun Apr 16 00:57:14 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 15 Apr 2006 18:57:14 -0400 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <Pine.LNX.4.64.0604151945100.8453@alice> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> <Pine.LNX.4.64.0604151945100.8453@alice> Message-ID: <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> [John J Lee] ... > You mentioned use of '...' / ELLIPSIS, IIRC, so I assumed that would work > -- but it seems not, from your latest post (that I'm replying to here). Different context -- answering why IGNORE_EXCEPTION_DETAIL exists given that ELLIPSIS can do everything it does (provided you don't care about Pythons before 2.4). That sub-thread was a red herring (a distraction from the real topic here). ... > Hmm, will 2.5's doctest work under Python 2.4? I guess that's not > guaranteed, since I don't see any comment in doctest.py implying it needs > to be compatible with old Pythons. doctest compatibility with 2.4 is neither a goal nor a non-goal for 2.5. I'm not sure why it's being asked, since the incompatible change projected for 2.5 is in how the trackback module spells some exception names; and doctest (every version of doctest) gets its idea of the name of an exception from the traceback module. From tim.peters at gmail.com Sun Apr 16 01:00:27 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 15 Apr 2006 19:00:27 -0400 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <4440EDB5.7080402@v.loewis.de> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> <4440EDB5.7080402@v.loewis.de> Message-ID: <1f7befae0604151600iea0445coaaef3f5c071102d7@mail.gmail.com> [Martin] > Running Py_Initialize/Py_Finalize once leaves 2150 objects behind (on > Linux). The second run adds 180 additional objects; each subsequent > run appears to add 156 more. One thing I notice is that they're all > 0 :-) I believe that, at one time, the second and subsequent numbers were 0, but maybe that's just old age pining for the days when kids listened to good music. Because new-style classes create cycles that Py_Finalize() doesn't clean up, it may make analysis easier to stick a PyGC_Collect() call (or two! repeat until it returns 0) inside the loop now. ... >> Not unless the module has a finalization function called by >> Py_Finalize() that frees such things (like PyString_Fini and >> PyInt_Fini). > How should the module install such a function? There is no way at present, short of editing the source for Py_Finalize and recompiling. Presumably this is something that should be addressed in the module initialization/finalization PEP, right? I suppose people may want a way for Python modules to provide finalization functions too. >> I'm not clear on whether, e.g., init_socket() may get called more than >> once if socket-slinging code appears in a Py_Initialize() ... >> Py_Finalize(). > Module initialization functions are called each time. 
Py_Finalize > "forgets" which modules had been loaded, and reloads them all. OK, so things like the previous example (socketmodule.c's unconditional socket_gaierror = PyErr_NewException(...); in init_socket()) are guaranteed to leak "the old" socket_gaierror object across multiple socket module initializations. ... [later msg] ... > With COUNT_ALLOCS, I get the following results: Ignoring the two initial > rounds of init/fini, each subsequent init/fini pair puts this number > of objects into garbage: > > builtin_function_or_method 9 > cell 1 > code 12 > dict 23 > function 12 > getset_descriptor 9 > instancemethod 7 > int 9 > list 6 > member_descriptor 23 > method_descriptor 2 > staticmethod 1 > str 86 > tuple 78 > type 14 > weakref 38 > wrapper_descriptor 30 FYI, from debugging Zope C-code leaks, staring at leftover strings, dict keys, and tuple contents often helps identify the sources. types too. > This totals to 360, which is for some reason higher than the numbers > I get when counting the objects on the global list of objects. How much higher? Last time I looked at this stuff (2.3b1, I think), the "all" in "the global list of all objects" wasn't true, and I made many changes to the core at the time to make it "more true". For example, all static C singleton objects were missing from that list (from Py_None and Py_True to all builtin static type objects). As SpecialBuilds.txt says, it became true in 2.3 that "a static type object T does appear in this list if at least one object of type T has been created" when COUNT_ALLOCS is defined. I expect that for _most_ static type objects, just doing initialize/finalize will not create objects of that type, so they'll be missing from the global list of objects. It used to be a good check to sum ob_refcnt over all objects in the global list, and compare that to _Py_RefTotal. The difference was a good clue about how many objects were missing from the global list, and became a better clue after I added code to inc_count() to call _Py_AddToAllObjects() whenever an incoming type object has tp_allocs == 0. Because of the latter, every type object with a non-zero tp_allocs count is in the list of all objects now (but only when COUNT_ALLOCS is defined). It's possible that other static singletons (like Py_None) have been defined since then that didn't grow code to add them to the global list, although by construction _PyBuiltin_Init() does add all available from __builtin__. > Is it not right to obtain the number of live object by computing > tp->tp_allocs-tp->tp_frees? That's the theory ;-) This code is never tested, though, and I bet is rarely used. Every time I've looked at it (most recently for 2.3b1, about 3 years ago), it was easy to find bugs. They creep in over time. Common historical weak points are "fancy" object-creation code, and the possibility of resurrection in a destructor. The special builds need special stuff at such times, and exactly what's needed isn't obvious. From nnorwitz at gmail.com Sun Apr 16 02:37:05 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sat, 15 Apr 2006 17:37:05 -0700 Subject: [Python-Dev] valgrind reports Message-ID: <ee2a432c0604151737tdcad9c0t25d84eadc11e7ff3@mail.gmail.com> This was run on linux amd64. It would be great to run purify on windows and other platforms. If you do, please report back here, even if nothing was found. That would be a good data point. 
Summary of tests with problems: test_codecencodings_jp (1 invalid read) test_coding (1 invalid read) test_ctypes (crashes valgrind) test_socket_ssl (many invalid reads) test_sqlite (1 invalid read, 1 memory leak) Details: New one: test_codecencodings_jp ==19749== ==19749== Invalid read of size 1 ==19749== at 0xE4F7BB3: shift_jis_2004_decode (_codecs_jp.c:642) ==19749== by 0xE1AD8D5: decoder_feed_buffer (multibytecodec.c:839) ==19749== by 0xE1ADCEB: mbidecoder_decode (multibytecodec.c:1028) ==19749== by 0x46875C: call_function (ceval.c:3637) ==19749== Address 0x5E0DD63 is 0 bytes after a block of size 3 alloc'd ==19749== at 0x4A19AC6: malloc (vg_replace_malloc.c:149) ==19749== by 0xE1ADD7F: mbidecoder_decode (multibytecodec.c:1017) ==19749== by 0x46875C: call_function (ceval.c:3637) I think this is the same as: http://python.org/sf/1357836 test_coding ==19749== ==19749== Invalid read of size 1 ==19749== at 0x4135DF: tok_nextc (tokenizer.c:901) ==19749== by 0x413BE5: tok_get (tokenizer.c:1124) ==19749== by 0x4145D8: PyTokenizer_Get (tokenizer.c:1515) ==19749== by 0x411F12: parsetok (parsetok.c:135) ==19749== by 0x481051: PyParser_ASTFromFile (pythonrun.c:1318) ==19749== Address 0xC6E1EE6 is 2 bytes before a block of size 8,192 free'd ==19749== at 0x4A1A61D: free (vg_replace_malloc.c:235) ==19749== by 0x4125BC: error_ret (tokenizer.c:167) ==19749== by 0x412D34: decoding_fgets (tokenizer.c:513) ==19749== by 0x4134B0: tok_nextc (tokenizer.c:841) ==19749== by 0x413BE5: tok_get (tokenizer.c:1124) ==19749== by 0x4145D8: PyTokenizer_Get (tokenizer.c:1515) ==19749== by 0x411F12: parsetok (parsetok.c:135) test_ctypes crashes valgrind on amd64: http://bugs.kde.org/show_bug.cgi?id=125651 test_socket_ssl causes all kinds of havoc due to libssl. I don't know of any problems specific to Python. It would be good for others to test on different architectures. 
test_sqlite ==22745== Invalid read of size 4 ==22745== at 0x6B94D01: sqlite3VdbeFinalize (vdbeaux.c:1457) ==22745== by 0x6B7B24F: sqlite3_finalize (main.c:1304) ==22745== by 0x6858646: statement_dealloc (statement.c:300) ==22745== by 0x6854EAE: connection_call (connection.c:759) ==22745== by 0x419287: call_function_tail (abstract.c:1799) ==22745== by 0x4193AF: PyObject_CallFunction (abstract.c:1854) ==22745== by 0x685364C: cache_get (cache.c:170) ==22745== by 0x6856181: _query_execute (cursor.c:545) ==22745== Address 0x53FE38C is 148 bytes inside a block of size 696 free'd ==22745== at 0x4A1A61D: free (vg_replace_malloc.c:235) ==22745== by 0x6B8C39D: sqlite3FreeX (util.c:287) ==22745== by 0x6B94EB7: sqlite3VdbeDelete (vdbeaux.c:1524) ==22745== by 0x6B94D2D: sqlite3VdbeFinalize (vdbeaux.c:1462) ==22745== by 0x6B7B24F: sqlite3_finalize (main.c:1304) ==22745== by 0x6857F8B: statement_create (statement.c:82) ==22745== by 0x6854E67: connection_call (connection.c:748) ==22796== 8 bytes in 1 blocks are definitely lost in loss record 21 of 545 ==22796== at 0x4A19AC6: malloc (vg_replace_malloc.c:149) ==22796== by 0x6854A21: connection_set_isolation_level (connection.c:721) ==22796== by 0x6854C40: connection_init (connection.c:81) n From tim.peters at gmail.com Sun Apr 16 02:57:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 15 Apr 2006 20:57:18 -0400 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <443CB273.5010500@v.loewis.de> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <443CB273.5010500@v.loewis.de> Message-ID: <1f7befae0604151757i6fde87a0n68e24768f85d82d3@mail.gmail.com> [Tim] >> All the trunk buildbots started failing about 5 hours ago, in >> test_parser. There have been enough checkins since then that the >> boundary between passing and failing is about to scroll off forever. [Martin] > It's not lost, though; it's just not displayed anymore. It would be > possible to lengthen the waterfall from 12 hours to a larger period > of time. Since we're spread across time zones, I think 24 hours is a good minimum. If something is set to 12 hours now, doesn't look like it's working: when I wrote my msg, it showed (as I said) about 5 hours of history. Right now it shows only about 3 hrs, from Sat 15 Apr 2006 21:47:13 GMT to now (about 00:50:00 GMT Sunday). This is under Firefox on Windows, so nobody can blame it on an IE bug :-) > You can go back in time yourself, by editing the build number in > > http://www.python.org/dev/buildbot/trunk/g4%20osx.4%20trunk/builds/361 I don't care enough about osx.4 enough to bother ;-) Seriously, clicking on the waterfall display is much more efficient. From greg.ewing at canterbury.ac.nz Sun Apr 16 04:07:45 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 16 Apr 2006 14:07:45 +1200 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <4441A6F1.8060701@canterbury.ac.nz> Steven Bethard wrote: > make <callable> <name> <tuple>: > <block> I don't like the position of the name being defined. It should be straight after the opening keyword, as with 'def' and 'class'. This makes it much easier to search for definitions of things, both by eyeball and editor search functions, etc. 
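For readers following the PEP 359 thread, the equivalence the proposal leans on is the familiar one between the class statement and a direct metaclass call; a small sketch (the make syntax itself is the PEP's proposal, not existing Python):

    # For new-style classes, this statement...
    class C(object):
        x = 1

    # ...does essentially what this explicit call does:
    D = type('D', (object,), {'x': 1})

    assert C.x == D.x == 1
    assert type(C) is type(D) is type

    # The proposed statement
    #     make type C (object,):
    #         x = 1
    # would be a third spelling of the same construction
    # (hypothetical PEP 359 syntax, per the template quoted above).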
-- Greg From raymond.hettinger at verizon.net Sun Apr 16 05:26:43 2006 From: raymond.hettinger at verizon.net (Raymond Hettinger) Date: Sat, 15 Apr 2006 20:26:43 -0700 Subject: [Python-Dev] Any reason that any()/all() do not take apredicateargument? References: <06Apr15.125208pdt."58633"@synergy1.parc.xerox.com> Message-ID: <007101c66105$97e62150$7372e145@RaymondLaptop1> [Bill Janssen] > Yeah, but you can't do more complicated expressions that way, like > > any(lambda x: x[3] == "thiskey") You're not making any sense. The sequence argument is not listed and the lambda is unnecessary. Try this instead: any(x[3] == 'thiskey' for x in seq) > I think it makes a lot of sense for any and all to take optional > predicate function arguments. I think you don't understand what you already have to work with in Py2.5a -- a boolean expression in a genexp should always suffice -- no separate lambda based predicate function is ever required. Raymond From ian.bollinger at gmail.com Sun Apr 16 05:44:58 2006 From: ian.bollinger at gmail.com (Ian D. Bollinger) Date: Sat, 15 Apr 2006 23:44:58 -0400 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <4441A6F1.8060701@canterbury.ac.nz> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <4441A6F1.8060701@canterbury.ac.nz> Message-ID: <4441BDBA.30300@gmail.com> Greg Ewing wrote: > I don't like the position of the name being defined. > It should be straight after the opening keyword, as > with 'def' and 'class'. This makes it much easier > to search for definitions of things, both by eyeball > and editor search functions, etc. > > Also, all other definitions have a keyword or punctuation separating each token, probably for good reason. This might not be desirable though, as it would require yet another reserved word. (I personally think that make is too ubiquitous a name for it to be made a reserved word at this point, though.) The word 'of' might be appropriate as in: make class of type. However, 'from' seems to work better in most instances, and may be better or worse as it is already a reserved word. (make tag from Element) -- Ian D. Bollinger From martin at v.loewis.de Sun Apr 16 08:50:25 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 16 Apr 2006 08:50:25 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <1f7befae0604151600iea0445coaaef3f5c071102d7@mail.gmail.com> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> <4440EDB5.7080402@v.loewis.de> <1f7befae0604151600iea0445coaaef3f5c071102d7@mail.gmail.com> Message-ID: <4441E931.30005@v.loewis.de> Tim Peters wrote: > Because new-style classes create cycles that Py_Finalize() doesn't > clean up, it may make analysis easier to stick a PyGC_Collect() call > (or two! repeat until it returns 0) inside the loop now. I'm shy to do this: the comment in Py_Finalize suggests that things will break if there is a "late" garbage collection. > There is no way at present, short of editing the source for > Py_Finalize and recompiling. Presumably this is something that should > be addressed in the module initialization/finalization PEP, right? Indeed. >> This totals to 360, which is for some reason higher than the numbers >> I get when counting the objects on the global list of objects. > > How much higher? Well, I counted an increase of 156 objects on the "all objects" list, and an increase of 360 according to the COUNT_ALLOCS numbers. 
The first number was without COUNT_ALLOCS being defined, though. Anyway, thanks for your comments. I'll try to look at this from time to time, maybe I can resolve some of the leaks. Regards, Martin From martin at v.loewis.de Sun Apr 16 09:12:52 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 16 Apr 2006 09:12:52 +0200 Subject: [Python-Dev] Preserving the blamelist In-Reply-To: <1f7befae0604151757i6fde87a0n68e24768f85d82d3@mail.gmail.com> References: <1f7befae0604112216u4aebe698o84a8757ff19f92f3@mail.gmail.com> <443CB273.5010500@v.loewis.de> <1f7befae0604151757i6fde87a0n68e24768f85d82d3@mail.gmail.com> Message-ID: <4441EE74.2070900@v.loewis.de> Tim Peters wrote: > Since we're spread across time zones, I think 24 hours is a good > minimum. Ok, done. > If something is set to 12 hours now, doesn't look like it's > working: when I wrote my msg, it showed (as I said) about 5 hours of > history. Right now it shows only about 3 hrs, from Sat 15 Apr 2006 > 21:47:13 GMT to now (about 00:50:00 GMT Sunday). This is under > Firefox on Windows, so nobody can blame it on an IE bug :-) There is another limit on the height of the waterfall: at most 200 "timestamps". I have now doubled that as well. Regards, Martin From jjl at pobox.com Sun Apr 16 14:25:47 2006 From: jjl at pobox.com (John J Lee) Date: Sun, 16 Apr 2006 12:25:47 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604121814j25103030g8739fcd53ea8b44e@mail.gmail.com> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> <Pine.LNX.4.64.0604151945100.8453@alice> <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> Message-ID: <Pine.LNX.4.64.0604160225231.8453@alice> On Sat, 15 Apr 2006, Tim Peters wrote: [...] >> Hmm, will 2.5's doctest work under Python 2.4? I guess that's not >> guaranteed, since I don't see any comment in doctest.py implying it needs >> to be compatible with old Pythons. > > doctest compatibility with 2.4 is neither a goal nor a non-goal for > 2.5. I'm not sure why it's being asked, since the incompatible change > projected for 2.5 is in how the trackback module spells some exception > names; and doctest (every version of doctest) gets its idea of the > name of an exception from the traceback module. Ah, yes. (Does the channelling service extend to divining the questions that posters on python-dev *should* have asked? No?) OK, I suppose I should have asked "will 2.5's module traceback work with Python 2.4?". I guess the answer is something resembling "no", but of course (?) the question I'm really interested in is "how, without too much effort or ugliness, can people run their doctests on both 2.4 and 2.5"? I guess if I care sufficiently, I should just go ahead and back-port to 2.4 <insert whatever #$!& code turns out to need back-porting here ;-> and post it somewhere for the public good. 
John From p.f.moore at gmail.com Sun Apr 16 15:19:25 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 16 Apr 2006 14:19:25 +0100 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <Pine.LNX.4.64.0604160225231.8453@alice> References: <20060412211409.000EF1E4003@bag.python.org> <e1ktuo$o1$1@sea.gmane.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> <Pine.LNX.4.64.0604151945100.8453@alice> <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> <Pine.LNX.4.64.0604160225231.8453@alice> Message-ID: <79990c6b0604160619o6fd7ed1fl2c47ff765cda5973@mail.gmail.com> On 4/16/06, John J Lee <jjl at pobox.com> wrote: > OK, I suppose I should have asked "will 2.5's module traceback work with > Python 2.4?". I guess the answer is something resembling "no", but of > course (?) the question I'm really interested in is "how, without too much > effort or ugliness, can people run their doctests on both 2.4 and 2.5"? I think there was an example earlier - you could change your doctest to not rely on the exact exception by catching it: >>> try: ... 1/0 ... except ZeroDivisionError: ... print "Divide by zero!" ... Divide by zero! >>> Whether that counts as "too much effort or ugliness", I'm not sure. Personally, my instinct is that having the whole traceback in a doctest is at least as ugly. Paul. From guido at python.org Sun Apr 16 15:33:45 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 16 Apr 2006 15:33:45 +0200 Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <79990c6b0604160619o6fd7ed1fl2c47ff765cda5973@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> <Pine.LNX.4.64.0604151945100.8453@alice> <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> <Pine.LNX.4.64.0604160225231.8453@alice> <79990c6b0604160619o6fd7ed1fl2c47ff765cda5973@mail.gmail.com> Message-ID: <ca471dc20604160633o65e12345kc14aec2a65dd4a2d@mail.gmail.com> On 4/16/06, Paul Moore <p.f.moore at gmail.com> wrote: > Personally, my instinct is that having the whole traceback in a > doctest is at least as ugly. Well, it depends on what you use doctest for. If you use it to write unit tests, the try/except solution is fine, and perhaps preferable. If you use it as *documentation*, where doctest is used to ensure that the documentation is accurate, showing a (short) traceback seems to be the logical thing to do. In a sample session where you want to show that a certain exception is raised for a certain combination of erroneous arguments (for example), showing the traceback is much more natural than putting a try/except around the erroneous call. So one person's ugly is another person's pretty. 
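To make the two styles concrete, here is a self-contained sketch (not from the thread) that should run unchanged under 2.4 and 2.5: the first example is the exception-catching form Paul shows above, the second is the documentation-flavoured form Guido describes, using a built-in exception whose name is not affected by the traceback change being discussed.

    """
    Unit-test flavour: the exception is caught, so only our own output
    has to match.

    >>> try:
    ...     1/0
    ... except ZeroDivisionError:
    ...     print "divide by zero"
    divide by zero

    Documentation flavour: show the traceback; built-in exceptions such
    as ZeroDivisionError keep their bare name, so this also stays stable.

    >>> 1/0
    Traceback (most recent call last):
        ...
    ZeroDivisionError: integer division or modulo by zero
    """

    if __name__ == '__main__':
        import doctest
        doctest.testmod()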
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From jjl at pobox.com Sun Apr 16 16:39:42 2006 From: jjl at pobox.com (John J Lee) Date: Sun, 16 Apr 2006 14:39:42 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] r45321 - in python/trunk: Lib/test/test_traceback.py Lib/traceback.py Misc/NEWS In-Reply-To: <ca471dc20604160633o65e12345kc14aec2a65dd4a2d@mail.gmail.com> References: <20060412211409.000EF1E4003@bag.python.org> <1f7befae0604130834s23b3fc0ydbb11a5e6a94a217@mail.gmail.com> <443E7CF8.3020102@v.loewis.de> <Pine.LNX.4.64.0604141926150.8363@alice> <Pine.LNX.4.64.0604141938180.8363@alice> <1f7befae0604142142gf080y90a8223d2f630648@mail.gmail.com> <Pine.LNX.4.64.0604151945100.8453@alice> <1f7befae0604151557t7a49c4f1t961c850b0b7e10d4@mail.gmail.com> <Pine.LNX.4.64.0604160225231.8453@alice> <79990c6b0604160619o6fd7ed1fl2c47ff765cda5973@mail.gmail.com> <ca471dc20604160633o65e12345kc14aec2a65dd4a2d@mail.gmail.com> Message-ID: <Pine.LNX.4.64.0604161426050.8453@localhost> On Sun, 16 Apr 2006, Guido van Rossum wrote: > On 4/16/06, Paul Moore <p.f.moore at gmail.com> wrote: >> Personally, my instinct is that having the whole traceback in a >> doctest is at least as ugly. You don't need the whole traceback -- e.g.: """ If a URL is supplied, it must have an authority (host:port) component. According to RFC 3986, having an authority component means the URL must have two slashes after the scheme: >>> _parse_proxy('file:/ftp.example.com/') Traceback (most recent call last): ValueError: proxy URL with no authority: 'file:/ftp.example.com/' """ I think the try: ... except FooException: print 'FooException occurred' style is uglier and less natural than that, but I guess it's not a big deal. > Well, it depends on what you use doctest for. If you use it to write > unit tests, the try/except solution is fine, and perhaps preferable. [...] Preferable because depending less on irrelevant details? I had thought that, apart from the issue with module traceback, IGNORE_EXCEPTION_DETAIL made that a non-issue in most cases, but perhaps I missed something (again). John From thomas at python.org Sun Apr 16 18:10:09 2006 From: thomas at python.org (Thomas Wouters) Date: Sun, 16 Apr 2006 18:10:09 +0200 Subject: [Python-Dev] refleaks & test_tcl & threads Message-ID: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> On my box, the latest batch of refleak fixes seems to have fixed all but one of the leaky tests. test_threading_local still leaks, but it leaks rather consistently now (which is new.) I'm not able to make the other ones leak with any combination of '-u<resource>' or removing .pyc's beforehand or running longer or shorter. I hope it's not just my box :-) test_threading_local is not entirely consistent, but it looks a lot more reliable on my box than on Neal's automated mails: test_threading_local beginning 11 repetitions 12345678901 ........... test_threading_local leaked [34, 34, 34, 34, 34, 26, 26, 22, 34] references One remaining issue with refleakhunting on my machine is that test_tcl can't stand being run twice. Even without -R, this makes Python hang while waiting for a mutex in the second run through test_tcl: ...trunk $ ./python -E -tt Lib/test/regrtest test_tcl test_tcl Attaching gdb to the hung process shows this unenlightening trace: #0 0x00002b7d6629514b in __lll_mutex_lock_wait () from /lib/libpthread.so.0 #1 0x00002b7d6639a280 in completed.4801 () from /lib/libpthread.so.0 #2 0x0000000000000004 in ?? 
() #3 0x00002b7d66291dca in pthread_mutex_lock () from /lib/libpthread.so.0 #4 0x0000000000000000 in ?? () The process has one other thread, which is stuck here: #0 0x00002b7d667f14d6 in __select_nocancel () from /lib/libc.so.6 #1 0x00002b7d67512d8c in Tcl_WaitForEvent () from /usr/lib/libtcl8.4.so.0 #2 0x00002b7d66290b1c in start_thread () from /lib/libpthread.so.0 #3 0x00002b7d667f8962 in clone () from /lib/libc.so.6 #4 0x0000000000000000 in ?? () It smells like test_tcl or Tkinter is doing something wrong with regards to threads. I can reproduce this on a few machines, but all of them run newish linux kernels with newish glibc's and newish tcl/tk. At least in kernel/libc land, various thread related things changed of late. I don't have access to other machines with tcl/tk right now, but I wonder if anyone can reproduce this in different situations. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060416/c815e197/attachment.html From thomas at python.org Sun Apr 16 20:27:49 2006 From: thomas at python.org (Thomas Wouters) Date: Sun, 16 Apr 2006 20:27:49 +0200 Subject: [Python-Dev] refleaks & test_tcl & threads In-Reply-To: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> References: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> Message-ID: <9e804ac0604161127i726878f1ibe04a2bbfd278872@mail.gmail.com> On 4/16/06, Thomas Wouters <thomas at python.org> wrote: > test_threading_local is not entirely consistent, but it looks a lot more > reliable on my box than on Neal's automated mails: > > test_threading_local > beginning 11 repetitions > 12345678901 > ........... > test_threading_local leaked [34, 34, 34, 34, 34, 26, 26, 22, 34] > references > This is caused by _threading_local.local's __del__ method, or rather the fact that it's part of a closure enclosing threading.enumerate. Fixing the inner __del__ to call enumerate (which is 'threading.enumerate') directly, rather than through the cellvar 'threading_enumerate', makes the leak go away. The fact that the leakage is inconsistent is actually easily explained: the 'local' instances are stored on the 'currentThread' object indexed by 'key', and keys sometimes get reused (since they're basically id(self)), so sometimes an old reference is overwritten. It doesn't quite explain why using the cellvar causes the cycle, nor does it explain why gc.garbage remains empty. I guess some Thread objects linger in threading's _active or _limbo dicts, but I have no idea why having a cellvar in the cycle matters; they seem to be participating in GC just fine, and I cannot reproduce the leak with a simpler test. And on top of that, I'm not sure *why* _threading_local.local is doing the song and dance to get a cellvar. If the global 'enumerate' (which is threading.enumerate) disappears, it will be because Python is cleaning up. Even if we had a need to clean up the local dict at that time (which I don't believe we do), using a cellvar doesn't guarantee anything more than using a global name. Chances are very good that the 'threading' module has also been cleared, meaning that while we still have a reference to threading.enumerate, it cannot use the three globals it uses (_active, _limbo, _active_limbo_lock.) 
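To illustrate just the structural difference described here (this is not a reproduction of the leak): an inner function that reaches enumerate through module globals carries no closure cell, while one that captures it from the enclosing scope does, and that cell keeps the captured object alive for as long as the inner function exists. A minimal sketch:

    import threading

    def del_via_global():
        def _del():
            return threading.enumerate()   # looked up in module globals at call time
        return _del

    def del_via_cell():
        threading_enumerate = threading.enumerate
        def _del():
            return threading_enumerate()   # captured in a closure cell at definition time
        return _del

    print del_via_global().func_closure    # None: no cell, nothing extra kept alive
    print del_via_cell().func_closure      # (<cell ...>,): the cell pins enumerate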
All in all, I think matters improve significantly if it just deals with the NameError it'll get at cleanup (which it already does.) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060416/24be3ae6/attachment-0001.htm From steven.bethard at gmail.com Sun Apr 16 20:38:55 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Sun, 16 Apr 2006 12:38:55 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <4441A6F1.8060701@canterbury.ac.nz> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <4441A6F1.8060701@canterbury.ac.nz> Message-ID: <d11dcfba0604161138h66accfb4hbe02311d637beb82@mail.gmail.com> On 4/15/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > Steven Bethard wrote: > > > make <callable> <name> <tuple>: > > <block> > > I don't like the position of the name being defined. > It should be straight after the opening keyword, as > with 'def' and 'class'. I see where you're coming from, but the current ordering exists so that ``class`` and ``make type`` are basically equivalent (for new-style classes). I'm trying not to break too far from the class statement. The next update of the PEP will try to make this more clear. That said, I think that the order of expressions is only a minor concern, at least until people have agreed that the make-statement concept is a good one. Until then, I'd rather defer pure syntax issues. STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From brett at python.org Sun Apr 16 22:49:57 2006 From: brett at python.org (Brett Cannon) Date: Sun, 16 Apr 2006 13:49:57 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com> On 4/8/06, Brett Cannon <brett at python.org> wrote: > OK, I am going to write the PEP I proposed a week or so ago, listing > all modules and packages within the stdlib that are maintained > externally so we have a central place to go for contact info or where > to report bugs on issues. This should only apply to modules that want > bugs reported outside of the Python tracker and have a separate dev > track. People who just use the Python repository as their mainline > version can just be left out. > Basically, from all the replies I have gotten has said that package that were/are externally maintained either considers Python HEAD as the current version or watches checkins and the bug tracker and thus the PEP is really not needed. So unless some package steps forward and says that they prefer external reporting of bugs and patches, I will consider this PEP idea dead and just modify HEAD without worrying about it. 
-Brett From martin at v.loewis.de Sun Apr 16 23:10:48 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 16 Apr 2006 23:10:48 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com> Message-ID: <4442B2D8.8090003@v.loewis.de> Brett Cannon wrote: > Basically, from all the replies I have gotten has said that package > that were/are externally maintained either considers Python HEAD as > the current version or watches checkins and the bug tracker and thus > the PEP is really not needed. So unless some package steps forward > and says that they prefer external reporting of bugs and patches, I > will consider this PEP idea dead and just modify HEAD without worrying > about it. Not sure whether Fredrik Lundh has responded, but I believe he once said that he would prefer if ElementTree isn't directly modified, but that instead patches are filed on the SF tracker and assigned to him. Regards, Martin From brett at python.org Mon Apr 17 00:00:43 2006 From: brett at python.org (Brett Cannon) Date: Sun, 16 Apr 2006 15:00:43 -0700 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <4442B2D8.8090003@v.loewis.de> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com> <4442B2D8.8090003@v.loewis.de> Message-ID: <bbaeab100604161500l11e46864n6e08af667e9ff470@mail.gmail.com> On 4/16/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Brett Cannon wrote: > > Basically, from all the replies I have gotten has said that package > > that were/are externally maintained either considers Python HEAD as > > the current version or watches checkins and the bug tracker and thus > > the PEP is really not needed. So unless some package steps forward > > and says that they prefer external reporting of bugs and patches, I > > will consider this PEP idea dead and just modify HEAD without worrying > > about it. > > Not sure whether Fredrik Lundh has responded, but I believe he once > said that he would prefer if ElementTree isn't directly modified, but > that instead patches are filed on the SF tracker and assigned to him. > Nope, Fredrik never responded. I am cc:ing him directly to see if he has a preference along with Gerhard to see if he has one as well for pysqlite. -Brett From tim.peters at gmail.com Mon Apr 17 05:50:24 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 16 Apr 2006 23:50:24 -0400 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <4441E931.30005@v.loewis.de> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> <4440EDB5.7080402@v.loewis.de> <1f7befae0604151600iea0445coaaef3f5c071102d7@mail.gmail.com> <4441E931.30005@v.loewis.de> Message-ID: <1f7befae0604162050y479cdf13v56f61004e2857a85@mail.gmail.com> [Tim] >> Because new-style classes create cycles that Py_Finalize() doesn't >> clean up, it may make analysis easier to stick a PyGC_Collect() call >> (or two! repeat until it returns 0) inside the loop now. [Martin] > I'm shy to do this: the comment in Py_Finalize suggests that things > will break if there is a "late" garbage collection. 
Putting a collection call inside an initialize/finalize loop isn't doing it late, it's doing it early. If we can't collect cyclic trash after Py_Initialize(), that would be a showstopper for apps embedding Python "in a loop"! There's either nothing to fear here, or Python has a very bad bug. Are you thinking of this comment?: /* Collect final garbage. This disposes of cycles created by * new-style class definitions, for example. * XXX This is disabled because it caused too many problems. If * XXX a __del__ or weakref callback triggers here, Python code has * XXX a hard time running, because even the sys module has been * XXX cleared out (sys.stdout is gone, sys.excepthook is gone, etc). * XXX One symptom is a sequence of information-free messages * XXX coming from threads (if a __del__ or callback is invoked, * XXX other threads can execute too, and any exception they encounter * XXX triggers a comedy of errors as subsystem after subsystem * XXX fails to find what it *expects* to find in sys to help report * XXX the exception and consequent unexpected failures). I've also * XXX seen segfaults then, after adding print statements to the * XXX Python code getting called. */ I wrote that, and think it's pretty clear: after PyImport_Cleanup(), so little of the interpreter still exists that _any_ problem while running Python code has a way of turning into a fatal problem. For example, internal error mechanisms that fetch things from the sys module (like excepthook or stdout) are usually (always?) careful to check whether the fetched things are NULL, but wander into lala-land when Py_None comes back (as it does after PyImport_Cleanup() "Nones-out" everything in sys). But call Py_Initialize() again, and everything (including the sys module) should become usable again. ... >>> This totals to 360, which is for some reason higher than the numbers >>> I get when counting the objects on the global list of objects. >> How much higher? > Well, I counted an increase of 156 objects on the "all objects" > list, and an increase of 360 according to the COUNT_ALLOCS numbers. > The first number was without COUNT_ALLOCS being defined, though. > > Anyway, thanks for your comments. I'll try to look at this from > time to time, maybe I can resolve some of the leaks. Could you check in the code you're using? It would be useful to share this, and improve it over time. Of course it's easy to write a C program that calls Py_Initialize() and Py_Finalize() in a loop, but it gets real tedious real fast to add analysis code sufficient to truly help track down leaks. I'd be happiest to contribute future hints in the form of working C code :-) Heck, if we got the leaks down to 0 on second and subsequent calls, we could even make a standard test out of it to ensure it stays that way. From nnorwitz at gmail.com Mon Apr 17 06:11:51 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Sun, 16 Apr 2006 21:11:51 -0700 Subject: [Python-Dev] windows buildbot failures Message-ID: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> The windows buildbot slaves (cygwin too) are still having problems with the DLL being in use when we start compiling so the compile fails. clean.bat is not called afterwards based on the buildbot log. I don't know if clean fixes the problem. 
If it does, would this patch fix the problem: Index: Tools/buildbot/build.bat =================================================================== --- Tools/buildbot/build.bat (revision 45475) +++ Tools/buildbot/build.bat (working copy) @@ -1,4 +1,5 @@ @rem Used by the buildbot "compile" step. +cmd /c Tools\buildbot\clean.bat cmd /c Tools\buildbot\external.bat call "%VS71COMNTOOLS%vsvars32.bat" devenv.com /useenv /build Debug PCbuild\pcbuild.sln If the patch won't fix the problem, is there something else we can do to ensure the python DLL is no longer used regardless of whether the previous test passed or not? If we can get the process handle, can we can subprocess.TerminateProcess()? n From aleaxit at gmail.com Mon Apr 17 06:31:15 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Sun, 16 Apr 2006 21:31:15 -0700 Subject: [Python-Dev] 2.5 post-alpha1 broken on mac-intel machines Message-ID: <5C3DBAA4-7925-407E-B8A5-2DD556688E6F@gmail.com> Back from vacation, just did an svn up and make, and...: ... gcc -DNDEBUG -g -O3 -Wall -Wstrict-prototypes Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/parsetok.o Parser/bitset.o Parser/metagrammar.o Parser/ firstsets.o Parser/grammar.o Parser/pgen.o Objects/obmalloc.o Python/ mysnprintf.o Parser/tokenizer_pgen.o Parser/printgrammar.o Parser/ pgenmain.o -ldl -o Parser/pgen /usr/bin/ld: warning Parser/printgrammar.o cputype (18, architecture ppc) does not match cputype (7) for specified -arch flag: i386 (file not loaded) /usr/bin/ld: Undefined symbols: __Py_printgrammar __Py_printnonterminals collect2: ld returned 1 exit status make: *** [Parser/pgen] Error 1 No idea where that deuced "architecture ppc" comes from, since this Mac (a Macbook Pro) has an intel CPU -- I assume some configuration file erroneously posits that Macs "must" have PPC processors (which has been false for many months and was not posited in alpha-1, which compiled fine here). Too tired to look more into this (having just driven back from a week's vacation in the Grand Canyon), but I will next week if I have to...:-( Alex From tim.peters at gmail.com Mon Apr 17 07:40:06 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 01:40:06 -0400 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> Message-ID: <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> [Neal Norwitz] > The windows buildbot slaves (cygwin too) are still having problems > with the DLL being in use when we start compiling so the compile > fails. clean.bat is not called afterwards based on the buildbot log. > I don't know if clean fixes the problem. If it does, would this patch > fix the problem: > > Index: Tools/buildbot/build.bat > =================================================================== > --- Tools/buildbot/build.bat (revision 45475) > +++ Tools/buildbot/build.bat (working copy) > @@ -1,4 +1,5 @@ > @rem Used by the buildbot "compile" step. > +cmd /c Tools\buildbot\clean.bat > cmd /c Tools\buildbot\external.bat > call "%VS71COMNTOOLS%vsvars32.bat" > devenv.com /useenv /build Debug PCbuild\pcbuild.sln I doubt that will solve it, but you can check it in to give it a try (and revert the checkin if it doesn't help). > If the patch won't fix the problem, is there something else we can do > to ensure the python DLL is no longer used regardless of whether the > previous test passed or not? 
If we can get the process handle, can we > can subprocess.TerminateProcess()? I don't know. Typically (but not always, and in IME), we get into this like so: 1. The buildbot slave terminates the test run "early" for some reason. Maybe because the slave lost its connection to the master, or maybe because too much time has elapsed since the buildbot saw any output from the test process. 2. The buildbot code tries to kill the process itself. It appears (to judge from the buildbot messges) that this never works on Windows. 3. For reasons that are still unknown, python_d.exe keeps running, and forever. So long as the DLL is in use, Windows will not allow it to be deleted or renamed, so all subsequent attempts to compile fail. Now I'm not really sure _which_ process #2's "the process" may be. If it's the python_d.exe process, then I conclude that the programmatic way the buildbot uses to try to kill python_d.exe doesn't work in this situation. I never have any trouble killing it "by hand" using Task Manager, and that's all I know for sure. From nnorwitz at gmail.com Mon Apr 17 09:43:33 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 17 Apr 2006 00:43:33 -0700 Subject: [Python-Dev] Summer of Code preparation Message-ID: <ee2a432c0604170043y6d62d541w5a264b515e0e5411@mail.gmail.com> We've only got a short time to get setup for Google's Summer of Code. We need to start identifying mentors and collecting ideas for students to implement. We have the SimpleTodo list (http://wiki.python.org/moin/SimpleTodo), but nothing on the SoC page yet (http://wiki.python.org/moin/SummerOfCode). I can help manage the process from inside Google, but I need help gathering mentors and ideas. I'm not certain of the process, but if you are interested in being a mentor, send me an email. I will try to find all the necessary info and post here again tomorrow. Pass the word! I hope all mentors from last year will return again this year. Can someone take ownership of drumming up mentors and ideas? We also need to spread the word to c.l.p and beyond. Thanks, n From martin at v.loewis.de Mon Apr 17 10:04:26 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 10:04:26 +0200 Subject: [Python-Dev] Py_Finalize does not release all memory, not even closely In-Reply-To: <1f7befae0604162050y479cdf13v56f61004e2857a85@mail.gmail.com> References: <443B9806.4010707@v.loewis.de> <1f7befae0604111147k2ace031bvd543b5f4f68fbb9@mail.gmail.com> <4440EDB5.7080402@v.loewis.de> <1f7befae0604151600iea0445coaaef3f5c071102d7@mail.gmail.com> <4441E931.30005@v.loewis.de> <1f7befae0604162050y479cdf13v56f61004e2857a85@mail.gmail.com> Message-ID: <44434C0A.2080009@v.loewis.de> Tim Peters wrote: > Putting a collection call inside an initialize/finalize loop isn't > doing it late, it's doing it early. If we can't collect cyclic trash > after Py_Initialize(), that would be a showstopper for apps embedding > Python "in a loop"! There's either nothing to fear here, or Python > has a very bad bug. Right. I did that, and it collects 308 objects after the first call in the second "round" of Py_Initialize/Py_Finalize, and then no additional objects. However, I don't think that helps much: Py_Finalize will call PyGC_Collect(), anyway, and before any counts are made. > Are you thinking of this comment?: Yes; I was assuming you suggested to enable that block of code. 
> I wrote that, and think it's pretty clear: after PyImport_Cleanup(), > so little of the interpreter still exists that _any_ problem while > running Python code has a way of turning into a fatal problem. Right. I still haven't tried it, but it might be that, after a plain Py_Initialize/Py_Finalize sequence, no such problems will occur, and that it would be safe to call it in this specific case. > Could you check in the code you're using? I had to modify code in ways that shouldn't be checked in, e.g. by putting API calls into _Py_PrintReferenceAddresses, even though the comment says it does't call any API. When I get to clean this up, I'll check it in. With some debugging, I now found a "leak" that contributes to quite some of these garbage objects: Each round of Py_Initialize/Py_Finalize will leave a CodecInfo type behind. I think it comes from this block of code /* Note that as of Python 2.2, heap-allocated type objects * can go away, but this code requires that they stay alive * until program exit. That's why we're careful with * refcounts here. type_list gets a new reference to tp, * while ownership of the reference type_list used to hold * (if any) was transferred to tp->tp_next in the line above. * tp is thus effectively immortal after this. */ Py_INCREF(tp); so that this "leak" would only exist if COUNT_ALLOCS is defined. I would guess that even more of the leaking type objects (16 per round) can be attributed to this. This completely obstructs measurements, and could well explain why the number of leaked objects is so much higher when COUNT_ALLOCS is defined. OTOH, I can see why "this code requires that they stay alive". Any ideas on how to solve this dilemma? Perhaps the type_list could be a list of weak references, so that the types do have a chance to go away when the last instance disappears? Regards, Martin From martin at v.loewis.de Mon Apr 17 10:31:40 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 10:31:40 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> Message-ID: <4443526C.8040907@v.loewis.de> Neal Norwitz wrote: > If the patch won't fix the problem, is there something else we can do > to ensure the python DLL is no longer used regardless of whether the > previous test passed or not? Rebooting the machine will help, and might be the only cure. It's Windows, after all :-( Of course, we shouldn't do that, and even if it was ok to reboot "remotely", the buildbot likely wouldn't come back automatically. > If we can get the process handle, can we > can subprocess.TerminateProcess()? You get the process handle either from CreateProcess (which buildbot did, so we can't get the handle), or from OpenProcess. For OpenProcess, we need a process id. One way to get that is through Process32First/Process32Next. These would provide the executable path, so it should be easy to find out which one is a python_d.exe binary. None of these functions is exposed through subprocess, so this is no option. In addition, I believe that buildbot *tries* to use TerminateProcess. The code is twisted, though, so it is hard to tell what actually happens. Of course, it would be possible to do this all in VisualBasic, so we could check in a vbscript file, similar to the one in http://support.microsoft.com/kb/q187913/ OTOH, we could just as well check in an executable that does all that, e.g. 
like the one in http://msdn.microsoft.com/library/default.asp?url=/library/en-us/perfmon/base/enumerating_all_modules_for_a_process.asp Regards, Martin From martin at v.loewis.de Mon Apr 17 13:57:08 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 13:57:08 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> Message-ID: <44438294.7070806@v.loewis.de> Tim Peters wrote: > 2. The buildbot code tries to kill the process itself. It appears (to judge > from the buildbot messges) that this never works on Windows. > > 3. For reasons that are still unknown, python_d.exe keeps running, > and forever. It's actually not too surprising that python_d.exe keeps running. The buildbot has a process handle for the cmd.exe process that runs test.bat. python_d.exe is only a child process of process. So killing cmd.exe wouldn't help, even if it worked. Regards, Martin From martin at v.loewis.de Mon Apr 17 17:48:43 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 17:48:43 +0200 Subject: [Python-Dev] [C++-sig] GCC version compatibility In-Reply-To: <u64rc49os.fsf@boost-consulting.com> References: <42CDA654.2080106@v.loewis.de> <uu0j6p7z1.fsf@boost-consulting.com> <20050708072807.GC3581@lap200.cdc.informatik.tu-darmstadt.de> <u8y0hl45u.fsf@boost-consulting.com> <42CEF948.3010908@v.loewis.de> <20050709102010.GA3836@lap200.cdc.informatik.tu-darmstadt.de> <42D0D215.9000708@v.loewis.de> <20050710125458.GA3587@lap200.cdc.informatik.tu-darmstadt.de> <42D15DB2.3020300@v.loewis.de> <20050716101357.GC3607@lap200.cdc.informatik.tu-darmstadt.de> <20051012120917.GA11058@lap200.cdc.informatik.tu-darmstadt.de> <u64rc49os.fsf@boost-consulting.com> Message-ID: <4443B8DB.7030502@v.loewis.de> David Abrahams wrote: > I just wanted to write to encourage some Python developers to look at > (and accept!) Christoph's patch. This is really crucial for smooth > interoperability between C++ and Python. I did, and accepted the patch. If there is anything left to be done, please submit another patch. Regards, Martin From martin at v.loewis.de Mon Apr 17 17:52:06 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 17:52:06 +0200 Subject: [Python-Dev] remote debugging with pdb In-Reply-To: <Pine.LNX.4.58.0508162119100.2711@bagira> References: <Pine.LNX.4.58.0508071312290.695@bagira> <20050808154503.GB28005@panix.com> <Pine.LNX.4.58.0508081926010.2814@bagira> <200508111802.44357.anthony@interlink.com.au> <24EEDE5B-4511-40D4-9C16-8A33C4ACE1C8@redivi.com> <Pine.LNX.4.58.0508162119100.2711@bagira> Message-ID: <4443B9A6.3030307@v.loewis.de> Ilya Sandler wrote: > There is a patch on SourceForge > python.org/sf/721464 > which allows pdb to read/write from/to arbitrary file objects. Would it > answer some of your concerns (eg remote debugging)? > > The patch probably will not apply to the current code, but I guess, I > could revive it if anyone thinks that it's worthwhile... > > What do you think? I just looked at it, and yes, it's a good idea. As you say, the patch is currently out of date. It is probably easiest to redo it from scratch; if you do, please use print redirections instead of self.file.write. 
Regards, Martin From martin at v.loewis.de Mon Apr 17 18:19:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 18:19:35 +0200 Subject: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()? Message-ID: <4443C017.3020900@v.loewis.de> Currently, the readdir() call releases the GIL. I believe this is not thread-safe, because readdir() does not need to be re-entrant; we should use readdir_r where available to get a thread-safe version. Comments? Regards, Martin From pje at telecommunity.com Mon Apr 17 18:53:57 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 17 Apr 2006 12:53:57 -0400 Subject: [Python-Dev] FYI: more clues re: tee+generator leak Message-ID: <5.1.1.6.0.20060417124813.01fab958@mail.telecommunity.com> I've been fiddling a bit with test_generators this morning, and have found that a stripped down version of the fibonacci test only leaks if the generator has a reference to a *copied* tee object. It doesn't matter whether the copied tee object is the second result from tee(), or if you just create a single tee object and use its __copy__() method, the leak only occurs if the cycle is: geniter -> frame -> ... -> copied_tee -> tdo ---+ ^ | | | +--------------------------------------------+ The "..." is to indicate that the frame may reference the object directly as a local variable, or via a cell. I've tried it both ways and it still leaks. Replacing "copied_tee" with an uncopied tee object does *not* leak. I have no idea what this means, although I've been staring at the relevant itertools code for some time now. It doesn't appear that the traverse functions are skipping anything. By the way, the above cycle will leak even if the generator is never iterated even once; it's quite simple to set up. I'm testing this using -R:: on test_generators, and hacking on the _fib function and friends. From rhettinger at ewtllc.com Mon Apr 17 18:03:53 2006 From: rhettinger at ewtllc.com (Raymond Hettinger) Date: Mon, 17 Apr 2006 09:03:53 -0700 Subject: [Python-Dev] PyObject_REPR() In-Reply-To: <ee2a432c0604112239o21377e17v4a282618f61e42a1@mail.gmail.com> References: <9e804ac0604111604i1cb58c0ekc220f3437e65ddd1@mail.gmail.com> <017c01c65dc0$da6899b0$6c3c0a0a@RaymondLaptop1> <ee2a432c0604111803o688f44f4l7e3327905359b74f@mail.gmail.com> <e8bf7a530604112005s1db43a30me584bb87a3816308@mail.gmail.com> <ee2a432c0604112239o21377e17v4a282618f61e42a1@mail.gmail.com> Message-ID: <4443BC69.207@ewtllc.com> If PyObject_REPR changes or gets renamed in Py2.5, I suggest modifying the implementation so that it returns a newly allocated C pointer rather one embedded in an inaccessible (unfreeable) PyStringObject. Roughly: r = PyObject_Repr(o); if (r == NULL) return NULL; s1 = PyObject_AS_STRING(r); s2 = strcpy(s1); Py_DECREF(r); return s2; The benefits are: * it won't throw-off leak checking (no Python objects get leaked) * the leak is slightly smaller (only the allocated string) * if the caller cares about memory, they have the option of freeing the returned pointer * error-checking is still possible. Neal Norwitz wrote: >Ok, then how about prefixing with _, adding a comment saying in big, >bold letters: FOR DEBUGGING PURPOSES ONLY, THIS LEAKS, and only >defining in a debug build? > >n >-- >On 4/11/06, Jeremy Hylton <jeremy at alum.mit.edu> wrote: > > >>It's intended as an internal debugging API. I find it very convenient >>for adding with a printf() here and there, which is how it got added >>long ago. 
It should really have a comment mentioning that it leaks >>the repr object, and starting with an _ wouldn't be bad either. >> >>Jeremy >> >>On 4/11/06, Neal Norwitz <nnorwitz at gmail.com> wrote: >> >> >>>On 4/11/06, Raymond Hettinger <raymond.hettinger at verizon.net> wrote: >>> >>> >>>>>It strikes me that it should not be used, or maybe renamed to _PyObject_REPR. >>>>>Should removing or renaming it be done in 2.5 or in Py3K? >>>>> >>>>> >>>>Since it is intrinsically buggy, I would support removal in Py2.5 >>>> >>>> >>>+1 on removal. Google only turned up a handleful of uses that I saw. >>> >>>n >>>_______________________________________________ >>>Python-Dev mailing list >>>Python-Dev at python.org >>>http://mail.python.org/mailman/listinfo/python-dev >>>Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu >>> >>> >>> >_______________________________________________ >Python-Dev mailing list >Python-Dev at python.org >http://mail.python.org/mailman/listinfo/python-dev >Unsubscribe: http://mail.python.org/mailman/options/python-dev/rhettinger%40ewtllc.com > > From pje at telecommunity.com Mon Apr 17 19:09:14 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 17 Apr 2006 13:09:14 -0400 Subject: [Python-Dev] FYI: more clues re: tee+generator leak In-Reply-To: <5.1.1.6.0.20060417124813.01fab958@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060417125611.01fa2b70@mail.telecommunity.com> At 12:53 PM 4/17/2006 -0400, Phillip J. Eby wrote: >By the way, the above cycle will leak even if the generator is never >iterated even once; it's quite simple to set up. I'm testing this using >-R:: on test_generators, and hacking on the _fib function and friends. Follow-up note: it's possible to create the same leak with this code: l = [] a, b = tee(l) l.append(b) Which -R:: reports as leaking 4 references. If you "l.append(a)" instead of 'b', there is no leaking. This showed that the problem was actually in the itertools module, as no generators are involved here. After staring at tee_copy until my eyes bled, I accidentally scrolled such that tee_new was on the screen at the same time and notice that tee_copy was missing a call to PyObject_GC_Track();. So then I fixed everything up and tried to check it in, to find that Thomas Wouters already found and fixed this yesterday. The moral of the story? Always catch up on the Python-checkins list before trying to track down cycle leaks. :) From ronaldoussoren at mac.com Mon Apr 17 20:30:04 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 17 Apr 2006 20:30:04 +0200 Subject: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()? In-Reply-To: <4443C017.3020900@v.loewis.de> References: <4443C017.3020900@v.loewis.de> Message-ID: <2889B6F3-7D51-45CA-BD58-4A6579492843@mac.com> On 17-apr-2006, at 18:19, Martin v. L?wis wrote: > Currently, the readdir() call releases the GIL. I believe > this is not thread-safe, because readdir() does not need > to be re-entrant; we should use readdir_r where available > to get a thread-safe version. > > Comments? AFAIK readdir is only unsafe when multiple threads use the same DIR* at the same time. The spec[1] seems to agree with me. It seems to me that this means the implementation of listdir in posixmodule.c doesn't need to be changed. 
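For reference, the readdir_r() variant Martin is proposing would look roughly like the sketch below (walk_dir is a hypothetical helper, error handling and the portable sizing of the entry buffer are omitted); Ronald's point is that the plain readdir() loop already in posixmodule.c is safe as long as each thread uses its own DIR*.

    #include <stddef.h>
    #include <dirent.h>

    /* Sketch only: list a directory with the re-entrant readdir_r().
       A real posix_listdir() change would also have to size the entry
       buffer via pathconf(path, _PC_NAME_MAX) on some platforms. */
    int walk_dir(const char *path)
    {
        DIR *dirp;
        struct dirent entry;
        struct dirent *result;

        if ((dirp = opendir(path)) == NULL)
            return -1;
        /* Py_BEGIN_ALLOW_THREADS would bracket the readdir_r() calls
           in posixmodule.c */
        while (readdir_r(dirp, &entry, &result) == 0 && result != NULL) {
            /* result->d_name is the next entry; re-acquire the GIL
               before turning it into a Python string */
        }
        /* Py_END_ALLOW_THREADS */
        closedir(dirp);
        return 0;
    }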
Ronald [1] : http://www.opengroup.org/onlinepubs/009695399/functions/ readdir.html > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ > ronaldoussoren%40mac.com From martin at v.loewis.de Mon Apr 17 20:50:32 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 20:50:32 +0200 Subject: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()? In-Reply-To: <2889B6F3-7D51-45CA-BD58-4A6579492843@mac.com> References: <4443C017.3020900@v.loewis.de> <2889B6F3-7D51-45CA-BD58-4A6579492843@mac.com> Message-ID: <4443E378.7070605@v.loewis.de> Ronald Oussoren wrote: > AFAIK readdir is only unsafe when multiple threads use the same DIR* at > the same time. The spec[1] seems to agree with me. > [1] : http://www.opengroup.org/onlinepubs/009695399/functions/readdir.html What specific sentence makes you think so? I see "The readdir() interface need not be reentrant." which seems to allow for an implementation that returns a static struct dirent. Of course, the most natural implementation associates the storage for the result with the DIR*, so it's probably not a real problem... Regards, Martin From martin at v.loewis.de Mon Apr 17 20:59:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 20:59:35 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <4443526C.8040907@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <4443526C.8040907@v.loewis.de> Message-ID: <4443E597.9000407@v.loewis.de> > OTOH, we could just as well check in an executable that > does all that, e.g. like the one in I did something like this: I checked the source of a kill_python.exe application which looks at all running processes and tries to kill python_d.exe. After several rounds of experimentation, this now was able to unstick Trent's build slave. Regards, Martin From ronaldoussoren at mac.com Mon Apr 17 21:08:27 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 17 Apr 2006 21:08:27 +0200 Subject: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()? In-Reply-To: <4443E378.7070605@v.loewis.de> References: <4443C017.3020900@v.loewis.de> <2889B6F3-7D51-45CA-BD58-4A6579492843@mac.com> <4443E378.7070605@v.loewis.de> Message-ID: <B908EA8A-E4FC-4FE1-95D8-77D658FF3391@mac.com> On 17-apr-2006, at 20:50, Martin v. L?wis wrote: > Ronald Oussoren wrote: >> AFAIK readdir is only unsafe when multiple threads use the same >> DIR* at >> the same time. The spec[1] seems to agree with me. >> [1] : http://www.opengroup.org/onlinepubs/009695399/functions/ >> readdir.html > > What specific sentence makes you think so? I see > > "The readdir() interface need not be reentrant." > > which seems to allow for an implementation that returns a static > struct dirent. A couple of lines down it says: "The pointer returned by readdir() points to data which may be overwritten by another call to readdir() on the same directory stream. This data is not overwritten by another call to readdir() on a different directory stream." This explicitly says that implementations cannot use a static dirent structure. > > Of course, the most natural implementation associates the storage > for the result with the DIR*, so it's probably not a real problem... 
If this were a problem on some platform I'd expect it to be so ancient that it doesn't offer readdir_r either. Ronald From martin at v.loewis.de Mon Apr 17 21:12:15 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 21:12:15 +0200 Subject: [Python-Dev] [ python-Patches-790710 ] breakpoint command lists in pdb In-Reply-To: <42F32EF7.6050208@info.ucl.ac.be> References: <42F32EF7.6050208@info.ucl.ac.be> Message-ID: <4443E88F.4030402@v.loewis.de> Gr?goire Dooms wrote: > What should I do to get it reviewed further ? (perhaps just this : > posting to python-dev :-) It didn't help that much, except for keeping your mail in my inbox. In any case, I went back to it and checked it in. Regards, Martin From martin at v.loewis.de Mon Apr 17 21:16:05 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 21:16:05 +0200 Subject: [Python-Dev] FishEye on Python CVS Repository In-Reply-To: <52431c5005060820217cb1f1fb@mail.gmail.com> References: <52431c5005060820217cb1f1fb@mail.gmail.com> Message-ID: <4443E975.4000208@v.loewis.de> Peter Moore wrote: > I'm responsible for setting up free FishEye hosting for community > projects. As a long time python user I of course added Python up > front. You can see it here: > > http://fisheye.cenqua.com/viewrep/python/ Can you please move that to the subversion repository (http://svn.python.org/projects/python), or, failing that, remove that entry? The CVS repository is no longer used. Regards, Martin From martin at v.loewis.de Mon Apr 17 21:26:43 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 21:26:43 +0200 Subject: [Python-Dev] problem installing current cvs - TabError In-Reply-To: <200506081449.09254.anthony@interlink.com.au> References: <42A5EC27.8010409@xs4all.nl> <20050607210816.GB19337@zot.electricrain.com> <200506081449.09254.anthony@interlink.com.au> Message-ID: <4443EBF3.3020902@v.loewis.de> Anthony Baxter wrote: > There's a scripts Tools/scripts/reindent.py - put it somewhere on your > PATH and run it before checkin, like "reindent.py -r Lib". It means Tim > or I don't have to run it for you <wink> As I kept forgetting what the name, location, and command line options of that script are, I now added a reindent makefile target. Regards, Martin From martin at v.loewis.de Mon Apr 17 21:30:21 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 21:30:21 +0200 Subject: [Python-Dev] Py_BEGIN_ALLOW_THREADS around readdir()? In-Reply-To: <B908EA8A-E4FC-4FE1-95D8-77D658FF3391@mac.com> References: <4443C017.3020900@v.loewis.de> <2889B6F3-7D51-45CA-BD58-4A6579492843@mac.com> <4443E378.7070605@v.loewis.de> <B908EA8A-E4FC-4FE1-95D8-77D658FF3391@mac.com> Message-ID: <4443ECCD.70907@v.loewis.de> Ronald Oussoren wrote: > A couple of lines down it says: > "The pointer returned by readdir() points to data which may be > overwritten by another call to readdir() on the same directory > stream. This data is not overwritten by another call to readdir() on > a different directory stream." > > This explicitly says that implementations cannot use a static dirent > structure. Ah, right. I read over this several times, and still managed to miss that point. Thanks. >> Of course, the most natural implementation associates the storage >> for the result with the DIR*, so it's probably not a real problem... 
> > If this were a problem on some platform I'd expect it to be so > ancient that it doesn't offer readdir_r either. Sure - I would have just removed Py_BEGIN_ALLOW_THREADS on systems which don't have readdir_r. But this is now unnecessary. Regards, Martin From tim.peters at gmail.com Mon Apr 17 21:37:50 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 15:37:50 -0400 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <44438294.7070806@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> Message-ID: <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> [Tim] >> ... >> 2. The buildbot code tries to kill the process itself. It appears (to judge >> from the buildbot messges) that this never works on Windows. >> >> 3. For reasons that are still unknown, python_d.exe keeps running, >> and forever. [Martin] > It's actually not too surprising that python_d.exe keeps running. No, what's surprising is that it keeps running _forever_. This isn't Unix, and, e.g., a defunct child process doesn't sit around waiting for its parent to reap it. Why doesn't the leftover python_d.exe complete running the test suite, and then go away all by itself? It doesn't, no matter how long you wait. That's the mystery to me. > The buildbot has a process handle for the cmd.exe process that runs > test.bat. python_d.exe is only a child process of process. So killing > cmd.exe wouldn't help, even if it worked. It suppose it's possible that killing cmd.exe actually did work, but the buildbot code misreports the outcome, and python_d.exe "runs forever" because it's blocked waiting on some resource (console I/O handle?) it inherited from its (no longer there) parent process. From ronaldoussoren at mac.com Mon Apr 17 21:51:56 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 17 Apr 2006 21:51:56 +0200 Subject: [Python-Dev] fat binaries for OSX Message-ID: <02FA0F5D-F7B7-40BB-AAA3-7683E804E0C6@mac.com> Hi, I've uploaded 3 patches that form the core of the python24-fat tree that Bob Ippolito and I have been maintaining for a while. With these patches one can build fat/universal binaries for python that run natively on OSX 10.3 and later. I'd like to merge these patches to the trunk, but would like some review. I'm especially unhappy with the code duplication in patch 1471925, but don't know how to solve that. * Patch 1471883: --enable-universalsdk on Mac OS X This patch introduces a --enable-universalsdk flag for configure and the required changes to the build system to get this to work. When this flag is used Python is build as a universal (aka fat) binary. * Patch 1471761: test for broken poll at runtime This patch moves the HAVE_BROKEN_POLL test from configure-time to runtime. With this patch we can have a single binary on OSX that works on OSX 10.3.9 or later while having select.poll available on those versions of the OS that have a functioning version poll(). * Patch 1471925: Weak linking support for OSX This patch adds weak linking support to the posix, time and socket modules. That is, the existance of a number of functions is tested for at runtime (on OSX only). With this patch one can use a python binary that was build on OSX 10.4 on OSX 10.3 systems, without loosing access to APIs that were introduced in 10.4 on OSX 10.4 systems. 
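The weak-linking patch boils down to the following pattern: the binary is built against the newer SDK, the newer symbols are marked weak, and availability is tested at runtime by comparing the function's address against NULL. A schematic only, with a made-up function name (foo_tiger_call / call_if_available are not from the actual patch, which also combines this with the usual configure checks):

    /* Hypothetical symbol: pretend foo_tiger_call() exists only on 10.4. */
    extern int foo_tiger_call(const char *arg) __attribute__((weak_import));

    int call_if_available(const char *arg)
    {
        if (foo_tiger_call == NULL)   /* weak symbol unresolved: running on 10.3 */
            return -1;                /* fall back, or report the API as missing */
        return foo_tiger_call(arg);
    }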
Ronald From martin at v.loewis.de Mon Apr 17 22:03:10 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 17 Apr 2006 22:03:10 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> Message-ID: <4443F47E.7000103@v.loewis.de> Tim Peters wrote: > No, what's surprising is that it keeps running _forever_. This isn't > Unix, and, e.g., a defunct child process doesn't sit around waiting > for its parent to reap it. Why doesn't the leftover python_d.exe > complete running the test suite, and then go away all by itself? It > doesn't, no matter how long you wait. That's the mystery to me. True. But I find that not too surprising: something deadlocks. A perfect deadlock aims to hold until the heat death of the universe; most of them only hold until reboot, or even just process termination. Now, as to *why* it deadlocks: that's indeed a mystery. But hey: it's Windows, so processes just do get stuck. It took them years to make sure they system continues running in such a case. > It suppose it's possible that killing cmd.exe actually did work, but > the buildbot code misreports the outcome, and python_d.exe "runs > forever" because it's blocked waiting on some resource (console I/O > handle?) it inherited from its (no longer there) parent process. It can't be that simple. Python's stdout should indeed be inherited from cmd.exe, but that, in turn, should have obtained it from buildbot. So even though cmd.exe closes its handle, Python's handle should still be fine. If buildbot then closes the other end of the pipe, Python should get ERROR_BROKEN_PIPE. The only deadlock I can see here is when buildbot does *not* close the pipe, but stops reading from it. In that case, Python's WriteFile would block. If that happens, it would be useful to attach with a debugger to find out where Python got stuck. Regards, Martin From dooms at info.ucl.ac.be Mon Apr 17 21:43:36 2006 From: dooms at info.ucl.ac.be (=?ISO-8859-1?Q?Gr=E9goire_Dooms?=) Date: Mon, 17 Apr 2006 21:43:36 +0200 Subject: [Python-Dev] [ python-Patches-790710 ] breakpoint command lists in pdb In-Reply-To: <4443E88F.4030402@v.loewis.de> References: <42F32EF7.6050208@info.ucl.ac.be> <4443E88F.4030402@v.loewis.de> Message-ID: <4443EFE8.8050105@info.ucl.ac.be> Martin v. L?wis wrote: > Gr?goire Dooms wrote: > >> What should I do to get it reviewed further ? (perhaps just this : >> posting to python-dev :-) >> > > It didn't help that much, except for keeping your mail in my inbox. > > In any case, I went back to it and checked it in. > Thanks for taking the time to review it and include it. I won't have to apply it by myself each time I upgrade my Python install. 
Keep up with the good work, Best, -- Gr?goire From tim.peters at gmail.com Mon Apr 17 23:05:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 17:05:18 -0400 Subject: [Python-Dev] refleaks & test_tcl & threads In-Reply-To: <9e804ac0604161127i726878f1ibe04a2bbfd278872@mail.gmail.com> References: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> <9e804ac0604161127i726878f1ibe04a2bbfd278872@mail.gmail.com> Message-ID: <1f7befae0604171405r784d125drb56d9098d5642309@mail.gmail.com> [Thomas Wouters] >> test_threading_local is not entirely consistent, but it looks a lot more >> reliable on my box than on Neal's automated mails: >> >> test_threading_local >> beginning 11 repetitions >> 12345678901 >> ........... >> test_threading_local leaked [34, 34, 34, 34, 34, 26, 26, 22, 34] >> references [also Thomas] > This is caused by _threading_local.local's __del__ method, or rather the > fact that it's part of a closure enclosing threading.enumerate . Fixing the > inner __del__ to call enumerate (which is 'threading.enumerate') directly, > rather than through the cellvar 'threading_enumerate', makes the leak go > away. The fact that the leakage is inconsistent is actually easily > explained: the 'local' instances are stored on the 'currentThread' object > indexed by 'key', and keys sometimes get reused (since they're basically > id(self)), so sometimes an old reference is overwritten. It doesn't quite > explain why using the cellvar causes the cycle, nor does it explain why > gc.garbage remains empty. I guess some Thread objects linger in threading's > _active or _limbo dicts, but I have no idea why having a cellvar in the > cycle matters; they seem to be participating in GC just fine, and I cannot > reproduce the leak with a simpler test. > > And on top of that, I'm not sure *why* _threading_local.local is doing the > song and dance to get a cellvar. If the global 'enumerate' (which is > threading.enumerate) disappears, it will be because Python is cleaning up. > Even if we had a need to clean up the local dict at that time (which I don't > believe we do), using a cellvar doesn't guarantee anything more than using a > global name. The threading_enumerate = enumerate line creates a persistent local variable at module import time, which (unlike a global name) can't get "None'd out" at shutdown time. BTW, it's very easy to miss that this line _is_ executed at module import time, and is executed only once over the life of the interpreter; more on that below. > Chances are very good that the 'threading' module has also been > cleared, meaning that while we still have a reference to > threading.enumerate, it cannot use the three globals it uses (_active, > _limbo, _active_limbo_lock.) All in all, I think matters improve > significantly if it just deals with the NameError it'll get at cleanup > (which it already does.) Well, you missed something obvious :-): the code is so clever now that its __del__ doesn't actually do anything. In outline: ""|" ... # Threading import is at end ... class local(_localbase): ... def __del__(): threading_enumerate = enumerate ... def __del__(self): try: threads = list(threading_enumerate()) except: # if enumerate fails, as it seems to do during # shutdown, we'll skip cleanup under the assumption # that there is nothing to clean up return ... 
return __del__ __del__ = __del__() from threading import currentThread, enumerate, RLock """ Lots of questions pop to mind, from why the import is at the bottom of the file, to why it's doing this seemingly bizarre nested-__del__ dance. I don't have good answers to any of them <0.1 wink>, but this is the bottom line: at the time threading_enumerate = enumerate is executed, `enumerate` is bound to __builtin__.enumerate, not to threading.enumerate. That line is executed during the _execution_ of the "class local" statement, the first time the module is imported, and the import at the bottom of the file has not been executed by that time. So there is no global `enumerate` at the time, but there is one in __builtin__ so that's the one it captures. As a result, "the real" __del__'s threads = list(threading_enumerate()) line _always_ raises an exception, namely the TypeError "enumerate() takes exactly 1 argument (0 given)", which is swallowed by the bare "except:", and __del__ then returns having accomplished nothing. Of course that's the real reason it never cleans anything up now -- while virtually any way of rewriting it causes it to get the _intended_ threading.enumerate, and then the leaks stop. I'll check one of those in. From skip at pobox.com Mon Apr 17 23:29:32 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 16:29:32 -0500 Subject: [Python-Dev] Returning -1 from function with unsigned long type Message-ID: <17476.2236.481146.139054@montanaro.dyndns.org> I'm fiddling with the "compile Python w/ C++" stuff and came across a number of places where a function is defined as returning unsigned long or unsigned long long but returns -1. For example, see PyInt_AsUnsignedLongMask. What's the correct fix for that, return ~0 (assuming twos-complement arithmetic), cast -1 to unsigned long? Or does the API need to be changed somehow? Skip From tomerfiliba at gmail.com Mon Apr 17 23:38:26 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Mon, 17 Apr 2006 23:38:26 +0200 Subject: [Python-Dev] adding Construct to the standard library? Message-ID: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> hello folks after several people (several > 10) contacted me and said "IMHO 'construct' is a good candidate for stdlib", i thought i should give it a try. of course i'm not saying it should be included right now, but in 6 months time, or such a timeframe (aiming at python 2.6? some 2.5.x release?) a little intro: "Construct" ( http://pyconstruct.sourceforge.net/) is a library for declaratively defining data structures at the bit-level. these constructs can be used to parse raw data into objects, or build objects into raw data. you can see a couple of examples at http://pyconstruct.wikispaces.com/examples being "data structures" they are not limited to simple structures -- they can be linked lists, for example, or an enitre efl32 file, with sections and pointers (included in the distribution). currently i'm writing a parser of ext2 file systems, to allow inspecting file systems without mounting. why include Construct? 
* the struct module is very nice, but very limited and non-pythonic as well * pure python (no platform/security issues) * lots of people need to parse and build binary data structures, it's not an esoteric library * license: public domain * quite a large user base for such a short time (proves the need of the community) * easy to use and extend (follows the componentization pattern) * declarative: you don't need to write executable code for most cases why not: * the code is (very) young. stable and all, but less than a month on the loose. * new features may still be added / existing ones may be changed in a non-backwards-compatible manner so why am i saying this now, instead of waiting a few months for it to maturet? well, i wanted to get feedback. those of you who have seen/used the library, please tell me what you think: * is it suitable for a standard library? * what more features would you want? * any changes you think are necessary? i'm starting this now, in order to have a mature version in the (near) future. thanks, -tomer -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060417/b08eeed7/attachment.htm From skip at pobox.com Mon Apr 17 23:39:46 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 16:39:46 -0500 Subject: [Python-Dev] posix_confstr seems wrong Message-ID: <17476.2850.358011.427823@montanaro.dyndns.org> More C++ stuff... According to the man page on my Mac: If the call to confstr() is not successful, -1 is returned and errno is set appropriately. but the code in posix_confstr looks like: if (PyArg_ParseTuple(args, "O&:confstr", conv_confstr_confname, &name)) { int len = confstr(name, buffer, sizeof(buffer)); errno = 0; if (len == 0) { if (errno != 0) posix_error(); else result = PyString_FromString(""); } ... 1. Why is errno being set to 0? 2. Why is errno's value then tested to see if it's not zero? Looks like this have been that way since December 1999 when Fred added it. Skip From fdrake at acm.org Mon Apr 17 23:45:49 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Mon, 17 Apr 2006 17:45:49 -0400 Subject: [Python-Dev] posix_confstr seems wrong In-Reply-To: <17476.2850.358011.427823@montanaro.dyndns.org> References: <17476.2850.358011.427823@montanaro.dyndns.org> Message-ID: <200604171745.50283.fdrake@acm.org> On Monday 17 April 2006 17:39, skip at pobox.com wrote: > 1. Why is errno being set to 0? The C APIs don't promise to clear errno on input; you have to do that yourself. > 2. Why is errno's value then tested to see if it's not zero? > > Looks like this have been that way since December 1999 when Fred added it. Looks like a bug to me. It should be set just before confstr() is called. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From tim.peters at gmail.com Mon Apr 17 23:48:10 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 17:48:10 -0400 Subject: [Python-Dev] Returning -1 from function with unsigned long type In-Reply-To: <17476.2236.481146.139054@montanaro.dyndns.org> References: <17476.2236.481146.139054@montanaro.dyndns.org> Message-ID: <1f7befae0604171448p32f0762dw196df950da198ecc@mail.gmail.com> [skip at pobox.com] > I'm fiddling with the "compile Python w/ C++" stuff and came across a number > of places where a function is defined as returning unsigned long or unsigned > long long but returns -1. For example, see PyInt_AsUnsignedLongMask. 
> What's the correct fix for that, return ~0 (assuming twos-complement > arithmetic), cast -1 to unsigned long? Explicitly casting -1 is both the obvious and best way, and is guaranteed to "work as intended" by the standards. > Or does the API need to be changed somehow? Well, it's ubiquitous in Python that C API calls returning any kind of integer return -1 (and arrange to make PyErr_Occurred() return true) in case of error. This is clumsy when the integer retured is of an unsigned type, but it _is_ C we're talking about ;-) From skip at pobox.com Mon Apr 17 23:49:30 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 16:49:30 -0500 Subject: [Python-Dev] posix_confstr seems wrong In-Reply-To: <200604171745.50283.fdrake@acm.org> References: <17476.2850.358011.427823@montanaro.dyndns.org> <200604171745.50283.fdrake@acm.org> Message-ID: <17476.3434.154501.845250@montanaro.dyndns.org> Fred> Looks like a bug to me. It should be set just before confstr() is Fred> called. Thanks. I'll fix, test and check in... Skip From skip at pobox.com Mon Apr 17 23:55:04 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 16:55:04 -0500 Subject: [Python-Dev] Returning -1 from function with unsigned long type In-Reply-To: <1f7befae0604171448p32f0762dw196df950da198ecc@mail.gmail.com> References: <17476.2236.481146.139054@montanaro.dyndns.org> <1f7befae0604171448p32f0762dw196df950da198ecc@mail.gmail.com> Message-ID: <17476.3768.959907.650998@montanaro.dyndns.org> Tim> Explicitly casting -1 is both the obvious and best way, and is Tim> guaranteed to "work as intended" by the standards. Thanks. I'll fix 'em. Skip From tdelaney at avaya.com Tue Apr 18 00:41:11 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Tue, 18 Apr 2006 08:41:11 +1000 Subject: [Python-Dev] pdb segfaults in 2.5 trunk? Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E672@au3010avexu1.global.avaya.com> Tim Peters wrote: >> I might see if I can work up a patch over the easter long weekend if >> no one beats me to it. What files should I be looking at (it would >> be my first C-level python patch)? Blegh - my parents came to visit ... Tim Delaney From ianb at colorstudy.com Tue Apr 18 01:44:38 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Mon, 17 Apr 2006 18:44:38 -0500 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <44442866.2090805@colorstudy.com> Steven Bethard wrote: > This PEP proposes a generalization of the class-declaration syntax, > the ``make`` statement. The proposed syntax and semantics parallel > the syntax for class definition, and so:: > > make <callable> <name> <tuple>: > <block> I can't really see any use case for <tuple>. In particular, you could always choose to implement this: make Foo someobj(stuff): ... like: make Foo(stuff) someobj: ... I don't think I'd naturally use the tuple position for anything, and so it's an arbitrary and usually empty position in the call, just to support type() which already has its own syntax. So maybe it makes less sense to copy the class/metaclass arguments so closely, and so moving to this might feel a bit better: make someobj Foo(stuff): ... And actually it reminds me more of class statements, which are in the form "keyword name(things_you_build_from)". Which then obviously leads to more parenthesis: make someobj(Foo(stuff)): ... 
Except I don't know what "make someobj(A, B)" would mean, so maybe the parenthesis are uncalled for. I prefer the look of the statement without parenthesis anyway. Really, to me this syntax feels like support for a more prototype-based construct. And many of the class-abusing metaclasses I've used have really looked similar to prototypes. The "class" statement is caught up in a bunch of very class-like semantics, and a more explicit/manual technique of creating objects opens up lots of potential. With that in mind, I think __call__ might be the wrong method to call on the builder. For instance, if you were actually going to implement prototypes on this, you wouldn't want to steal all uses of __call__ just for the cloning machinery. So __make__ would be nicer. Personally this would also let people using older constructs (like a plain __call__(**kw)) to keep that in addition to supporting this new construct. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From rowen at cesmail.net Tue Apr 18 02:19:43 2006 From: rowen at cesmail.net (Russell E. Owen) Date: Mon, 17 Apr 2006 17:19:43 -0700 Subject: [Python-Dev] PEP 359: The "make" Statement References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> Message-ID: <rowen-B6C55B.17194317042006@sea.gmane.org> At some point folks were discussing use cases of "make" where it was important to preserve the order in which items were added to the namespace. I'd like to suggest adding an implementation of an ordered dictionary to standard python (e.g. as a library or built in type). It's inherently useful, and in this case it could be used by "make". -- Russell From brett at python.org Tue Apr 18 02:34:16 2006 From: brett at python.org (Brett Cannon) Date: Mon, 17 Apr 2006 17:34:16 -0700 Subject: [Python-Dev] possible fix for recursive __call__ segfault Message-ID: <bbaeab100604171734x4bbe017ci94dce673b8d54585@mail.gmail.com> Bug 532646 is a check for recursive __call__ methods where it is just set to an instance of the same class:: class A: pass A.__call__ = A() a = A() try: a() # This should not segfault except RuntimeError: pass else: raise TestFailed, "how could this not have overflowed the stack?" Turns out this was never handled for new-style classes and thus goes back to 2.4 at least. I don't know if this is a good solution or not, but I came up with this as a quick fix:: Index: Objects/typeobject.c =================================================================== --- Objects/typeobject.c (revision 45499) +++ Objects/typeobject.c (working copy) @@ -4585,6 +4585,11 @@ if (meth == NULL) return NULL; + if (meth == self) { + PyErr_SetString(PyExc_RuntimeError, + "recursive __call__ definition"); + return NULL; + } res = PyObject_Call(meth, args, kwds); Py_DECREF(meth); return res; Of course SF is down (can't wait until the summer when I can do more tracker work) so I can't post there at the moment. But does anyone think there is a better solution to this without some counter somewhere to keep track how far one goes down fetching __call__ attributes? 
-Brett From steven.bethard at gmail.com Tue Apr 18 03:15:18 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 17 Apr 2006 19:15:18 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <rowen-B6C55B.17194317042006@sea.gmane.org> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <rowen-B6C55B.17194317042006@sea.gmane.org> Message-ID: <d11dcfba0604171815g5966be6cj9af692a2be9cc9ed@mail.gmail.com> On 4/17/06, Russell E. Owen <rowen at cesmail.net> wrote: > At some point folks were discussing use cases of "make" where it was > important to preserve the order in which items were added to the > namespace. > > I'd like to suggest adding an implementation of an ordered dictionary to > standard python (e.g. as a library or built in type). It's inherently > useful, and in this case it could be used by "make". Not to argue against adding an ordered dictionary somewhere to Python, but for the moment I've been convinced that it's not worth the complication to allow the dict in which the make-statement's block is executed to be customized. The original use case had been XML-building, and there are much nicer solutions using the with-statement than there would be with the make-statement. If you think you have a better (non-XML) use case though, I'm willing to reconsider it. The next update of the PEP will discuss this in more detail, but search c.l.py for some of the discussion. Steve -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From steven.bethard at gmail.com Tue Apr 18 03:24:57 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 17 Apr 2006 19:24:57 -0600 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <44442866.2090805@colorstudy.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <44442866.2090805@colorstudy.com> Message-ID: <d11dcfba0604171824g12000777o9d80463868f095a5@mail.gmail.com> On 4/17/06, Ian Bicking <ianb at colorstudy.com> wrote: > Steven Bethard wrote: > > This PEP proposes a generalization of the class-declaration syntax, > > the ``make`` statement. The proposed syntax and semantics parallel > > the syntax for class definition, and so:: > > > > make <callable> <name> <tuple>: > > <block> > > I can't really see any use case for <tuple>. FWIW, I've been thinking of the tuple as the "*args" and the block as the "**kwargs". But certainly any function can be written to take all keyword arguments. > In particular, you could always choose to implement this: > > make Foo someobj(stuff): ... > > like: > > make Foo(stuff) someobj: ... [snip] > and so moving to this might feel a bit better: > > make someobj Foo(stuff): ... Just to clarify, you mean translating: make <name> <callable>: <block> into the assignment:: <name> = <callable>("<name>", <namespace>) ? Looks okay to me. I'm only hesitant because on c.l.py I got a pretty strong push for maintaining compatiblity with the class statement. > With that in mind, I think __call__ might be the wrong method to call on > the builder. For instance, if you were actually going to implement > prototypes on this, you wouldn't want to steal all uses of __call__ just > for the cloning machinery. So __make__ would be nicer. Personally this > would also let people using older constructs (like a plain > __call__(**kw)) to keep that in addition to supporting this new construct. 
Yeah, I guess the real question here is, do we expect that types will want to support both normal creation and creation using the make statement? If the answer is yes, then we definitely need to introduce a __make__ slot. Steve -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From skip at pobox.com Tue Apr 18 03:27:07 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 20:27:07 -0500 Subject: [Python-Dev] How to make _sre.c compile w/ C++? Message-ID: <17476.16491.338639.381288@montanaro.dyndns.org> I checked in a number of minor changes this evening to correct various problems compiling Python with a C++ compiler, in my case Apple's version of g++ 4.0. I'm stuck on Modules/_sre.c though. After applying this change: Index: Modules/_sre.c =================================================================== --- Modules/_sre.c (revision 45497) +++ Modules/_sre.c (working copy) @@ -2284,10 +2284,10 @@ ptr = getstring(ptemplate, &n, &b); if (ptr) { if (b == 1) { - literal = sre_literal_template(ptr, n); + literal = sre_literal_template((SRE_CHAR *)ptr, n); } else { #if defined(HAVE_UNICODE) - literal = sre_uliteral_template(ptr, n); + literal = sre_uliteral_template((Py_UNICODE *)ptr, n); #endif } } else { I am left with this error: ../Modules/_sre.c: In function 'PyObject* pattern_subx(PatternObject*, PyObject*, PyObject*, int, int)': ../Modules/_sre.c:2287: error: cannot convert 'Py_UNICODE*' to 'unsigned char*' for argument '1' to 'int sre_literal_template(unsigned char*, int)' During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call to sre_literal_template is incorrect. Any ideas how to fix things? As clever as the two-pass compilation thing is, I must admit it confuses me. Thx, Skip From skip at pobox.com Tue Apr 18 03:35:14 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 17 Apr 2006 20:35:14 -0500 Subject: [Python-Dev] Gentoo failures - it's blaming me... Message-ID: <17476.16978.351728.629099@montanaro.dyndns.org> I'm on the blame list for the current gentoo buildbot failures. I promise I ran "make test" before checking anything in. I don't see where the changes I checked in would have caused the reported test failures, but I'm investigating. If anyone has any suggestions, let me know. Skip From anthony at interlink.com.au Tue Apr 18 04:00:23 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Tue, 18 Apr 2006 12:00:23 +1000 Subject: [Python-Dev] How to make _sre.c compile w/ C++? In-Reply-To: <17476.16491.338639.381288@montanaro.dyndns.org> References: <17476.16491.338639.381288@montanaro.dyndns.org> Message-ID: <200604181200.27702.anthony@interlink.com.au> On Tuesday 18 April 2006 11:27, skip at pobox.com wrote: > During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call > to sre_literal_template is incorrect. Any ideas how to fix things? I thought (but haven't had time to test) that making getstring return a union that's either SRE_CHAR* or Py_UNICODE* would make the problem go away. -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From tim.peters at gmail.com Tue Apr 18 04:51:56 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 22:51:56 -0400 Subject: [Python-Dev] Gentoo failures - it's blaming me... 
In-Reply-To: <17476.16978.351728.629099@montanaro.dyndns.org> References: <17476.16978.351728.629099@montanaro.dyndns.org> Message-ID: <1f7befae0604171951t64b2f5acp1dfac2198ffb8f5c@mail.gmail.com> [skip at pobox.com] > I'm on the blame list for the current gentoo buildbot failures. I promise I > ran "make test" before checking anything in. I don't see where the changes > I checked in would have caused the reported test failures, but I'm > investigating. If anyone has any suggestions, let me know. A URL really helps. I'm guessing this one: http://www.python.org/dev/buildbot/all/amd64%20gentoo%20trunk/builds/546/step-test/0 If so, Phillip was also in the blamelist, and he's since fixed the test_pyclbr problem he introduced. BTW, thanks for taking the blamelist seriously! It's usually very helpful. From tim.peters at gmail.com Tue Apr 18 05:39:36 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 17 Apr 2006 23:39:36 -0400 Subject: [Python-Dev] refleaks & test_tcl & threads In-Reply-To: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> References: <9e804ac0604160910o1232e518k71b46c4e881dd043@mail.gmail.com> Message-ID: <1f7befae0604172039p20ad45bfp3544959b19b24c52@mail.gmail.com> [Thomas Wouters] > ... > One remaining issue with refleakhunting on my machine is that test_tcl can't > stand being run twice. Even without -R, this makes Python hang while waiting > for a mutex in the second run through test_tcl: > > ...trunk $ ./python -E -tt Lib/test/regrtest test_tcl test_tcl > > Attaching gdb to the hung process shows this unenlightening trace: > #0 0x00002b7d6629514b in __lll_mutex_lock_wait () from /lib/libpthread.so.0 > #1 0x00002b7d6639a280 in completed.4801 () from /lib/libpthread.so.0 > #2 0x0000000000000004 in ?? () > #3 0x00002b7d66291dca in pthread_mutex_lock () from /lib/libpthread.so.0 > #4 0x0000000000000000 in ?? () > > The process has one other thread, which is stuck here: > #0 0x00002b7d667f14d6 in __select_nocancel () from /lib/libc.so.6 > #1 0x00002b7d67512d8c in Tcl_WaitForEvent () from /usr/lib/libtcl8.4.so.0 > #2 0x00002b7d66290b1c in start_thread () from /lib/libpthread.so.0 > #3 0x00002b7d667f8962 in clone () from /lib/libc.so.6 > #4 0x0000000000000000 in ?? () > > It smells like test_tcl or Tkinter is doing something wrong with regards to > threads. I can reproduce this on a few machines, but all of them run newish > linux kernels with newish glibc's and newish tcl/tk. At least in kernel/libc > land, various thread related things changed of late. I don't have access to > other machines with tcl/tk right now, but I wonder if anyone can reproduce > this in different situations. FYI, there's no problem running test_tcl with -R on WinXP Pro SP2 from current trunk: C:\Code\python\PCbuild>python_d -E -tt ../lib/test/regrtest.py -R:: test_tcl test_tcl beginning 9 repetitions 123456789 ......... 1 test OK. [27306 refs] That's using Tcl/Tk 8.4.12 (I don't know what newish means in Tcl-land these days). 
From ilya at bluefir.net Tue Apr 18 06:22:02 2006 From: ilya at bluefir.net (Ilya Sandler) Date: Mon, 17 Apr 2006 21:22:02 -0700 (PDT) Subject: [Python-Dev] remote debugging with pdb In-Reply-To: <4443B9A6.3030307@v.loewis.de> References: <Pine.LNX.4.58.0508071312290.695@bagira> <20050808154503.GB28005@panix.com> <Pine.LNX.4.58.0508081926010.2814@bagira> <200508111802.44357.anthony@interlink.com.au> <24EEDE5B-4511-40D4-9C16-8A33C4ACE1C8@redivi.com> <Pine.LNX.4.58.0508162119100.2711@bagira> <4443B9A6.3030307@v.loewis.de> Message-ID: <Pine.LNX.4.58.0604172050300.1278@bagira> On Mon, 17 Apr 2006, [ISO-8859-1] "Martin v. L?wis" wrote: > > There is a patch on SourceForge > > python.org/sf/721464 > > which allows pdb to read/write from/to arbitrary file objects. Would it > > answer some of your concerns (eg remote debugging)? > > > > I guess, I could revive it if anyone thinks that it's worthwhile... > > > I just looked at it, and yes, it's a good idea. Ok, I'll look into it and submit as a new SF item (probably within 2-3 weeks)... A question though: the patch will touch the code in many places and so is likely to break other pdb patches which are in SF (e.g 1393667( restart patch by rockyb) and 1267629('until' patch by me)). Any chance of getting those accepted/rejected before remote debugging patch? Thanks Ilya From martin at v.loewis.de Tue Apr 18 07:22:02 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 18 Apr 2006 07:22:02 +0200 Subject: [Python-Dev] How to make _sre.c compile w/ C++? In-Reply-To: <17476.16491.338639.381288@montanaro.dyndns.org> References: <17476.16491.338639.381288@montanaro.dyndns.org> Message-ID: <4444777A.4060009@v.loewis.de> skip at pobox.com wrote: > if (b == 1) { > - literal = sre_literal_template(ptr, n); > + literal = sre_literal_template((SRE_CHAR *)ptr, n); > } else { > #if defined(HAVE_UNICODE) > - literal = sre_uliteral_template(ptr, n); > + literal = sre_uliteral_template((Py_UNICODE *)ptr, n); > #endif > ../Modules/_sre.c: In function 'PyObject* pattern_subx(PatternObject*, PyObject*, PyObject*, int, int)': > ../Modules/_sre.c:2287: error: cannot convert 'Py_UNICODE*' to 'unsigned char*' for argument '1' to 'int sre_literal_template(unsigned char*, int)' > > During the 16-bit pass, SRE_CHAR expands to Py_UNICODE, so the call to > sre_literal_template is incorrect. Any ideas how to fix things? sre_literal_template doesn't take SRE_CHAR*, but unsigned char*. So just cast to that. Regards, Martin From mal at egenix.com Tue Apr 18 10:55:25 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 18 Apr 2006 10:55:25 +0200 Subject: [Python-Dev] [Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py In-Reply-To: <20060418005956.156301E400A@bag.python.org> References: <20060418005956.156301E400A@bag.python.org> Message-ID: <4444A97D.40406@egenix.com> Phillip.eby wrote: > Author: phillip.eby > Date: Tue Apr 18 02:59:55 2006 > New Revision: 45510 > > Modified: > python/trunk/Lib/pkgutil.py > python/trunk/Lib/pydoc.py > Log: > Second phase of refactoring for runpy, pkgutil, pydoc, and setuptools > to share common PEP 302 support code, as described here: > > http://mail.python.org/pipermail/python-dev/2006-April/063724.html Shouldn't this new module be named "pkglib" to be in line with the naming scheme used for all the other utility modules, e.g. httplib, imaplib, poplib, etc. ? > pydoc now supports PEP 302 importers, by way of utility functions in > pkgutil, such as 'walk_packages()'. 
It will properly document > modules that are in zip files, and is backward compatible to Python > 2.3 (setuptools installs for Python <2.5 will bundle it so pydoc > doesn't break when used with eggs.) Are you saying that the installation of setuptools in Python 2.3 and 2.4 will then overwrite the standard pydoc included with those versions ? I think that's the wrong way to go if not made an explicit option in the installation process or a separate installation altogether. I bothered by the fact that installing setuptools actually changes the standard Python installation by either overriding stdlib modules or monkey-patching them at setuptools import time. > What has not changed is that pydoc command line options do not support > zip paths or other importer paths, and the webserver index does not > support sys.meta_path. Those are probably okay as limitations. > > Tasks remaining: write docs and Misc/NEWS for pkgutil/pydoc changes, > and update setuptools to use pkgutil wherever possible, then add it > to the stdlib. Add setuptools to the stdlib ? I'm still missing the PEP for this along with the needed discussion touching among other things, the change of the distutils standard "python setup.py install" to install an egg instead of a site package. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 18 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From arigo at tunes.org Tue Apr 18 14:41:09 2006 From: arigo at tunes.org (Armin Rigo) Date: Tue, 18 Apr 2006 14:41:09 +0200 Subject: [Python-Dev] possible fix for recursive __call__ segfault In-Reply-To: <bbaeab100604171734x4bbe017ci94dce673b8d54585@mail.gmail.com> References: <bbaeab100604171734x4bbe017ci94dce673b8d54585@mail.gmail.com> Message-ID: <20060418124109.GA8836@code0.codespeak.net> Hi Brett, On Mon, Apr 17, 2006 at 05:34:16PM -0700, Brett Cannon wrote: > + if (meth == self) { > + PyErr_SetString(PyExc_RuntimeError, > + "recursive __call__ definition"); > + return NULL; > + } This is not the proper way, as it can be worked around with a pair of objects whose __call__ point to each other. The solution is to use the counter of Py_{Enter,Leave}RecursiveCall(), as was done for old-style classes (see classobject.c). By the way, this is a known problem: the example you show is Lib/test/crashers/infinite_rec_3.py, and the four other infinite_rec_*.py are all slightly more subtle ways to trigger a similar infinite loop in C. They point to the SF bug report at http://python.org/sf/1202533, where we discuss the problem in general. Basically, someone should try to drop many Py_{Enter,Leave}RecursiveCall() pairs in the source until all the currently-known bugs go away, and then measure if this has a noticeable performance impact. A bientot, Mr. 
8 Of The 12 Files In That Directory From andymac at bullseye.apana.org.au Tue Apr 18 14:08:34 2006 From: andymac at bullseye.apana.org.au (Andrew MacIntyre) Date: Tue, 18 Apr 2006 23:08:34 +1100 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <4443F47E.7000103@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> <4443F47E.7000103@v.loewis.de> Message-ID: <4444D6C2.10606@bullseye.apana.org.au> Martin v. L?wis wrote: > It can't be that simple. Python's stdout should indeed be inherited > from cmd.exe, but that, in turn, should have obtained it from > buildbot. So even though cmd.exe closes its handle, Python's handle > should still be fine. If buildbot then closes the other end of the > pipe, Python should get ERROR_BROKEN_PIPE. The only deadlock I > can see here is when buildbot does *not* close the pipe, but stops > reading from it. In that case, Python's WriteFile would block. > > If that happens, it would be useful to attach with a debugger to > find out where Python got stuck. I doubt it has anything to do with this issue, but I just thought I'd mention something strange I've encountered on Windows XP Pro (SP2) at work. If Python terminates due to an uncaught exception, with stdout & stderr redirected externally (ie within the batch file that started Python), the files that were redirected to cannot be deleted/renamed until the system is rebooted. If a bare except is used to trap any such exceptions (and the traceback printed explicitly) so that Python terminates normally, there is no problem (ie the redirected files can be deleted/renamed etc). I've never reported this as a Python bug because I've considered the antivirus SW likely to be the culprit. I don't recall seeing this with Windows 2000, but much was changed in the transition from the Win2k SOE to the WinXP SOE. A wild shot at best... Andrew. -- ------------------------------------------------------------------------- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac at pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From gjcarneiro at gmail.com Tue Apr 18 15:25:45 2006 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Tue, 18 Apr 2006 13:25:45 +0000 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> Message-ID: <a467ca4f0604180625q49c62696k6f0b87c271524dd1@mail.gmail.com> > > why include Construct? > * the struct module is very nice, but very limited and non-pythonic as > well > * pure python (no platform/security issues) > IMHO this is a drawback. More on this below. * lots of people need to parse and build binary data structures, it's not an > esoteric library > * license: public domain > * quite a large user base for such a short time (proves the need of the > community) > Indeed, I wish I had known about this a year ago; it would have saved me a lot of work. Of course it probably didn't exist a year ago... :( > * easy to use and extend (follows the componentization pattern) > * declarative: you don't need to write executable code for most cases > Well, declarative is less flexible. 
OTOH declarative is nice in the way it is more readable and allows more optimisations. why not: > * the code is (very) young. stable and all, but less than a month on the > loose. > * new features may still be added / existing ones may be changed in a > non-backwards-compatible manner > > so why am i saying this now, instead of waiting a few months for it to > maturet? > well, i wanted to get feedback. those of you who have seen/used the > library, please tell me what you think: > * is it suitable for a standard library? > * what more features would you want? > * any changes you think are necessary? > This is a very nice library indeed. But the number one feature that I need in something like this would be to use C. That's because of my application specific requirements, where i have observed that reapeatedly using struct.pack/unpack and reading bytes from a stream represents a considerable CPU overhead, whereas the same thing in C would be ultra fast. IMHO, at least in theory Construct could have small but fast C extension to take care of the encoding and decoding, which is the critical path. Everything else, like the declaration part, can be python, as it is usually done once on application startup. If you agree to go down this path I might even be able to volunteer some of my time to help, but it's not my decision. Best regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060418/626d06f1/attachment.html From amk at amk.ca Tue Apr 18 16:04:19 2006 From: amk at amk.ca (A.M. Kuchling) Date: Tue, 18 Apr 2006 10:04:19 -0400 Subject: [Python-Dev] PEP 343: confusing context terminology Message-ID: <20060418140419.GA6328@localhost.localdomain> PEP 343 says: This PEP proposes that the protocol used by the with statement be known as the "context management protocol", and that objects that implement that protocol be known as "context managers". The term "context" then encompasses all objects with a __context__() method that returns a context manager (this means that all context managers are contexts, but not all contexts are context managers). I read this paragraph as meaning: context = 'thing with __context__()' context manager = 'thing returned by __context__()' So in a 'with' statement: with A as B: ... A is a context; the context manager is internal and unavailable. The PEP uses this terminology, but the documentation for contextlib seems to reverse things. The decorator is called '@contextmanager', not '@context', and the text says things like: Note that you can use \code{@contextmanager} to define a context manager's \method{__context__} method. This is usually more convenient than creating another class just to serve as a context. For example: or: \begin{funcdesc}{nested}{ctx1\optional{, ctx2\optional{, ...}}} Combine multiple context managers into a single nested context manager. ref/ref3.tex uses the terms one way, then reverses them: A \dfn{context manager} is an object that manages the entry to, and exit from, a \dfn{context} surrounding a block of code. Context managers are normally invoked using the \keyword{with} statement (described in section~\ref{with}), but can also be used by directly invoking their methods. \begin{methoddesc}[context manager]{__context__}{self} Invoked when the object is used as the context expression of a \keyword{with} statement. The return value must implement \method{__enter__()} and \method{__exit__()} methods. 
Simple context managers that wish to directly implement \method{__enter__()} and \method{__exit__()} should just return \var{self}. Context managers written in Python can also implement this method using a generator function decorated with the \function{contextlib.contextmanager} decorator, as this can be simpler than writing individual \method{__enter__()} and \method{__exit__()} methods when the state to be managed is complex. Personally, I find this terminology counterintuitive. The "What's New" uses 'context manager = thing with __context()' because I read the first paragraph quoted from PEP 343, got confused, decided it couldn't possibly mean what it said, and fell back on the intuition that a 'manager' is usually a big long-lived thing while a 'context' is more likely to be a transient thing that exists for the duration of a statement and is then discarded. So, my questions: 1) Am I misunderstanding the PEP? Is 'context = 'thing with __context__()' really the PEP's meaning? 2) If the answer to 1) is 'yes', can we change the PEP? My intuition is that a 'manager' keeps track of multiple entities being managed; to have a single 'context' produce multiple 'context managers' over time seems exactly backwards. I think I understand the logic behind the name -- that the context manager *manages* the entry to and exit from the block -- but think 'Manager' is the wrong term for this because the object is really encapsulating a procedure/algorithm, not managing a set of objects. The closest GoF pattern name might be 'Strategy', which is still a bit of a stretch. 3) If the answer to 2) is "buzz off" :), the documentation (lib/contextlib, RefGuide, and What's New) needs to be fixed to use the terminology consistent with PEP 343. I'll do the "What's New", but someone else would have to update the other two. --amk From tomerfiliba at gmail.com Tue Apr 18 17:25:38 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Tue, 18 Apr 2006 17:25:38 +0200 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <a467ca4f0604180625q49c62696k6f0b87c271524dd1@mail.gmail.com> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <a467ca4f0604180625q49c62696k6f0b87c271524dd1@mail.gmail.com> Message-ID: <1d85506f0604180825v6d7880eo2638f8569555018d@mail.gmail.com> > > Indeed, I wish I had known about this a year ago; it would have saved me a > lot of work. Of course it probably didn't exist a year ago... :( well, yeah. many people need "parsing-abilities", but they resort to ad-hoc parsers using struct/some ad-hoc implementation of their own. there clearly is a need for a generic, strong, and extensible parsing/building mechanism. Well, declarative is less flexible. OTOH declarative is nice in the way it > is more readable and allows more optimisations. > i don't think "less flexible" is the term. it's certainly different, but if you need something specific, you can always subclass a construct on your own. other than that, being declarative means easy to read/write/maintain/debug/upgrage (to a newer version of the library). IMHO, at least in theory Construct could have small but fast C extension > to take care of the encoding and decoding, which is the critical path. > Everything else, like the declaration part, can be python, as it is usually > done once on application startup. > well, i expected the encodings package to have a str.encode("bin") and str.decode("bin")... for some reason there's no such codec. it's a pity. This is a very nice library indeed. 
But the number one feature that I need > in something like this would be to use C. That's because of my application > specific requirements, where i have observed that reapeatedly using > struct.pack/unpack and reading bytes from a stream represents a > considerable CPU overhead, whereas the same thing in C would be ultra fast. > well, you must have the notion of a "stream", i.e., go back and forth, be able to read/write bits/bytes at arbitrary locations, etc. i thought of moving the library to pyrex, and compiling it, but the number of critical parts is very small -- basically only the Repeater class could be improved by writing it in C. i mean, most of the time is consumed at creating objects in the objects tree, etc. for example, the Struct class simply iterates over the nested construsts and parses each of the in that sequence. doing a pythonic iteration of a C-level iteration over a pythonic object is practically the same. If you agree to go down this path I might even be able to volunteer some of > my time to help, but it's not my decision. > well, mainly i'm looking for ideas. just moving it to c wouldnt be too helpful. some ideas i have: * making the stream work with bytes instead of bits, so that memory consumption would decrease 8-fold... but then parsing unaligned fields (either by size of position) is gonna be a headache * unifying the context tree with the parsing/building tree, to create less objects on the fly (but it has some issues) * using lambda functions for meta expressions, instead of eval(string) -- perhaps it's faster, but lambda is getting deprecated by python3k :( apart from that, i'm rely on inheritance in many places. if some classes are written in C and some in python, i'm not sure how it could work (can a C class inherit a pythonic one? would it be easy to extend?). and, that means users would have to compile the C sources, while now all they have to do is extract a zip file. and then i'd have to write makefiles, and maintain those also... it's getting dirty. i like the painless "unzip-and-use" installation. so if you have ideas, i'd be happy to hear those. thanks, -tomer On 4/18/06, Gustavo Carneiro <gjcarneiro at gmail.com> wrote: > > why include Construct? > > * the struct module is very nice, but very limited and non-pythonic as > > well > > * pure python (no platform/security issues) > > > > IMHO this is a drawback. More on this below. > > * lots of people need to parse and build binary data structures, it's not > > an esoteric library > > * license: public domain > > * quite a large user base for such a short time (proves the need of the > > community) > > > > Indeed, I wish I had known about this a year ago; it would have saved me > a lot of work. Of course it probably didn't exist a year ago... :( > > > > * easy to use and extend (follows the componentization pattern) > > * declarative: you don't need to write executable code for most cases > > > > Well, declarative is less flexible. OTOH declarative is nice in the way > it is more readable and allows more optimisations. > > why not: > > * the code is (very) young. stable and all, but less than a month on the > > loose. > > * new features may still be added / existing ones may be changed in a > > non-backwards-compatible manner > > > > so why am i saying this now, instead of waiting a few months for it to > > maturet? > > well, i wanted to get feedback. those of you who have seen/used the > > library, please tell me what you think: > > * is it suitable for a standard library? 
> > > * what more features would you want? > > * any changes you think are necessary? > > > > This is a very nice library indeed. But the number one feature that I > need in something like this would be to use C. That's because of my > application specific requirements, where i have observed that reapeatedly > using struct.pack/unpack and reading bytes from a stream represents a > considerable CPU overhead, whereas the same thing in C would be ultra fast. > > IMHO, at least in theory Construct could have small but fast C extension > to take care of the encoding and decoding, which is the critical path. > Everything else, like the declaration part, can be python, as it is usually > done once on application startup. > > If you agree to go down this path I might even be able to volunteer some > of my time to help, but it's not my decision. > > Best regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060418/de6843af/attachment-0001.html From pje at telecommunity.com Tue Apr 18 17:52:57 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 11:52:57 -0400 Subject: [Python-Dev] [Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py In-Reply-To: <4444A97D.40406@egenix.com> References: <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> Message-ID: <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> At 10:55 AM 4/18/2006 +0200, M.-A. Lemburg wrote: >Phillip.eby wrote: > > Author: phillip.eby > > Date: Tue Apr 18 02:59:55 2006 > > New Revision: 45510 > > > > Modified: > > python/trunk/Lib/pkgutil.py > > python/trunk/Lib/pydoc.py > > Log: > > Second phase of refactoring for runpy, pkgutil, pydoc, and setuptools > > to share common PEP 302 support code, as described here: > > > > http://mail.python.org/pipermail/python-dev/2006-April/063724.html > >Shouldn't this new module be named "pkglib" to be in line with >the naming scheme used for all the other utility modules, e.g. httplib, >imaplib, poplib, etc. ? It's not a new module; it was previously a module with only one function in it, introduced in Python 2.3. If it were a new module, I'd have inquired about a name for it first. > > pydoc now supports PEP 302 importers, by way of utility functions in > > pkgutil, such as 'walk_packages()'. It will properly document > > modules that are in zip files, and is backward compatible to Python > > 2.3 (setuptools installs for Python <2.5 will bundle it so pydoc > > doesn't break when used with eggs.) > >Are you saying that the installation of setuptools in Python 2.3 >and 2.4 will then overwrite the standard pydoc included with >those versions ? Yes. As far as I can tell, there were no API changes to pydoc during this time, so this is effectively a "hot fix". This hot-fixing doesn't apply to setuptools system packages built with --root or --single-version-externally-managed, however, so OS vendors who build packages that wrap setuptools will have to make an explicit decision whether to also apply any fixes. If they do not, an end-user can of course install setuptools in their local PYTHONPATH and the hotfix will take effect. >I bothered by the fact that installing setuptools actually changes >the standard Python installation by either overriding stdlib modules >or monkey-patching them at setuptools import time. 
Please feel free to propose alternative solutions that will still ensure that setuptools "just works" for end-users. Both this and the pydoc hotfix are "practicality beats purity" issues. >Add setuptools to the stdlib ? I'm still missing the PEP for this >along with the needed discussion touching among other things, >the change of the distutils standard "python setup.py install" >to install an egg instead of a site package. Setuptools in the stdlib simply means that people wanting to use it can import it. It does not affect programs that do not import it. It also means that "python -m easy_install" is available without having to first download ez_setup.py. As for discussion, Guido originally brought up the question here a few months ago, and it's been listed in PEP 356 for a while. I've also posted things related to the inclusion both here and in distutils-sig. From pje at telecommunity.com Tue Apr 18 17:59:17 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 11:59:17 -0400 Subject: [Python-Dev] PEP 343: confusing context terminology In-Reply-To: <20060418140419.GA6328@localhost.localdomain> Message-ID: <5.1.1.6.0.20060418115622.0423db10@mail.telecommunity.com> At 10:04 AM 4/18/2006 -0400, A.M. Kuchling wrote: >PEP 343 says: > > This PEP proposes that the protocol used by the with statement be > known as the "context management protocol", and that objects that > implement that protocol be known as "context managers". The term > "context" then encompasses all objects with a __context__() method > that returns a context manager (this means that all context managers > are contexts, but not all contexts are context managers). > >I read this paragraph as meaning: > > context = 'thing with __context__()' > context manager = 'thing returned by __context__()' > >So in a 'with' statement: > >with A as B: > ... > >A is a context; the context manager is internal and unavailable. I believe this is backwards. But then again, I would think that, since I wrote the documentation that takes the opposite tack. :) Partly, that's because I agree with your intuition about long-livedness of "manager", but mostly it's because the PEP is where the "contextmanager" decorator originated, and because it doesn't make any sense to call the method __context__() if it returns a context *manager*. :) > 1) Am I misunderstanding the PEP? Is > 'context = 'thing with __context__()' really the PEP's > meaning? > > 2) If the answer to 1) is 'yes', can we change the PEP? > > My intuition is that a 'manager' keeps track of multiple > entities being managed; to have a single 'context' produce > multiple 'context managers' over time seems exactly > backwards. > > I think I understand the logic behind the name -- that the > context manager *manages* the entry to and exit from the > block -- but think 'Manager' is the wrong term for this > because the object is really encapsulating a > procedure/algorithm, not managing a set of objects. The > closest GoF pattern name might be 'Strategy', which is > still a bit of a stretch. I think we should correct the PEP. From pje at telecommunity.com Tue Apr 18 18:34:31 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 12:34:31 -0400 Subject: [Python-Dev] Place for setuptools manuals and source for .exe files? Message-ID: <5.1.1.6.0.20060418122008.036fae70@mail.telecommunity.com> Now that setuptools is in the trunk, I need to also add its manuals and the source for its .exe files. These currently live only in the sandbox. 
First, the C source, used to create the 'gui.exe' and 'cli.exe' launchers that setuptools uses to create script wrappers on Windows. The source currently lives at sandbox/trunk/setuptools/launcher.c - should I create a PC/setuptools directory (analagous to PC/bdist_wininst) to put it in? Do I need to make it buildable with the MS toolchain? I currently build it with MinGW, as the code doesn't link with Python at all; it's strictly a standalone .exe, like w9xpopen. And its usage is similar to bdist_wininst's .exe header, in that it's used by setuptools as data during runtime, and the .exe is directly kept in revision control. So it doesn't need to be a part of the normal build process, but would only be rebuilt if there are any changes to the source. Second, the manuals. Setuptools currently has three: pkg_resources.txt -- the reference manual for the pkg_resources module setuptools.txt -- a developer's guide to using setuptools to package projects for distribution EasyInstall.txt -- a user's manual for the easy_install package installation tool. It's pretty clear that pkg_resources.txt should be converted to a standard library reference chapter, since that's essentially what it is. The other two manuals, however, are roughly the setuptools versions of "distributing Python modules" and "installing Python modules", respectively. Should they be listed as separate manuals? From steven.bethard at gmail.com Tue Apr 18 18:53:59 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Tue, 18 Apr 2006 10:53:59 -0600 Subject: [Python-Dev] Updated: PEP 359: The make statement Message-ID: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> I've updated PEP 359 with a bunch of the recent suggestions. The patch is available at: http://bugs.python.org/1472459 and I've pasted the full text below. I've tried to be more explicit about the goals -- the make statement is mostly syntactic sugar for:: class <name> <tuple>: __metaclass__ = <callable> <block> so that you don't have to lie to your readers when you're not actually creating a class. I've also added some new examples and expanded the discussion of the old ones to give the statement some better motivation. And I've expanded the Open Issues discussions to consider a few alternate keywords and to indicate some of the difficulties in allowing a ``__make_dict__`` attribute for customizing the dict in which the block is executed. PEP: 359 Title: The "make" Statement Version: $Revision: 45366 $ Last-Modified: $Date: 2006-04-13 07:36:24 -0600 (Thu, 13 Apr 2006) $ Author: Steven Bethard <steven.bethard at gmail.com> Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 05-Apr-2006 Python-Version: 2.6 Post-History: 05-Apr-2006, 06-Apr-2006, 13-Apr-2006 Abstract ======== This PEP proposes a generalization of the class-declaration syntax, the ``make`` statement. The proposed syntax and semantics parallel the syntax for class definition, and so:: make <callable> <name> <tuple>: <block> is translated into the assignment:: <name> = <callable>("<name>", <tuple>, <namespace>) where ``<namespace>`` is the dict created by executing ``<block>``. This is mostly syntactic sugar for:: class <name> <tuple>: __metaclass__ = <callable> <block> and is intended to help more clearly express the intent of the statement when something other than a class is being created. 
Of course, other syntax for such a statement is possible, but it is hoped that by keeping a strong parallel to the class statement, an understanding of how classes and metaclasses work will translate into an understanding of how the make-statement works as well. The PEP is based on a suggestion [1]_ from Michele Simionato on the python-dev list. Motivation ========== Class statements provide two nice facilities to Python: (1) They execute a block of statements and provide the resulting bindings as a dict to the metaclass. (2) They encourage DRY (don't repeat yourself) by allowing the class being created to know the name it is being assigned. Thus in a simple class statement like:: class C(object): x = 1 def foo(self): return 'bar' the metaclass (``type``) gets called with something like:: C = type('C', (object,), {'x':1, 'foo':<function foo at ...>}) The class statement is just syntactic sugar for the above assignment statement, but clearly a very useful sort of syntactic sugar. It avoids not only the repetition of ``C``, but also simplifies the creation of the dict by allowing it to be expressed as a series of statements. Historically, type instances (a.k.a. class objects) have been the only objects blessed with this sort of syntactic support. The make statement aims to extend this support to other sorts of objects where such syntax would also be useful. Example: simple namespaces -------------------------- Let's say I have some attributes in a module that I access like:: mod.thematic_roletype mod.opinion_roletype mod.text_format mod.html_format and since "Namespaces are one honking great idea", I'd like to be able to access these attributes instead as:: mod.roletypes.thematic mod.roletypes.opinion mod.format.text mod.format.html I currently have two main options: (1) Turn the module into a package, turn ``roletypes`` and ``format`` into submodules, and move the attributes to the submodules. (2) Create ``roletypes`` and ``format`` classes, and move the attributes to the classes. The former is a fair chunk of refactoring work, and produces two tiny modules without much content. The latter keeps the attributes local to the module, but creates classes when there is no intention of ever creating instances of those classes. In situations like this, it would be nice to simply be able to declare a "namespace" to hold the few attributes. With the new make statement, I could introduce my new namespaces with something like:: make namespace roletypes: thematic = ... opinion = ... make namespace format: text = ... html = ... and keep my attributes local to the module without making classes that are never intended to be instantiated. One definition of namespace that would make this work is:: class namespace(object): def __init__(self, name, args, kwargs): self.__dict__.update(kwargs) Given this definition, at the end of the make-statements above, ``roletypes`` and ``format`` would be namespace instances. Example: gui objects -------------------- In gui toolkits, objects like frames and panels are often associated with attributes and functions. With the make-statement, code that looks something like:: root = Tkinter.Tk() frame = Tkinter.Frame(root) frame.pack() def say_hi(): print "hi there, everyone!" 
hi_there = Tkinter.Button(frame, text="Hello", command=say_hi) hi_there.pack(side=Tkinter.LEFT) root.mainloop() could be rewritten to group the the Button's function with its declaration:: root = Tkinter.Tk() frame = Tkinter.Frame(root) frame.pack() make Tkinter.Button hi_there(frame): text = "Hello" def command(): print "hi there, everyone!" hi_there.pack(side=Tkinter.LEFT) root.mainloop() Example: custom descriptors --------------------------- Since descriptors are used to customize access to an attribute, it's often useful to know the name of that attribute. Current Python doesn't give an easy way to find this name and so a lot of custom descriptors, like Ian Bicking's setonce descriptor [2]_, have to hack around this somehow. With the make-statement, you could create a ``setonce`` attribute like:: class A(object): ... make setonce x: "A's x attribute" ... where the ``setonce`` descriptor would be defined like:: class setonce(object): def __init__(self, name, args, kwargs): self._name = '_setonce_attr_%s' % name self.__doc__ = kwargs.pop('__doc__', None) def __get__(self, obj, type=None): if obj is None: return self return getattr(obj, self._name) def __set__(self, obj, value): try: getattr(obj, self._name) except AttributeError: setattr(obj, self._name, value) else: raise AttributeError, "Attribute already set" def set(self, obj, value): setattr(obj, self._name, value) def __delete__(self, obj): delattr(obj, self._name) Note that unlike the original implementation, the private attribute name is stable since it uses the name of the descriptor, and therefore instances of class A are pickleable. Example: property namespaces ---------------------------- Python's property type takes three function arguments and a docstring argument which, though relevant only to the property, must be declared before it and then passed as arguments to the property call, e.g.:: class C(object): ... def get_x(self): ... def set_x(self): ... x = property(get_x, set_x, "the x of the frobulation") This issue has been brought up before, and Guido [3]_ and others [4]_ have briefly mused over alternate property syntaxes to make declaring properties easier. With the make-statement, the following syntax could be supported:: class C(object): ... make block_property x: '''The x of the frobulation''' def fget(self): ... def fset(self): ... with the following definition of ``block_property``:: def block_property(name, args, block_dict): fget = block_dict.pop('fget', None) fset = block_dict.pop('fset', None) fdel = block_dict.pop('fdel', None) doc = block_dict.pop('__doc__', None) assert not block_dict return property(fget, fset, fdel, doc) Example: interfaces ------------------- Guido [5]_ and others have occasionally suggested introducing interfaces into python. Most suggestions have offered syntax along the lines of:: interface IFoo: """Foo blah blah""" def fumble(name, count): """docstring""" but since there is currently no way in Python to declare an interface in this manner, most implementations of Python interfaces use class objects instead, e.g. Zope's:: class IFoo(Interface): """Foo blah blah""" def fumble(name, count): """docstring""" With the new make-statement, these interfaces could instead be declared as:: make Interface IFoo: """Foo blah blah""" def fumble(name, count): """docstring""" which makes the intent (that this is an interface, not a class) much clearer. 
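Before the formal specification, it may help to see the sugar spelled out once in runnable form. The following sketch is not part of the PEP's text; it replays the ``namespace`` example from the Motivation, with placeholder attribute values standing in for the elided ``...``::

    class namespace(object):
        def __init__(self, name, args, kwargs):
            self.__dict__.update(kwargs)

    # What ``make namespace roletypes: ...`` would assign, per the Abstract
    # (the attribute values here are placeholders):
    roletypes = namespace('roletypes', (), {'thematic': 'placeholder',
                                            'opinion': 'placeholder'})

    # The same object built with today's class statement plus __metaclass__:
    class roletypes:
        __metaclass__ = namespace
        thematic = 'placeholder'
        opinion = 'placeholder'

    assert roletypes.thematic == 'placeholder'
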
Specification ============= Python will translate a make-statement:: make <callable> <name> <tuple>: <block> into the assignment:: <name> = <callable>("<name>", <tuple>, <namespace>) where ``<namespace>`` is the dict created by executing ``<block>``. The ``<tuple>`` expression is optional; if not present, an empty tuple will be assumed. A patch is available implementing these semantics [6]_. The make-statement introduces a new keyword, ``make``. Thus in Python 2.6, the make-statement will have to be enabled using ``from __future__ import make_statement``. Open Issues =========== Keyword ------- Does the ``make`` keyword break too much code? Originally, the make statement used the keyword ``create`` (a suggestion due to Nick Coghlan). However, investigations into the standard library [7]_ and Zope+Plone code [8]_ revealed that ``create`` would break a lot more code, so ``make`` was adopted as the keyword instead. However, there are still a few instances where ``make`` would break code. Is there a better keyword for the statement? Some possible keywords and their counts in the standard library (plus some installed packages): * make - 2 (both in tests) * create - 19 (including existing function in imaplib) * build - 83 (including existing class in distutils.command.build) * construct - 0 * produce - 0 The make-statement as an alternate constructor ---------------------------------------------- Currently, there are not many functions which have the signature ``(name, args, kwargs)``. That means that something like:: make dict params: x = 1 y = 2 is currently impossible because the dict constructor has a different signature. Does this sort of thing need to be supported? One suggestion, by Carl Banks, would be to add a ``__make__`` magic method that if found would be called instead of ``__call__``. For types, the ``__make__`` method would be identical to ``__call__`` and thus unnecessary, but dicts could support the make-statement by defining a ``__make__`` method on the dict type that looks something like:: def __make__(cls, name, args, kwargs): return cls(**kwargs) Of course, rather than adding another magic method, the dict type could just grow a classmethod something like ``dict.fromblock`` that could be used like:: make dict.fromblock params: x = 1 y = 2 So the question is, will many types want to use the make-statement as an alternate constructor? And if so, does that alternate constructor need to have the same name as the original constructor? Customizing the dict in which the block is executed --------------------------------------------------- Should users of the make-statement be able to determine in which dict object the code is executed? This would allow the make-statement to be used in situations where a normal dict object would not suffice, e.g. if order and repeated names must be allowed. 
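As a stepping stone to the fuller ``Element`` example below, here is a minimal sketch (illustrative only, not something the PEP defines) of the kind of mapping a hypothetical ``__make_dict__`` hook could return when order and repeated names matter::

    class recording_dict(dict):
        """Records every binding made in the block, in order."""
        def __init__(self):
            dict.__init__(self)
            self.bindings = []                   # (name, value) pairs, in order
        def __setitem__(self, name, value):
            self.bindings.append((name, value))  # repeated names are kept
            dict.__setitem__(self, name, value)  # normal lookup still works

A callable used with the make-statement could then consume ``bindings`` rather than the unordered mapping.
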
Allowing this sort of customization could allow XML to be written without repeating element names, and with nesting of make-statements corresponding to nesting of XML elements:: make Element html: make Element body: text('before first h1') make Element h1: attrib(style='first') text('first h1') tail('after first h1') make Element h1: attrib(style='second') text('second h1') tail('after second h1') If the make-statement tried to get the dict in which to execute its block by calling the callable's ``__make_dict__`` method, the following code would allow the make-statement to be used as above:: class Element(object): class __make_dict__(dict): def __init__(self, *args, **kwargs): self._super = super(Element.__make_dict__, self) self._super.__init__(*args, **kwargs) self.elements = [] self.text = None self.tail = None self.attrib = {} def __getitem__(self, name): try: return self._super.__getitem__(name) except KeyError: if name in ['attrib', 'text', 'tail']: return getattr(self, 'set_%s' % name) else: return globals()[name] def __setitem__(self, name, value): self._super.__setitem__(name, value) self.elements.append(value) def set_attrib(self, **kwargs): self.attrib = kwargs def set_text(self, text): self.text = text def set_tail(self, text): self.tail = text def __new__(cls, name, args, edict): get_element = etree.ElementTree.Element result = get_element(name, attrib=edict.attrib) result.text = edict.text result.tail = edict.tail for element in edict.elements: result.append(element) return result Note, however, that the code to support this is somewhat fragile -- it has to magically populate the namespace with ``attrib``, ``text`` and ``tail``, and it assumes that every name binding inside the make statement body is creating an Element. As it stands, this code would break with the introduction of a simple for-loop to any one of the make-statement bodies, because the for-loop would bind a name to a non-Element object. This could be worked around by adding some sort of isinstance check or attribute examination, but this still results in a somewhat fragile solution. It has also been pointed out that the with-statement can provide equivalent nesting with a much more explicit syntax:: with Element('html') as html: with Element('body') as body: body.text = 'before first h1' with Element('h1', style='first') as h1: h1.text = 'first h1' h1.tail = 'after first h1' with Element('h1', style='second') as h1: h1.text = 'second h1' h1.tail = 'after second h1' And if the repetition of the element names here is too much of a DRY violoation, it is also possible to eliminate all as-clauses except for the first by adding a few methods to Element. [9]_ So are there real use-cases for executing the block in a dict of a different type? And if so, should the make-statement be extended to support them? Optional Extensions =================== Remove the make keyword ------------------------- It might be possible to remove the make keyword so that such statements would begin with the callable being called, e.g.:: namespace ns: badger = 42 def spam(): ... interface C(...): ... However, almost all other Python statements begin with a keyword, and removing the keyword would make it harder to look up this construct in the documentation. Additionally, this would add some complexity in the grammar and so far I (Steven Bethard) have not been able to implement the feature without the keyword. 
Removing __metaclass__ in Python 3000 ------------------------------------- As a side-effect of its generality, the make-statement mostly eliminates the need for the ``__metaclass__`` attribute in class objects. Thus in Python 3000, instead of:: class <name> <bases-tuple>: __metaclass__ = <metaclass> <block> metaclasses could be supported by using the metaclass as the callable in a make-statement:: make <metaclass> <name> <bases-tuple>: <block> Removing the ``__metaclass__`` hook would simplify the BUILD_CLASS opcode a bit. Removing class statements in Python 3000 ---------------------------------------- In the most extreme application of make-statements, the class statement itself could be deprecated in favor of ``make type`` statements. References ========== .. [1] Michele Simionato's original suggestion (http://mail.python.org/pipermail/python-dev/2005-October/057435.html) .. [2] Ian Bicking's setonce descriptor (http://blog.ianbicking.org/easy-readonly-attributes.html) .. [3] Guido ponders property syntax (http://mail.python.org/pipermail/python-dev/2005-October/057404.html) .. [4] Namespace-based property recipe (http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418) .. [5] Python interfaces (http://www.artima.com/weblogs/viewpost.jsp?thread=86641) .. [6] Make Statement patch (http://ucsu.colorado.edu/~bethard/py/make_statement.patch) .. [7] Instances of create in the stdlib (http://mail.python.org/pipermail/python-list/2006-April/335159.html) .. [8] Instances of create in Zope+Plone (http://mail.python.org/pipermail/python-list/2006-April/335284.html) .. [9] Eliminate as-clauses in with-statement XML (http://mail.python.org/pipermail/python-list/2006-April/336774.html) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From mal at egenix.com Tue Apr 18 19:15:19 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 18 Apr 2006 19:15:19 +0200 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> References: <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> Message-ID: <44451EA7.4020907@egenix.com> Phillip J. Eby wrote: > At 10:55 AM 4/18/2006 +0200, M.-A. Lemburg wrote: >> Phillip.eby wrote: >> > Author: phillip.eby >> > Date: Tue Apr 18 02:59:55 2006 >> > New Revision: 45510 >> > >> > Modified: >> > python/trunk/Lib/pkgutil.py >> > python/trunk/Lib/pydoc.py >> > Log: >> > Second phase of refactoring for runpy, pkgutil, pydoc, and setuptools >> > to share common PEP 302 support code, as described here: >> > >> > http://mail.python.org/pipermail/python-dev/2006-April/063724.html >> >> Shouldn't this new module be named "pkglib" to be in line with >> the naming scheme used for all the other utility modules, e.g. httplib, >> imaplib, poplib, etc. ? > > It's not a new module; it was previously a module with only one function > in it, introduced in Python 2.3. If it were a new module, I'd have > inquired about a name for it first. Didn't realize that. Too bad the time machine didn't work on this one :-/ >> > pydoc now supports PEP 302 importers, by way of utility functions in >> > pkgutil, such as 'walk_packages()'. 
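As an aside for readers who have not seen the new helper yet, a minimal sketch of driving walk_packages(): it yields (importer, name, ispkg) tuples for everything importable, going through PEP 302 importers so packages inside zip files are included too:

    import pkgutil

    # Walk everything importable on sys.path; packages are imported so
    # their __path__ can be descended into, and PEP 302 importers are
    # used throughout, so zip/egg contents show up as well.
    for importer, name, ispkg in pkgutil.walk_packages():
        if ispkg:
            print name
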
It will properly document >> > modules that are in zip files, and is backward compatible to Python >> > 2.3 (setuptools installs for Python <2.5 will bundle it so pydoc >> > doesn't break when used with eggs.) >> >> Are you saying that the installation of setuptools in Python 2.3 >> and 2.4 will then overwrite the standard pydoc included with >> those versions ? > > Yes. As far as I can tell, there were no API changes to pydoc during > this time, so this is effectively a "hot fix". Why should a 3rd party extension be hot-fixing the standard Python distribution ? > This hot-fixing doesn't apply to setuptools system packages built with > --root or --single-version-externally-managed, however, so OS vendors > who build packages that wrap setuptools will have to make an explicit > decision whether to also apply any fixes. If they do not, an end-user > can of course install setuptools in their local PYTHONPATH and the > hotfix will take effect. What does setuptools have to do with pydoc ? Why should a user installing setuptools assume that some arbitrary stdlib modules get (effectively) replaced by installing setuptools ? If you want to provide a hot fix for Python 2.3 and 2.4, you should make this a separate install, so that users are aware that their Python distribution is about to get modified in ways that have nothing to do with setuptools. >> I bothered by the fact that installing setuptools actually changes >> the standard Python installation by either overriding stdlib modules >> or monkey-patching them at setuptools import time. > > Please feel free to propose alternative solutions that will still ensure > that setuptools "just works" for end-users. Both this and the pydoc > hotfix are "practicality beats purity" issues. Not really. I'd consider them design flaws. distutils is built to be extended without having to monkey-patch it, e.g. you can easily override commands with your own variants by supplying them via the cmdclass and distclass keyword arguments to setup(). By monkey patching distutils during import of setuptools, you effectively *change* distutils at run-time and not only for the application space that you implement in setuptools, but for all the rest of the application. If an application wants to use setuptools for e.g. plugin management, then standard distutils features will get replaced by setuptools implementations which are not compatible to the standard distutils commands, effectively making it impossible to access the original versions. Monkey patching is only a last resort in case nothing else works. In this case, it's not needed, since distutils provides the interfaces needed to extend its command classes via the setup() call. See e.g. mxSetup.py in the eGenix extensions for an example of how effective the distutils design can be put to use without having to introduce lots of unwanted side-effects. >> Add setuptools to the stdlib ? I'm still missing the PEP for this >> along with the needed discussion touching among other things, >> the change of the distutils standard "python setup.py install" >> to install an egg instead of a site package. > > Setuptools in the stdlib simply means that people wanting to use it can > import it. It does not affect programs that do not import it. It also > means that "python -m easy_install" is available without having to first > download ez_setup.py. Doesn't really seem like much of an argument for the addition... the above is true for any 3rd party module. 
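To make the cmdclass point above concrete, here is a minimal sketch of extending a distutils command through the documented setup() hook, with no monkey-patching; the command subclass and project metadata are invented for illustration:

    from distutils.core import setup
    from distutils.command.install import install

    class verbose_install(install):
        def run(self):
            install.run(self)                       # standard behaviour first
            print "modules installed to", self.install_lib

    setup(name="example",
          version="1.0",
          py_modules=["example"],
          cmdclass={"install": verbose_install})
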
> As for discussion, Guido originally brought up the question here a few > months ago, and it's been listed in PEP 356 for a while. I've also > posted things related to the inclusion both here and in distutils-sig. I know, but the discussions haven't really helped much in getting the setuptools design compatible with standard distutils. Unless that's being put in place, I'm -1 on the addition, due to the invasive nature of setuptools and its various side-effects on systems management. Note that it only takes one single module in an application doing "import setuptools" for whatever reason, to have it modify distutils at run-time. The PEP is needed to document these effects, discuss their merits/drawbacks and find solutions to the problems prior to creating a status quo that's hard to modify after the fact due to the large install base of the Python distribution. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 18 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From mal at egenix.com Tue Apr 18 19:40:15 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 18 Apr 2006 19:40:15 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <200604061108.30416.anthony@interlink.com.au> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> Message-ID: <4445247F.1010903@egenix.com> Anthony Baxter wrote: > On Thursday 06 April 2006 04:10, Benji York wrote: >> On a related note: it might be nice to put a pystone run in the >> buildbot so it'd be easier to compare pystones across different >> releases, different architectures, and between particular changes >> to the code. (That's assuming that the machines are otherwise idle, >> though.) -- > -1. > > A bad benchmark (which pystone is) is much worse than no benchmark. I could contribute pybench to the Tools/ directory if that makes a difference: pybench -- The Python Benchmark Suite Extendable suite of of low-level benchmarks for measuring the performance of the Python implementation (interpreter, compiler or VM). ________________________________________________________________________ WHAT IS IT ?: pybench is a collection of tests that provides a standardized way to measure the performance of Python implementations. It takes a very close look at different aspects of Python programs and let's you decide which factors are more important to you than others, rather than wrapping everything up in one number, like the other performance tests do (e.g. pystone which is included in the Python Standard Library). pybench has been used in the past by several Python developers to track down performance bottlenecks or to demonstrate the impact of optimizations and new features in Python. There's currently no documentation and no distutils support in pybench; that'll go into one of the next releases. For now, please read the source code. The command line interface for pybench is the file pybench.py. Run this script with option '--help' to get a listing of the possible options. Without options, pybench will simply execute the benchmark and then print out a report to stdout. Here's some sample output: """ Tests: per run per oper. 
overhead ---------------------------------------------------------------------- BuiltinFunctionCalls: 131.50 ms 1.03 us 0.50 ms BuiltinMethodLookup: 195.85 ms 0.37 us 1.00 ms CompareFloats: 126.00 ms 0.28 us 1.00 ms CompareFloatsIntegers: 201.05 ms 0.45 us 0.50 ms CompareIntegers: 192.05 ms 0.21 us 2.00 ms CompareInternedStrings: 117.65 ms 0.24 us 3.50 ms ... TryExcept: 289.75 ms 0.19 us 3.00 ms TryRaiseExcept: 179.05 ms 11.94 us 1.00 ms TupleSlicing: 159.75 ms 1.52 us 0.50 ms UnicodeMappings: 171.85 ms 9.55 us 1.00 ms UnicodePredicates: 152.05 ms 0.68 us 4.00 ms UnicodeProperties: 203.00 ms 1.01 us 4.00 ms UnicodeSlicing: 190.10 ms 1.09 us 2.00 ms ---------------------------------------------------------------------- Average round time: 10965.00 ms """ This is the current version: http://www.lemburg.com/files/python/pybench-1.2.zip -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 18 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From pje at telecommunity.com Tue Apr 18 19:55:08 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 13:55:08 -0400 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <44451EA7.4020907@egenix.com> References: <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >Why should a 3rd party extension be hot-fixing the standard >Python distribution ? Because setuptools installs things in zip files, and older versions of pydoc don't work for packages zip files. >If you want to provide a hot fix for Python 2.3 and 2.4, you >should make this a separate install, so that users are aware >that their Python distribution is about to get modified in ways >that have nothing to do with setuptools. Their Python distribution is not "modified" -- new modules are merely placed on sys.path ahead of the stdlib. (Which, I might add, is a perfectly normal process in Python -- nothing stops users from installing their own version of pydoc or any other module via PYTHONPATH.) Note also that uninstalling setuptools by removing the .egg file or directory will effectively remove the hot fix, since the modules live in the .egg, not in the stdlib. >If an application wants to use setuptools for e.g. plugin >management, then standard distutils features will get >replaced by setuptools implementations which are not compatible >to the standard distutils commands, effectively making it >impossible to access the original versions. Please do a little research before you spread FUD. The 'pkg_resources' module is used for runtime plugin management, and it does not monkeypatch anything. >Monkey patching is only a last resort in case nothing >else works. In this case, it's not needed, since distutils >provides the interfaces needed to extend its command classes >via the setup() call. 
The monkeypatching is there so that the easy_install command can build eggs for packages that use the distutils. It's also there so that other distutils extensions that monkeypatch distutils (and there are a few of them out there) will have a better chance of working with setuptools. I originally took a minimally-invasive approach to setuptools-distutils interaction, but it was simply not possible to provide a high-degree of backward and "sideward" compatibility without it. In fact, I seem to recall finding some behaviors in some versions of distutils that can't be modified without monkeypatching, although the details are escaping me at this moment. > > As for discussion, Guido originally brought up the question here a few > > months ago, and it's been listed in PEP 356 for a while. I've also > > posted things related to the inclusion both here and in distutils-sig. > >I know, but the discussions haven't really helped much in >getting the setuptools design compatible with standard >distutils. That's because the job was already done. :) The setuptools design bends over backwards to be compatible with Python 2.3 and 2.4 versions of distutils, not to mention py2exe, Pyrex, and other distutils extensions, along with the quirky uses of distutils that exist in dozens of distributed Python packages. However, I think you and I may perhaps have different definitions of "compatibility". Mine is that things "just work" and users don't have to do anything special. For that definition, setuptools is extremely compatible with the standard distutils. In many situations it's more compatible than the distutils themselves, in that more things "just work". ;) From jeremy at alum.mit.edu Tue Apr 18 19:58:43 2006 From: jeremy at alum.mit.edu (Jeremy Hylton) Date: Tue, 18 Apr 2006 13:58:43 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4445247F.1010903@egenix.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> Message-ID: <e8bf7a530604181058j53c19b57v50cccb49f79df2c3@mail.gmail.com> On 4/18/06, M.-A. Lemburg <mal at egenix.com> wrote: > Anthony Baxter wrote: > > On Thursday 06 April 2006 04:10, Benji York wrote: > >> On a related note: it might be nice to put a pystone run in the > >> buildbot so it'd be easier to compare pystones across different > >> releases, different architectures, and between particular changes > >> to the code. (That's assuming that the machines are otherwise idle, > >> though.) -- > > -1. > > > > A bad benchmark (which pystone is) is much worse than no benchmark. > > I could contribute pybench to the Tools/ directory if that > makes a difference: I'd find that helpful. Jeremy > > pybench -- The Python Benchmark Suite > > Extendable suite of of low-level benchmarks for measuring > the performance of the Python implementation > (interpreter, compiler or VM). > > ________________________________________________________________________ > > WHAT IS IT ?: > > pybench is a collection of tests that provides a standardized way > to measure the performance of Python implementations. It takes a > very close look at different aspects of Python programs and let's > you decide which factors are more important to you than others, > rather than wrapping everything up in one number, like the other > performance tests do (e.g. pystone which is included in the Python > Standard Library). 
> > pybench has been used in the past by several Python developers to > track down performance bottlenecks or to demonstrate the impact > of optimizations and new features in Python. > > There's currently no documentation and no distutils support in > pybench; that'll go into one of the next releases. For now, > please read the source code. The command line interface for pybench > is the file pybench.py. Run this script with option '--help' > to get a listing of the possible options. Without options, > pybench will simply execute the benchmark and then print out > a report to stdout. Here's some sample output: > """ > Tests: per run per oper. overhead > ---------------------------------------------------------------------- > BuiltinFunctionCalls: 131.50 ms 1.03 us 0.50 ms > BuiltinMethodLookup: 195.85 ms 0.37 us 1.00 ms > CompareFloats: 126.00 ms 0.28 us 1.00 ms > CompareFloatsIntegers: 201.05 ms 0.45 us 0.50 ms > CompareIntegers: 192.05 ms 0.21 us 2.00 ms > CompareInternedStrings: 117.65 ms 0.24 us 3.50 ms > ... > TryExcept: 289.75 ms 0.19 us 3.00 ms > TryRaiseExcept: 179.05 ms 11.94 us 1.00 ms > TupleSlicing: 159.75 ms 1.52 us 0.50 ms > UnicodeMappings: 171.85 ms 9.55 us 1.00 ms > UnicodePredicates: 152.05 ms 0.68 us 4.00 ms > UnicodeProperties: 203.00 ms 1.01 us 4.00 ms > UnicodeSlicing: 190.10 ms 1.09 us 2.00 ms > ---------------------------------------------------------------------- > Average round time: 10965.00 ms > """ > > This is the current version: > > http://www.lemburg.com/files/python/pybench-1.2.zip > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Apr 18 2006) > >>> Python/Zope Consulting and Support ... http://www.egenix.com/ > >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > > ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jeremy%40alum.mit.edu > From rhettinger at ewtllc.com Tue Apr 18 20:05:06 2006 From: rhettinger at ewtllc.com (Raymond Hettinger) Date: Tue, 18 Apr 2006 11:05:06 -0700 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4445247F.1010903@egenix.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> Message-ID: <44452A52.7070303@ewtllc.com> M.-A. Lemburg wrote: >I could contribute pybench to the Tools/ directory if that >makes a difference: > > > +1 on adding pybench. And we already have parrotbench in the sandbox. From tim.peters at gmail.com Tue Apr 18 20:16:48 2006 From: tim.peters at gmail.com (Tim Peters) Date: Tue, 18 Apr 2006 14:16:48 -0400 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <4445247F.1010903@egenix.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> Message-ID: <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> [M.-A. Lemburg] > I could contribute pybench to the Tools/ directory if that > makes a difference: +1. It's frequently used and nice work. 
Besides, then we could easily fiddle the tests to make Python look better ;-) From brett at python.org Tue Apr 18 20:19:09 2006 From: brett at python.org (Brett Cannon) Date: Tue, 18 Apr 2006 11:19:09 -0700 Subject: [Python-Dev] possible fix for recursive __call__ segfault In-Reply-To: <20060418124109.GA8836@code0.codespeak.net> References: <bbaeab100604171734x4bbe017ci94dce673b8d54585@mail.gmail.com> <20060418124109.GA8836@code0.codespeak.net> Message-ID: <bbaeab100604181119k220b02e8i65861231a35fd725@mail.gmail.com> On 4/18/06, Armin Rigo <arigo at tunes.org> wrote: > Hi Brett, > > On Mon, Apr 17, 2006 at 05:34:16PM -0700, Brett Cannon wrote: > > + if (meth == self) { > > + PyErr_SetString(PyExc_RuntimeError, > > + "recursive __call__ definition"); > > + return NULL; > > + } > > This is not the proper way, as it can be worked around with a pair of > objects whose __call__ point to each other. Yeah, I know. It was just a quick hack that at least helped me identify where the problem was. I didn't really expect for it to stick around. > The solution is to use the > counter of Py_{Enter,Leave}RecursiveCall(), as was done for old-style > classes (see classobject.c). > OK. Makes sense. > By the way, this is a known problem: the example you show is > Lib/test/crashers/infinite_rec_3.py, and the four other > infinite_rec_*.py are all slightly more subtle ways to trigger a similar > infinite loop in C. They point to the SF bug report at > http://python.org/sf/1202533, where we discuss the problem in general. > Basically, someone should try to drop many > Py_{Enter,Leave}RecursiveCall() pairs in the source until all the > currently-known bugs go away, and then measure if this has a noticeable > performance impact. > OK, good to know. I will have to fix this eventually for sandboxing reasons for my dissertation so I will get to it eventually. -Brett From brett at python.org Tue Apr 18 20:30:24 2006 From: brett at python.org (Brett Cannon) Date: Tue, 18 Apr 2006 11:30:24 -0700 Subject: [Python-Dev] Updated: PEP 359: The make statement In-Reply-To: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> References: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> Message-ID: <bbaeab100604181130y259ea4ebic9208d4ac216675@mail.gmail.com> Definitely an intriguing idea! I am +0 just because I don't know how needed it is, but it is definitely cool. As for your open issues, ditching __metaclass__ is fine if this goes in, but I would keep 'class' around as simplified syntactic sugar for the common case. -Brett On 4/18/06, Steven Bethard <steven.bethard at gmail.com> wrote: > I've updated PEP 359 with a bunch of the recent suggestions. The > patch is available at: > http://bugs.python.org/1472459 > and I've pasted the full text below. > > I've tried to be more explicit about the goals -- the make statement > is mostly syntactic sugar for:: > > class <name> <tuple>: > __metaclass__ = <callable> > <block> > > so that you don't have to lie to your readers when you're not actually > creating a class. I've also added some new examples and expanded the > discussion of the old ones to give the statement some better > motivation. And I've expanded the Open Issues discussions to consider > a few alternate keywords and to indicate some of the difficulties in > allowing a ``__make_dict__`` attribute for customizing the dict in > which the block is executed. 
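The "pair of objects" workaround Armin describes would look something like the sketch below (in the spirit of the Lib/test/crashers/ files, though not copied from them); the final call is left commented out because it would take down an unpatched interpreter:

    class A(object):
        pass
    class B(object):
        pass
    a, b = A(), B()
    A.__call__ = b     # calling an A instance ends up calling b
    B.__call__ = a     # calling a B instance ends up calling a
    # a()              # recurses at the C level until the stack is gone;
    #                  # a simple ``meth == self`` test never fires here
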
> > > > PEP: 359 > Title: The "make" Statement > Version: $Revision: 45366 $ > Last-Modified: $Date: 2006-04-13 07:36:24 -0600 (Thu, 13 Apr 2006) $ > Author: Steven Bethard <steven.bethard at gmail.com> > Status: Draft > Type: Standards Track > Content-Type: text/x-rst > Created: 05-Apr-2006 > Python-Version: 2.6 > Post-History: 05-Apr-2006, 06-Apr-2006, 13-Apr-2006 > > > Abstract > ======== > > This PEP proposes a generalization of the class-declaration syntax, > the ``make`` statement. The proposed syntax and semantics parallel > the syntax for class definition, and so:: > > make <callable> <name> <tuple>: > <block> > > is translated into the assignment:: > > <name> = <callable>("<name>", <tuple>, <namespace>) > > where ``<namespace>`` is the dict created by executing ``<block>``. > This is mostly syntactic sugar for:: > > class <name> <tuple>: > __metaclass__ = <callable> > <block> > > and is intended to help more clearly express the intent of the > statement when something other than a class is being created. Of > course, other syntax for such a statement is possible, but it is > hoped that by keeping a strong parallel to the class statement, an > understanding of how classes and metaclasses work will translate into > an understanding of how the make-statement works as well. > > The PEP is based on a suggestion [1]_ from Michele Simionato on the > python-dev list. > > > Motivation > ========== > > Class statements provide two nice facilities to Python: > > (1) They execute a block of statements and provide the resulting > bindings as a dict to the metaclass. > > (2) They encourage DRY (don't repeat yourself) by allowing the class > being created to know the name it is being assigned. > > Thus in a simple class statement like:: > > class C(object): > x = 1 > def foo(self): > return 'bar' > > the metaclass (``type``) gets called with something like:: > > C = type('C', (object,), {'x':1, 'foo':<function foo at ...>}) > > The class statement is just syntactic sugar for the above assignment > statement, but clearly a very useful sort of syntactic sugar. It > avoids not only the repetition of ``C``, but also simplifies the > creation of the dict by allowing it to be expressed as a series of > statements. > > Historically, type instances (a.k.a. class objects) have been the > only objects blessed with this sort of syntactic support. The make > statement aims to extend this support to other sorts of objects where > such syntax would also be useful. > > > Example: simple namespaces > -------------------------- > > Let's say I have some attributes in a module that I access like:: > > mod.thematic_roletype > mod.opinion_roletype > > mod.text_format > mod.html_format > > and since "Namespaces are one honking great idea", I'd like to be > able to access these attributes instead as:: > > mod.roletypes.thematic > mod.roletypes.opinion > > mod.format.text > mod.format.html > > I currently have two main options: > > (1) Turn the module into a package, turn ``roletypes`` and > ``format`` into submodules, and move the attributes to > the submodules. > > (2) Create ``roletypes`` and ``format`` classes, and move the > attributes to the classes. > > The former is a fair chunk of refactoring work, and produces two > tiny modules without much content. The latter keeps the attributes > local to the module, but creates classes when there is no intention > of ever creating instances of those classes. 
> > In situations like this, it would be nice to simply be able to > declare a "namespace" to hold the few attributes. With the new make > statement, I could introduce my new namespaces with something like:: > > make namespace roletypes: > thematic = ... > opinion = ... > > make namespace format: > text = ... > html = ... > > and keep my attributes local to the module without making classes > that are never intended to be instantiated. One definition of > namespace that would make this work is:: > > class namespace(object): > def __init__(self, name, args, kwargs): > self.__dict__.update(kwargs) > > Given this definition, at the end of the make-statements above, > ``roletypes`` and ``format`` would be namespace instances. > > > Example: gui objects > -------------------- > > In gui toolkits, objects like frames and panels are often associated > with attributes and functions. With the make-statement, code that > looks something like:: > > root = Tkinter.Tk() > frame = Tkinter.Frame(root) > frame.pack() > def say_hi(): > print "hi there, everyone!" > hi_there = Tkinter.Button(frame, text="Hello", command=say_hi) > hi_there.pack(side=Tkinter.LEFT) > root.mainloop() > > could be rewritten to group the the Button's function with its > declaration:: > > root = Tkinter.Tk() > frame = Tkinter.Frame(root) > frame.pack() > make Tkinter.Button hi_there(frame): > text = "Hello" > def command(): > print "hi there, everyone!" > hi_there.pack(side=Tkinter.LEFT) > root.mainloop() > > > Example: custom descriptors > --------------------------- > > Since descriptors are used to customize access to an attribute, it's > often useful to know the name of that attribute. Current Python > doesn't give an easy way to find this name and so a lot of custom > descriptors, like Ian Bicking's setonce descriptor [2]_, have to > hack around this somehow. With the make-statement, you could create > a ``setonce`` attribute like:: > > class A(object): > ... > make setonce x: > "A's x attribute" > ... > > where the ``setonce`` descriptor would be defined like:: > > class setonce(object): > > def __init__(self, name, args, kwargs): > self._name = '_setonce_attr_%s' % name > self.__doc__ = kwargs.pop('__doc__', None) > > def __get__(self, obj, type=None): > if obj is None: > return self > return getattr(obj, self._name) > > def __set__(self, obj, value): > try: > getattr(obj, self._name) > except AttributeError: > setattr(obj, self._name, value) > else: > raise AttributeError, "Attribute already set" > > def set(self, obj, value): > setattr(obj, self._name, value) > > def __delete__(self, obj): > delattr(obj, self._name) > > Note that unlike the original implementation, the private attribute > name is stable since it uses the name of the descriptor, and > therefore instances of class A are pickleable. > > > > Example: property namespaces > ---------------------------- > > Python's property type takes three function arguments and a docstring > argument which, though relevant only to the property, must be > declared before it and then passed as arguments to the property call, > e.g.:: > > class C(object): > ... > def get_x(self): > ... > def set_x(self): > ... > x = property(get_x, set_x, "the x of the frobulation") > > This issue has been brought up before, and Guido [3]_ and others [4]_ > have briefly mused over alternate property syntaxes to make declaring > properties easier. With the make-statement, the following syntax > could be supported:: > > class C(object): > ... 
> make block_property x: > '''The x of the frobulation''' > def fget(self): > ... > def fset(self): > ... > > with the following definition of ``block_property``:: > > def block_property(name, args, block_dict): > fget = block_dict.pop('fget', None) > fset = block_dict.pop('fset', None) > fdel = block_dict.pop('fdel', None) > doc = block_dict.pop('__doc__', None) > assert not block_dict > return property(fget, fset, fdel, doc) > > > Example: interfaces > ------------------- > > Guido [5]_ and others have occasionally suggested introducing > interfaces into python. Most suggestions have offered syntax along > the lines of:: > > interface IFoo: > """Foo blah blah""" > > def fumble(name, count): > """docstring""" > > but since there is currently no way in Python to declare an interface > in this manner, most implementations of Python interfaces use class > objects instead, e.g. Zope's:: > > class IFoo(Interface): > """Foo blah blah""" > > def fumble(name, count): > """docstring""" > > With the new make-statement, these interfaces could instead be > declared as:: > > make Interface IFoo: > """Foo blah blah""" > > def fumble(name, count): > """docstring""" > > which makes the intent (that this is an interface, not a class) much > clearer. > > > Specification > ============= > > Python will translate a make-statement:: > > make <callable> <name> <tuple>: > <block> > > into the assignment:: > > <name> = <callable>("<name>", <tuple>, <namespace>) > > where ``<namespace>`` is the dict created by executing ``<block>``. > The ``<tuple>`` expression is optional; if not present, an empty tuple > will be assumed. > > A patch is available implementing these semantics [6]_. > > The make-statement introduces a new keyword, ``make``. Thus in Python > 2.6, the make-statement will have to be enabled using ``from > __future__ import make_statement``. > > > Open Issues > =========== > > Keyword > ------- > > Does the ``make`` keyword break too much code? Originally, the make > statement used the keyword ``create`` (a suggestion due to Nick > Coghlan). However, investigations into the standard library [7]_ and > Zope+Plone code [8]_ revealed that ``create`` would break a lot more > code, so ``make`` was adopted as the keyword instead. However, there > are still a few instances where ``make`` would break code. Is there a > better keyword for the statement? > > Some possible keywords and their counts in the standard library (plus > some installed packages): > > * make - 2 (both in tests) > * create - 19 (including existing function in imaplib) > * build - 83 (including existing class in distutils.command.build) > * construct - 0 > * produce - 0 > > > The make-statement as an alternate constructor > ---------------------------------------------- > > Currently, there are not many functions which have the signature > ``(name, args, kwargs)``. That means that something like:: > > make dict params: > x = 1 > y = 2 > > is currently impossible because the dict constructor has a different > signature. Does this sort of thing need to be supported? One > suggestion, by Carl Banks, would be to add a ``__make__`` magic method > that if found would be called instead of ``__call__``. 
For types, > the ``__make__`` method would be identical to ``__call__`` and thus > unnecessary, but dicts could support the make-statement by defining a > ``__make__`` method on the dict type that looks something like:: > > def __make__(cls, name, args, kwargs): > return cls(**kwargs) > > Of course, rather than adding another magic method, the dict type > could just grow a classmethod something like ``dict.fromblock`` that > could be used like:: > > make dict.fromblock params: > x = 1 > y = 2 > > So the question is, will many types want to use the make-statement as > an alternate constructor? And if so, does that alternate constructor > need to have the same name as the original constructor? > > > Customizing the dict in which the block is executed > --------------------------------------------------- > > Should users of the make-statement be able to determine in which dict > object the code is executed? This would allow the make-statement to > be used in situations where a normal dict object would not suffice, > e.g. if order and repeated names must be allowed. Allowing this sort > of customization could allow XML to be written without repeating > element names, and with nesting of make-statements corresponding to > nesting of XML elements:: > > make Element html: > make Element body: > text('before first h1') > make Element h1: > attrib(style='first') > text('first h1') > tail('after first h1') > make Element h1: > attrib(style='second') > text('second h1') > tail('after second h1') > > If the make-statement tried to get the dict in which to execute its > block by calling the callable's ``__make_dict__`` method, the > following code would allow the make-statement to be used as above:: > > class Element(object): > > class __make_dict__(dict): > > def __init__(self, *args, **kwargs): > self._super = super(Element.__make_dict__, self) > self._super.__init__(*args, **kwargs) > self.elements = [] > self.text = None > self.tail = None > self.attrib = {} > > def __getitem__(self, name): > try: > return self._super.__getitem__(name) > except KeyError: > if name in ['attrib', 'text', 'tail']: > return getattr(self, 'set_%s' % name) > else: > return globals()[name] > > def __setitem__(self, name, value): > self._super.__setitem__(name, value) > self.elements.append(value) > > def set_attrib(self, **kwargs): > self.attrib = kwargs > > def set_text(self, text): > self.text = text > > def set_tail(self, text): > self.tail = text > > def __new__(cls, name, args, edict): > get_element = etree.ElementTree.Element > result = get_element(name, attrib=edict.attrib) > result.text = edict.text > result.tail = edict.tail > for element in edict.elements: > result.append(element) > return result > > Note, however, that the code to support this is somewhat fragile -- > it has to magically populate the namespace with ``attrib``, ``text`` > and ``tail``, and it assumes that every name binding inside the make > statement body is creating an Element. As it stands, this code would > break with the introduction of a simple for-loop to any one of the > make-statement bodies, because the for-loop would bind a name to a > non-Element object. This could be worked around by adding some sort > of isinstance check or attribute examination, but this still results > in a somewhat fragile solution. 
> > It has also been pointed out that the with-statement can provide > equivalent nesting with a much more explicit syntax:: > > with Element('html') as html: > with Element('body') as body: > body.text = 'before first h1' > with Element('h1', style='first') as h1: > h1.text = 'first h1' > h1.tail = 'after first h1' > with Element('h1', style='second') as h1: > h1.text = 'second h1' > h1.tail = 'after second h1' > > And if the repetition of the element names here is too much of a DRY > violoation, it is also possible to eliminate all as-clauses except > for the first by adding a few methods to Element. [9]_ > > So are there real use-cases for executing the block in a dict of a > different type? And if so, should the make-statement be extended to > support them? > > > Optional Extensions > =================== > > Remove the make keyword > ------------------------- > > It might be possible to remove the make keyword so that such > statements would begin with the callable being called, e.g.:: > > namespace ns: > badger = 42 > def spam(): > ... > > interface C(...): > ... > > However, almost all other Python statements begin with a keyword, and > removing the keyword would make it harder to look up this construct in > the documentation. Additionally, this would add some complexity in > the grammar and so far I (Steven Bethard) have not been able to > implement the feature without the keyword. > > > Removing __metaclass__ in Python 3000 > ------------------------------------- > > As a side-effect of its generality, the make-statement mostly > eliminates the need for the ``__metaclass__`` attribute in class > objects. Thus in Python 3000, instead of:: > > class <name> <bases-tuple>: > __metaclass__ = <metaclass> > <block> > > metaclasses could be supported by using the metaclass as the callable > in a make-statement:: > > make <metaclass> <name> <bases-tuple>: > <block> > > Removing the ``__metaclass__`` hook would simplify the BUILD_CLASS > opcode a bit. > > > Removing class statements in Python 3000 > ---------------------------------------- > > In the most extreme application of make-statements, the class > statement itself could be deprecated in favor of ``make type`` > statements. > > > References > ========== > > .. [1] Michele Simionato's original suggestion > (http://mail.python.org/pipermail/python-dev/2005-October/057435.html) > > .. [2] Ian Bicking's setonce descriptor > (http://blog.ianbicking.org/easy-readonly-attributes.html) > > .. [3] Guido ponders property syntax > (http://mail.python.org/pipermail/python-dev/2005-October/057404.html) > > .. [4] Namespace-based property recipe > (http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/442418) > > .. [5] Python interfaces > (http://www.artima.com/weblogs/viewpost.jsp?thread=86641) > > .. [6] Make Statement patch > (http://ucsu.colorado.edu/~bethard/py/make_statement.patch) > > .. [7] Instances of create in the stdlib > (http://mail.python.org/pipermail/python-list/2006-April/335159.html) > > .. [8] Instances of create in Zope+Plone > (http://mail.python.org/pipermail/python-list/2006-April/335284.html) > > .. [9] Eliminate as-clauses in with-statement XML > (http://mail.python.org/pipermail/python-list/2006-April/336774.html) > > > Copyright > ========= > > This document has been placed in the public domain. > > > .. 
> Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From guido at python.org Tue Apr 18 20:42:23 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 18 Apr 2006 19:42:23 +0100 Subject: [Python-Dev] PEP 343: confusing context terminology In-Reply-To: <5.1.1.6.0.20060418115622.0423db10@mail.telecommunity.com> References: <20060418140419.GA6328@localhost.localdomain> <5.1.1.6.0.20060418115622.0423db10@mail.telecommunity.com> Message-ID: <ca471dc20604181142kd3f5ebak1e95699480c5e27f@mail.gmail.com> On 4/18/06, Phillip J. Eby <pje at telecommunity.com> wrote: > I think we should correct the PEP. Yes please, go ahead. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Tue Apr 18 21:02:20 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 18 Apr 2006 21:02:20 +0200 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> References: <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> Message-ID: <444537BC.7030702@egenix.com> Phillip J. Eby wrote: > At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >> Why should a 3rd party extension be hot-fixing the standard >> Python distribution ? > > Because setuptools installs things in zip files, and older versions of > pydoc don't work for packages zip files. That doesn't answer my question. >> If you want to provide a hot fix for Python 2.3 and 2.4, you >> should make this a separate install, so that users are aware >> that their Python distribution is about to get modified in ways >> that have nothing to do with setuptools. > > Their Python distribution is not "modified" -- new modules are merely > placed on sys.path ahead of the stdlib. (Which, I might add, is a > perfectly normal process in Python -- nothing stops users from installing > their own version of pydoc or any other module via PYTHONPATH.) > > Note also that uninstalling setuptools by removing the .egg file or > directory will effectively remove the hot fix, since the modules live in > the .egg, not in the stdlib. Whether you place a module with the same name in front of the stdlib path in PYTHONPATH (e.g. copy it into site-packages) or replace the file in the Python installation is really the same thing to the user. Third-party extension *should not do this* ! It's OK to have private modified copies of a module inside a package or used inside an application, but "python setup.py install" should never (effectively) replace a Python stdlib module with some modified copy without explicit user interaction. >> If an application wants to use setuptools for e.g. plugin >> management, then standard distutils features will get >> replaced by setuptools implementations which are not compatible >> to the standard distutils commands, effectively making it >> impossible to access the original versions. > > Please do a little research before you spread FUD. 
The 'pkg_resources' > module is used for runtime plugin management, and it does not monkeypatch > anything. I'm talking about the setuptools package which does apply monkey patching and is needed to manage the download and installation of plugin eggs, AFAIK. >> Monkey patching is only a last resort in case nothing >> else works. In this case, it's not needed, since distutils >> provides the interfaces needed to extend its command classes >> via the setup() call. > > The monkeypatching is there so that the easy_install command can build eggs > for packages that use the distutils. It's also there so that other > distutils extensions that monkeypatch distutils (and there are a few of > them out there) will have a better chance of working with setuptools. > > I originally took a minimally-invasive approach to setuptools-distutils > interaction, but it was simply not possible to provide a high-degree of > backward and "sideward" compatibility without it. In fact, I seem to > recall finding some behaviors in some versions of distutils that can't be > modified without monkeypatching, although the details are escaping me at > this moment. That's a very vague comment. The distutils mechanism for providing your own command classes lets you take complete control over distutils if needed. What's good about it, is that this approach doesn't modify anything inside distutils at run-time, but does these modifications on a per-setup()-call basis. As for setuptools, you import the package and suddenly distutils isn't what's documented on python.org anymore. >>> As for discussion, Guido originally brought up the question here a few >>> months ago, and it's been listed in PEP 356 for a while. I've also >>> posted things related to the inclusion both here and in distutils-sig. >> >> I know, but the discussions haven't really helped much in >> getting the setuptools design compatible with standard >> distutils. > > That's because the job was already done. :) Not much of an argument, if you ask me. Some of the design decisions you made in setuptools are simply wrong IMHO and these need to be discussed in a PEP process. > The setuptools design bends > over backwards to be compatible with Python 2.3 and 2.4 versions of > distutils, not to mention py2exe, Pyrex, and other distutils extensions, > along with the quirky uses of distutils that exist in dozens of distributed > Python packages. > > However, I think you and I may perhaps have different definitions of > "compatibility". Mine is that things "just work" and users don't have to > do anything special. For that definition, setuptools is extremely > compatible with the standard distutils. In many situations it's more > compatible than the distutils themselves, in that more things "just work". ;) You've implemented your own view of "just works". This is fine, but please remember that Python is a collaborative work, so design decisions have to be worked out in collaboration as well. There aren't all that many things that are wrong in setuptools, but some of them are essential: * setuptools should not monkey patch distutils on import * the standard "python setup.py install" should continue to work as documented in the Python documentation; any new install command should use a different name, e.g. 
"install_egg" * placing too many ZIP files on PYTHONPATH is a bad idea since it slows down import searches for all Python applications, not just ones relying on eggs; a solution to this would be to have a PYTHONEGGPATH which is then only scanned by egg-aware modules/applications * the user should have freedom of choice in whether to have her Python installation rely on eggs or not (and not only --by-using-some-complicated-options) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 18 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From mal at egenix.com Tue Apr 18 21:06:41 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 18 Apr 2006 21:06:41 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> Message-ID: <444538C1.90608@egenix.com> Tim Peters wrote: > [M.-A. Lemburg] >> I could contribute pybench to the Tools/ directory if that >> makes a difference: > > +1. It's frequently used and nice work. Besides, then we could > easily fiddle the tests to make Python look better ;-) That's a good argument :-) Note that the tests are versioned and the tools refuses to compare tests with different version numbers. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 18 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From amk at amk.ca Tue Apr 18 22:18:29 2006 From: amk at amk.ca (A.M. Kuchling) Date: Tue, 18 Apr 2006 16:18:29 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060418185518.0359E1E407C@bag.python.org> References: <20060418185518.0359E1E407C@bag.python.org> Message-ID: <20060418201829.GA31427@rogue.amk.ca> On Tue, Apr 18, 2006 at 08:55:18PM +0200, phillip.eby wrote: > Modified: > peps/trunk/pep-0343.txt > > + "context manager" then encompasses all objects with a __context__() > + method that returns a context object. (This means that all contexts > + are context managers, but not all context managers are contexts). This change reminds of another question I had about the parenthetical statement: all contexts are context managers (= 'has a __context__' method). Why? The context object isn't necessarily available to the Python programmer, so they can't write: with context_mgr as context: with context: # uses the same context ... Why do contexts need to have a __context__() method? --amk From tjreedy at udel.edu Tue Apr 18 21:19:09 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 18 Apr 2006 15:19:09 -0400 Subject: [Python-Dev] adding Construct to the standard library? 
References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com><a467ca4f0604180625q49c62696k6f0b87c271524dd1@mail.gmail.com> <1d85506f0604180825v6d7880eo2638f8569555018d@mail.gmail.com> Message-ID: <e23e3d$p0l$1@sea.gmane.org> "tomer filiba" <tomerfiliba at gmail.com> wrote in message >* using lambda functions for meta expressions, instead of eval(string) -- >perhaps >it's faster, but lambda is getting deprecated by python3k :( Good news for you then: Guido's latest thought that I have read is to leave lambda alone, as is. tjr From p.f.moore at gmail.com Tue Apr 18 21:25:34 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 18 Apr 2006 20:25:34 +0100 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> Message-ID: <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> On 4/17/06, tomer filiba <tomerfiliba at gmail.com> wrote: > after several people (several > 10) contacted me and said "IMHO 'construct' > is a good candidate for stdlib", > i thought i should give it a try. of course i'm not saying it should be > included right now, but in 6 months time, or such a > timeframe (aiming at python 2.6? some 2.5.x release?) Now that ctypes is part of the standard library, that provides a structured datatype facility. Here's an example demonstrating the first example from the Construct wiki: >>> from ctypes import * >>> def str_to_ctype(s, typ): ... t = typ() ... memmove(addressof(t), s, sizeof(t)) ... return t ... >>> class ethernet_header(Structure): ... _fields_ = [("destination", c_char * 6), ... ("source", c_char * 6), ... ("type", c_short)] ... >>> s = "ABCDEF123456\x08\x00" >>> e = str_to_ctype(s, ethernet_header) >>> e.source '123456' >>> e.destination 'ABCDEF' >>> e.type 8 I'm not sure I understand the other wiki examples - but the ones I do, look doable in ctypes. There are a couple of things to note: * ctypes doesn't have a way (that I'm aware of) to specify the endianness of types like c_short - so my example, when run on Windows (intel architecture) gives type = 8, rather than type = 2048 (from the wiki). But the wiki example doesn't explicitly specify endianness, so maybe that's a limitation in Construct as well? * ctypes doesn't have an easy way to parse a string based on a structure definition - hence my str_to_ctype function. But that's a trivial helper function to write, so that's not a big issue. Personally, I'd rather see the ctypes facilities for structure packing and unpacking be better documented, and enhanced if necessary, rather than having yet another way of doing the same thing added to the stdlib. Paul. From steven.bethard at gmail.com Tue Apr 18 21:30:04 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Tue, 18 Apr 2006 13:30:04 -0600 Subject: [Python-Dev] Updated: PEP 359: The make statement In-Reply-To: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> References: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> Message-ID: <d11dcfba0604181230k46749a4bh6150c13ed3ccaecf@mail.gmail.com> On 4/18/06, Steven Bethard <steven.bethard at gmail.com> wrote: > I've updated PEP 359 with a bunch of the recent suggestions. The > patch is available at: > http://bugs.python.org/1472459 > and I've pasted the full text below. 
> > I've tried to be more explicit about the goals -- the make statement > is mostly syntactic sugar for:: > > class <name> <tuple>: > __metaclass__ = <callable> > <block> > > so that you don't have to lie to your readers when you're not actually > creating a class. I've also added some new examples and expanded the > discussion of the old ones to give the statement some better > motivation. And I've expanded the Open Issues discussions to consider > a few alternate keywords and to indicate some of the difficulties in > allowing a ``__make_dict__`` attribute for customizing the dict in > which the block is executed. Guido has pronounced on this PEP: http://mail.python.org/pipermail/python-3000/2006-April/000936.html Consider it dead. =) STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From pje at telecommunity.com Tue Apr 18 21:37:37 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 15:37:37 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060418201829.GA31427@rogue.amk.ca> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> Message-ID: <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> At 04:18 PM 4/18/2006 -0400, A.M. Kuchling wrote: >On Tue, Apr 18, 2006 at 08:55:18PM +0200, phillip.eby wrote: > > Modified: > > peps/trunk/pep-0343.txt > > > > + "context manager" then encompasses all objects with a __context__() > > + method that returns a context object. (This means that all contexts > > + are context managers, but not all context managers are contexts). > >This change reminds of another question I had about the parenthetical >statement: all contexts are context managers (= 'has a __context__' >method). Why? The context object isn't necessarily available to the >Python programmer, so they can't write: > >with context_mgr as context: > with context: # uses the same context > ... > >Why do contexts need to have a __context__() method? I was going to say, "so they can be context managers", but I suppose you have a point. There is no need for a context to have a __context__ method, unless it is also a context manager. Ugh. From theller at python.net Tue Apr 18 21:45:03 2006 From: theller at python.net (Thomas Heller) Date: Tue, 18 Apr 2006 21:45:03 +0200 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> Message-ID: <e23fmc$ur7$1@sea.gmane.org> Paul Moore wrote: > On 4/17/06, tomer filiba <tomerfiliba at gmail.com> wrote: >> after several people (several > 10) contacted me and said "IMHO 'construct' >> is a good candidate for stdlib", >> i thought i should give it a try. of course i'm not saying it should be >> included right now, but in 6 months time, or such a >> timeframe (aiming at python 2.6? some 2.5.x release?) > > Now that ctypes is part of the standard library, that provides a > structured datatype facility. Here's an example demonstrating the > first example from the Construct wiki: > >>>> from ctypes import * > >>>> def str_to_ctype(s, typ): > ... t = typ() > ... memmove(addressof(t), s, sizeof(t)) > ... return t > ... >>>> class ethernet_header(Structure): > ... _fields_ = [("destination", c_char * 6), > ... ("source", c_char * 6), > ... ("type", c_short)] > ... 
>>>> s = "ABCDEF123456\x08\x00" >>>> e = str_to_ctype(s, ethernet_header) > >>>> e.source > '123456' >>>> e.destination > 'ABCDEF' >>>> e.type > 8 > > I'm not sure I understand the other wiki examples - but the ones I do, > look doable in ctypes. > > There are a couple of things to note: > > * ctypes doesn't have a way (that I'm aware of) to specify the > endianness of types like c_short - so my example, when run on Windows > (intel architecture) gives type = 8, rather than type = 2048 (from the > wiki). But the wiki example doesn't explicitly specify endianness, so > maybe that's a limitation in Construct as well? The currently undocumented BigEndianStructure or LittleEndianStructure bases classes allow to specify the byte order. They are designed so that they refuse to work when the structure contains pointer fields. > * ctypes doesn't have an easy way to parse a string based on a > structure definition - hence my str_to_ctype function. But that's a > trivial helper function to write, so that's not a big issue. > > Personally, I'd rather see the ctypes facilities for structure packing > and unpacking be better documented, and enhanced if necessary, rather > than having yet another way of doing the same thing added to the > stdlib. It is not yet too late (but the timeslot left is very small) to propose enhancements to ctypes. classmethods like 'from_string', 'from_buffer' or whatever would probably make sense. Thomas From amk at amk.ca Tue Apr 18 22:01:31 2006 From: amk at amk.ca (A.M. Kuchling) Date: Tue, 18 Apr 2006 16:01:31 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> Message-ID: <20060418200131.GA12715@localhost.localdomain> On Tue, Apr 18, 2006 at 03:37:37PM -0400, Phillip J. Eby wrote: > I was going to say, "so they can be context managers", but I suppose you > have a point. There is no need for a context to have a __context__ method, > unless it is also a context manager. Ugh. It would be easy to just remove the parenthetical comment from the PEP and forget about it, if in fact the statement is now purposeless. But these are murky waters, and maybe there's still some deeper reason for it. Nick, do you have any comments? --amk From tomerfiliba at gmail.com Tue Apr 18 22:39:27 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Tue, 18 Apr 2006 22:39:27 +0200 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> Message-ID: <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> ctypes, as the name implies, is relevant to *C data structures* only. you cannot extend it and you cannot define complex things with it, at least not easily. * ctypes doesn't have a way (that I'm aware of) to specify the > endianness of types like c_short - so my example, when run on Windows > (intel architecture) gives type = 8, rather than type = 2048 (from the > wiki). But the wiki example doesn't explicitly specify endianness, so > maybe that's a limitation in Construct as well? 
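A concrete sketch of the byte order base classes Thomas Heller mentions above, applied to the ethernet_header example from earlier in the thread (my own illustration, only the base class changes):

    from ctypes import (BigEndianStructure, addressof, c_char, c_short,
                        memmove, sizeof)

    class ethernet_header_be(BigEndianStructure):
        # same layout as before, but fields use network (big endian) order
        _fields_ = [("destination", c_char * 6),
                    ("source", c_char * 6),
                    ("type", c_short)]

    hdr = ethernet_header_be()
    memmove(addressof(hdr), "ABCDEF123456\x08\x00", sizeof(hdr))
    print hex(hdr.type)   # 0x800 (2048) regardless of the host byte order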
> in ctypes, the endianness of numbers is determined by the platform, since they are passed to a C (platform-dependent) function. you share your address space with the dll you load -- so both python and the dll live on the same platform. so except of writing data to files or sockets, you shouldn't care about the byte ordering. in Construct you have UBInt16 and ULInt16, for big and little ordering. and UInt16 is an alias to UBInt16 (because network ordering is more common in protocols) * ctypes doesn't have an easy way to parse a string based on a > structure definition - hence my str_to_ctype function. But that's a > trivial helper function to write, so that's not a big issue. > sorry, but if you mean people must use memmove in order to parse string, you better slap yourself. this is a python mailing list, not a C one. we don't have a concept of addressof() or physically moving data. we use objects and references. no offense, but "so that's not a big issue" makes me think you don't belong to this mailing list. I'm not sure I understand the other wiki examples - but the ones I do, > look doable in ctypes. i gues you should also look at http://pyconstruct.wikispaces.com/demos to get a better understanding, but i only uploaded it a couple of hours ago. sorry for that. anyway, on the projects page i explain thoroughly why there is room for yet another parsing/building library. but for the example you mentioned above, the ethernet header, struct is good enough: struct.pack(">6s6sH", "123456", "ABCDEF", 0x0800) but -- how would you parse a pascal-string (length byte followed by data of that length) using ctypes? how would you read a 61 bit, unaligned field? how would you convert "\x00\x11P\x88kW" to "00-11-50-88-6B-57", the way people would like to see MAC addresses? yeah, the MAC address is only a representation issue, but adapters can do much more powerful things. plus, people usually prefer seeing "IP" instead of "0x0800" in their parsed objects. how would you define mappings in ctypes? Personally, I'd rather see the ctypes facilities for structure packing > and unpacking be better documented, and enhanced if necessary, rather > than having yet another way of doing the same thing added to the > stdlib. the stdlib is too messy already. it must be revised anyway, since it's full of shit nobody uses. the point is -- ctypes can define C types. not the TCP/IP stack. Construct can do both. it's a superset of ctype's typing mechanism. but of course both have the right to *coexist* -- ctypes is oriented at interop with dlls, and provides the mechanisms needed for that. Construst is about data structures of all sorts and kinds. ctypes is a very helpful library as a builtin, and so is Construct. the two don't compete on a spot in the stdlib. -tomer On 4/18/06, Paul Moore <p.f.moore at gmail.com> wrote: > > On 4/17/06, tomer filiba <tomerfiliba at gmail.com> wrote: > > after several people (several > 10) contacted me and said "IMHO > 'construct' > > is a good candidate for stdlib", > > i thought i should give it a try. of course i'm not saying it should be > > included right now, but in 6 months time, or such a > > timeframe (aiming at python 2.6? some 2.5.x release?) > > Now that ctypes is part of the standard library, that provides a > structured datatype facility. Here's an example demonstrating the > first example from the Construct wiki: > > >>> from ctypes import * > > >>> def str_to_ctype(s, typ): > ... t = typ() > ... memmove(addressof(t), s, sizeof(t)) > ... return t > ... 
> >>> class ethernet_header(Structure): > ... _fields_ = [("destination", c_char * 6), > ... ("source", c_char * 6), > ... ("type", c_short)] > ... > >>> s = "ABCDEF123456\x08\x00" > >>> e = str_to_ctype(s, ethernet_header) > > >>> e.source > '123456' > >>> e.destination > 'ABCDEF' > >>> e.type > 8 > > I'm not sure I understand the other wiki examples - but the ones I do, > look doable in ctypes. > > There are a couple of things to note: > > * ctypes doesn't have a way (that I'm aware of) to specify the > endianness of types like c_short - so my example, when run on Windows > (intel architecture) gives type = 8, rather than type = 2048 (from the > wiki). But the wiki example doesn't explicitly specify endianness, so > maybe that's a limitation in Construct as well? > > * ctypes doesn't have an easy way to parse a string based on a > structure definition - hence my str_to_ctype function. But that's a > trivial helper function to write, so that's not a big issue. > > Personally, I'd rather see the ctypes facilities for structure packing > and unpacking be better documented, and enhanced if necessary, rather > than having yet another way of doing the same thing added to the > stdlib. > > Paul. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060418/d0115426/attachment.htm From tomerfiliba at gmail.com Tue Apr 18 23:04:18 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Tue, 18 Apr 2006 23:04:18 +0200 Subject: [Python-Dev] a flattening operator? Message-ID: <1d85506f0604181404v4a330b10j95063098662db105@mail.gmail.com> <DISCLAIMER> i'm not going to defend and fight for this idea too much. i only bring it up because it bothers me. i'm sure some people here would kill me for even suggesting this, and i really don't want to be killed right now, so i bring it up as something you should think about. nothing more. </DISCLAIMER> <NOTE> PEP-225 has some weird ideas which may or may not be related to this, but i don't understand how this magical ~ operator can do everything from tuple flattening to list arithmetics, replacing map(), changing the order of operations, deep-copying, list comprehension, rich comparison, and whatever not. so i don't consider this a serious PEP. looks more like an april fool's joke to me, and it seems those japanese celebrate it on september for some reason. </NOTE> [reposted from comp.lang.python] as we all know, * (asterisk) can be used to "inline" or "flatten" a tuple into an argument list, i.e.: def f(a, b, c): ... x = (1,2,3) f(*x) so... mainly for symmetry's sake, why not make a "flattening" operator that also works outside the context of function calls? for example: a = (1,2,3) b = (4,5) c = (*a, *b) # ==> (1,2,3,4,5) yeah, a + b would also give you the same result, but it could be used like format-strings, for "templating" tuples, i.e. c = (*a, 7, 8, *b) i used to have a concrete use-case for this feature some time ago, but i can't recall it now. sorry. still, the main argument is symmetry: it's a syntactic sugar, but it can be useful sometimes, so why limit it to function calls only? allowing it to be a generic operator would make things like this possible: f(*args, 7) # an implied last argument, 7, is always passed to the function today you have to do f(*(args + (7,))) which is quite ugly. and if you have to sequences, one being a list and the other being a tuple, e.g. x = [1,2] y = (3,4) you can't just x+y them. 
in order to concat them you'd have to use "casting" like f(*(tuple(x) + y)) instead of f(*x, *y) isn't the latter more elegant? just an idea. i'm sure people could come up with more creative use-cases of a standard "flattening operator". but even without the creative use cases -- isn't symmetry strong enough an argument? why are function calls more important than regular expressions? and the zen supports my point: (*) Beautiful is better than ugly --> f(*(args + (7,))) is ugly (*) Flat is better than nested --> less parenthesis (*) Sparse is better than dense --> less noise (*) Readability counts --> again, less noise (*) Special cases aren't special enough to break the rules --> then why are function calls so special to add a unique syntactic sugar for them? the flattening operator would work on any sequence (having __iter__ or __next__), not just tuples and lists. one very useful feature i can think of is "expanding" generators, i.e.: print xrange(10) # ==> xrange(10) print *xrange(10) # ==> (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) i mean, python already supports this half-way: >>> def f(*args): ... print args ... >>> f(*xrange(10)) (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) so... why can't i just do "print *xrange(10)" directly? defining a function just to expand a generator? well, i could use list(xrange(10)) to expand it, but it's less intuitive. the other way is with list- comprehension, [x for x in xrange(10)] which is just, but isn't *xrange(10) more to-the-point? also, "There should be one-- and preferably only one --obvious way to do it"... so which one? (*) list(xrange(10)) (*) [x for x in xrange(10)] (*) mylist.extend(xrange(10)) (*) f(*xrange(10)) they all expand generators, but which is the preferable way? and imagine this: f(*xrange(10), 7) this time you can't do *(xrange(10) + (7,)) as generators do not support addition... you'd have to do *(tuple(xrange(10)) + (7,)) which is getting quite long already. so as you can see, there are many inconsistencies between function-call expressions and regular expressions, that impose artificial limitations on the language. after all, the code is already in there to support the function-call version... all it takes is adding support for regular expressions. so, what do you think? isn't symmetry worth it? -tomer -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060418/1f756a77/attachment.htm From p.f.moore at gmail.com Tue Apr 18 23:06:37 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 18 Apr 2006 22:06:37 +0100 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <e23fmc$ur7$1@sea.gmane.org> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <e23fmc$ur7$1@sea.gmane.org> Message-ID: <79990c6b0604181406i2b778c1fta2dab2bf1eee607@mail.gmail.com> On 4/18/06, Thomas Heller <theller at python.net> wrote: > It is not yet too late (but the timeslot left is very small) to propose > enhancements to ctypes. classmethods like 'from_string', 'from_buffer' or > whatever would probably make sense. A from_buffer classmethod would probably be good. I didn't think to suggest it as I recall from a long time ago on the ctypes list, a discussion which basically came to the conclusion that this wasn't needed (I think it was that discussion that resulted in the inclusion of memmove etc). 
But if you're not against the idea, I'd go for it - it took me a bit of thinking to get my helper function right. Also, I'm not sure if a to_string method has any value. You can use str(buffer(obj)), but I don't know if that's a good long-term solution - the buffer object has always been somewhat discouraged, AFAICT. Paul. From pje at telecommunity.com Tue Apr 18 23:10:42 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 17:10:42 -0400 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <444537BC.7030702@egenix.com> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> At 09:02 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >Phillip J. Eby wrote: > > At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: > >> Why should a 3rd party extension be hot-fixing the standard > >> Python distribution ? > > > > Because setuptools installs things in zip files, and older versions of > > pydoc don't work for packages zip files. > >That doesn't answer my question. That is the answer to the question you asked: "why hot-fix?" Because setuptools uses zip files, and older pydocs crash when trying to display the documentation for a package (not module; modules were fine) that is in a zip file. >Third-party extension *should not do this* ! If you install setuptools, you presumably would like for things to work, and the hot fix eliminates a bug that interferes with them working. I'm not sure, however, what you believe any of that has to do with python-checkins or python-dev. The version of setuptools that will do this is not yet released, and the hotfix aspect will be documented. If it causes actual problems for actual setuptools users, it will be actually fixed or actually removed. :) (Meanwhile, the separately-distributed setuptools package is not a part of Python, any more than the Optik is, despite Python including the 'optparse' package spun off from Optik.) >I'm talking about the setuptools package which does apply >monkey patching and is needed to manage the download and >installation of plugin eggs, AFAIK. In which case the monkeypatching is needed, to handle situations like building eggs for third-party non-setuptools libraries listed as dependencies. You can't have it both ways; either you need setuptools or you don't. If you do, you will almost certainly need the monkeypatching, and you'll definitely need it to build eggs for non-setuptools-based packages. >What's good about it, is that this approach doesn't modify anything >inside distutils at run-time, but does these modifications >on a per-setup()-call basis. What's bad about it, is that it provides no options for virtualizing the execution of other packages' setup scripts. The 'run_setup' mechanism was horribly broken in the versions of the distutils that I wrestled with. (I also don't see anybody stepping up to provide alternative implementations or patches to third-party setup scripts or distutils extensions to deal with this issue.) 
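For readers following the distutils side of this exchange, here is a minimal sketch of the per-setup()-call mechanism MAL refers to, where a setup script substitutes its own command classes via cmdclass without patching distutils at import time; the project and class names are placeholders:

    from distutils.core import setup
    from distutils.command.install import install

    class my_install(install):
        def run(self):
            # hook point for extra behaviour around the stock install command
            install.run(self)

    setup(name="example",
          version="0.1",
          py_modules=["example"],
          cmdclass={"install": my_install})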
>There aren't all that many things that are wrong in setuptools, >but some of them are essential: > >* setuptools should not monkey patch distutils on import Please propose an alternative, or better still, produce a patch. Be sure that it still allows distutils extensions like py2exe to work. The only real alternative to monkeypatching that I'm aware of for that issue is to actually merge setuptools with the distutils, but my guess is that you'll like that even less. :) >* the standard "python setup.py install" should continue to > work as documented in the Python documentation; any new install > command should use a different name, e.g. "install_egg" > >* placing too many ZIP files on PYTHONPATH is a bad idea > since it slows down import searches for all Python > applications, not just ones relying on eggs; a solution > to this would be to have a PYTHONEGGPATH which is then > only scanned by egg-aware modules/applications > >* the user should have freedom of choice in whether to > have her Python installation rely on eggs or not (and > not only --by-using-some-complicated-options) These questions have been hashed to death on the distutils-sig already, but I will try to summarize my responses here as briefly as practical while still covering the full spectrum of issues. I would begin by saying that the tradeoffs I've made favor inexperienced users at the expense of experts (who I assume are capable of learning to use --root or --single-version-externally managed in order to have finer-grained control). Your proposals, however, generally favor experts at the expense of the average user, and treat it as axiomatic that the benefits provided by setuptools are not worth having, no matter how small the cost. By contrast, users who *want* to use setuptools for either distribution or installation of packages, would rather it "just work", no matter how much complexity is required behind-the-scenes. They don't care about my cost to implement, they just care that they type "easy_install foo" to get a package from PyPI or "setup.py sdist bdist_egg upload" to put it there. Therefore, it makes no sense to apply your design approach to setuptools, since by your criteria it wouldn't exist in the first place! After all, expert users can munge distutils extensions to their hearts' content (and you certainly have done so already). Next, your own measurements posted to the distutils-sig debunked the PYTHONPATH performance question, IMO: if you installed *every Python package listed on PyPI at that time* as a zip file (assuming that the average zipfile directory size was the same as in your tests, around 1100 total packages on PyPI, and linear degradation), Python startup time would have increased by little more than half a second. To me, that says the performance is pretty darn good, since the average user isn't going to install anywhere near that many packages. Note also that installing eggs with -m (or --multi-version) doesn't put them on sys.path to begin with unless they're needed at runtime, so if you care about performance tweaking, you can just use -m for everything, or put it in your distutils.cfg or ~/.pydistutils.cfg file so it's automatic. Since scripts that depend on eggs can automatically find and add the eggs to sys.path at runtime, they don't need the eggs to be on sys.path. The reason they are put on sys.path by default is to make the default case of installation "just work". 
Specifically, it's so that "setup.py install" works the same way as it does with the distutils (in terms of what the users can *do* with a package after installation, not in terms of how the files are laid out). As far as I can tell, your remaining arguments can be reduced to this: Users who care about how stuff works inside (i.e. experts) will have to use command-line options to get the old behaviors or to tweak the new ones, when working with packages that *explicitly chose* to use setuptools. And that is a perfectly valid concern -- for an audience that's quite capable of learning to do something new, however. Also note that nothing stops people from creating packages that warp distutils into installing things in weird places, so arguing that "setup.py install" should always do the same thing is moot anyway; you would not get your wish even if setuptools had never existed. May I suggest that perhaps a better solution to the actual practical problem here is to make the distutils "install" command accept the options that setuptools supports? Then, people who want it can just use --single-version-externally-managed with any package under Python 2.5 and be guaranteed to get the old-fashioned behavior (that can't be cleanly upgraded or uninstalled without a package manager). As far as I know, however, most tools needing the old installation layout use "install --root" anyway (including bdist_wininst, bdist_msi, bdist_rpm, etc.), and setuptools automatically switches to compatibility mode in that case. So setuptools "just works" with these tools already. From gustavo at niemeyer.net Tue Apr 18 23:22:45 2006 From: gustavo at niemeyer.net (Gustavo Niemeyer) Date: Tue, 18 Apr 2006 18:22:45 -0300 Subject: [Python-Dev] Updated: PEP 359: The make statement In-Reply-To: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> References: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> Message-ID: <20060418212245.GD23549@localhost.localdomain> > Removing __metaclass__ in Python 3000 > ------------------------------------- > > As a side-effect of its generality, the make-statement mostly > eliminates the need for the ``__metaclass__`` attribute in class > objects. Thus in Python 3000, instead of:: (...) One of the reasons that this PEP was born is because metaclasses are being used in ways that don't look natural to some, like generating interface instances out of a class statement, so this would add an interesting way to support these constructs. That doesn't look like a good reason, though, to kill the metaclass support that Python took so long to maturate. Otherwise, a new optional extension could be included as well: "Removing the 'class' statement". -- Gustavo Niemeyer http://niemeyer.net From gustavo at niemeyer.net Tue Apr 18 23:24:37 2006 From: gustavo at niemeyer.net (Gustavo Niemeyer) Date: Tue, 18 Apr 2006 18:24:37 -0300 Subject: [Python-Dev] Updated: PEP 359: The make statement In-Reply-To: <d11dcfba0604181230k46749a4bh6150c13ed3ccaecf@mail.gmail.com> References: <d11dcfba0604180953p3f547732k11156dfb56ba5586@mail.gmail.com> <d11dcfba0604181230k46749a4bh6150c13ed3ccaecf@mail.gmail.com> Message-ID: <20060418212437.GE23549@localhost.localdomain> > Consider it dead. =) RIP. 
;) -- Gustavo Niemeyer http://niemeyer.net From fredrik at pythonware.com Tue Apr 18 23:55:59 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Tue, 18 Apr 2006 23:55:59 +0200 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><20060418005956.156301E400A@bag.python.org><20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <e23n9h$pqi$1@sea.gmane.org> Phillip J. Eby wrote: > Your proposals, however, generally favor experts at the expense of the > average user, and treat it as axiomatic that the benefits provided by > setuptools are not worth having, no matter how small the cost. mal's arguing from well-established Python design principles (import this), and the kind of respect for existing non-core developers that has been a hallmark of Python development for as long as I've used it. who decided that setuptools should be added to 2.5, btw? it's still listed under "possible additions" in the release PEP, and there don't seem to be a PEP or any other easily located document that explains exactly how it works, what's required from library developers, how it affects existing toolchains, etc. is this really ready for inclusion ? does anyone but Phillip understand how it works ? </F> From jcarlson at uci.edu Wed Apr 19 00:16:08 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Tue, 18 Apr 2006 15:16:08 -0700 Subject: [Python-Dev] a flattening operator? In-Reply-To: <1d85506f0604181404v4a330b10j95063098662db105@mail.gmail.com> References: <1d85506f0604181404v4a330b10j95063098662db105@mail.gmail.com> Message-ID: <20060418145333.A810.JCARLSON@uci.edu> "tomer filiba" <tomerfiliba at gmail.com> wrote: > isn't the latter more elegant? According to my experience with Python, as well as my interpretations of the zens, no. -1 > and the zen supports my point: > (*) Beautiful is better than ugly --> f(*(args + (7,))) is ugly But explicit is better than implicit, and in this case, it would likely be better to just use a keyword argument lastarg=7. If the function called merely accepts *args and **kwargs, and you are passing args unchanged somewhere else, it may be more explicit to redefine f to be... def f(args, kwargs): ... Then use it like... f(args + (7,)) Not quite as ugly as the worst-case option were you were describing moments ago. > (*) Flat is better than nested --> less parenthesis I believe that zen was in regards to overly-nested object hierarchies and namespaces. > (*) Sparse is better than dense --> less noise I believe that zen was in regards to using spaces between operations, as well as using line breaks between sections of code so as to express a visual 'separating' of ideas. > (*) Readability counts --> again, less noise Use keyword arguments for single arguments, ... > (*) Special cases aren't special enough to break the rules --> then why > are function calls so special to add a unique syntactic sugar for them? By necessity, function defintions require a unique syntax to make calls different from anything else you can do with an object. Specifically, we have f() because () are unambiguous for calling functions due to various reasons. 
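For comparison, a minimal sketch (Python 2.x, names made up) of how the existing syntax already spells the kind of flattening being proposed:

    >>> def f(*args):
    ...     return args
    >>> args = (1, 2, 3)
    >>> f(*(args + (7,)))        # today's spelling of a "flattened" call
    (1, 2, 3, 7)
    >>> print tuple(xrange(10))  # today's spelling of "expanding" a generator/iterable
    (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
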
> the flattening operator would work on any sequence (having __iter__ or > __next__), not just tuples and lists. one very useful feature i can > think of is "expanding" generators, i.e.: > > print xrange(10) # ==> xrange(10) > print *xrange(10) # ==> (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) One major problem with this is that except for function calls, * is the multiplication operator, which operates on two arguments. *foo is an operation on a single argument, and without parenthesis, would be ambiguously parsed. Further, I would prefer not to see this monstrosity: print x*(*y) Which would print out x copies of the sequence y. With all of that said, there is a recent discussion in the python 3k mailing list which has been discussing "Cleaning up argument list parsing". The vast majority of your ideas can be seen as a variant of the discussion going on there. I haven't been paying much attention to it mostly because I think that the current convention is sufficient, and allowing people to do things like... foo(*arglist1, b, c, d=d_arg, *arglist2, **kwargs) ...is just begging for a mess. - Josiah From martin at v.loewis.de Wed Apr 19 00:33:04 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 19 Apr 2006 00:33:04 +0200 Subject: [Python-Dev] [Python-checkins] r45505 - python/trunk/Modules/posixmodule.c In-Reply-To: <17477.24565.863267.639675@montanaro.dyndns.org> References: <20060418004950.454AC1E400A@bag.python.org> <ee2a432c0604172253y595b7792q7a9a363108ebb1c6@mail.gmail.com> <4444888A.60603@v.loewis.de> <17477.24565.863267.639675@montanaro.dyndns.org> Message-ID: <44456920.3030109@v.loewis.de> skip at pobox.com wrote: > Martin> Also, I suggest to use None as the return value for "no value > Martin> available"; it might be that the configured value is an empty > Martin> string (in which case confstr returns 1). > > I'll work on all of this. Are you sure you want the API to change? Wrt. to the "no configured value" case? If everybody can agree it is the conceptually right thing to do (*), then sure; documentation should get updated, of course (if there is any). This was so broken already that I'm not worried about breaking some user's code: all users apparently only ever used the "successful" cases. OTOH, if people debate whether this actually is the right thing to do, it should not change. Regards, Martin (*) I believe it is conceptually right, because it allows to distinguish two cases which are currently indistinguishable in Python but distinguishable in C, namely: a) there is no configured value (confstr returns 0 an does not change errno), and b) the configured value is an empty string (confstr returns 1). 
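A minimal sketch of what that distinction would look like from the caller's side, assuming the None-on-no-configured-value behaviour described above (Unix only, since confstr is not available elsewhere):

    import os

    value = os.confstr("CS_PATH")          # any valid confstr name would do
    if value is None:
        print "no configured value"        # case (a)
    elif value == "":
        print "configured, but empty"      # case (b)
    else:
        print "configured value:", value
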
From martin at v.loewis.de Wed Apr 19 00:49:43 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 19 Apr 2006 00:49:43 +0200 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <e23n9h$pqi$1@sea.gmane.org> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><20060418005956.156301E400A@bag.python.org><20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> Message-ID: <44456D07.6020402@v.loewis.de> Fredrik Lundh wrote: > it's still listed under "possible additions" in the release PEP, and there don't > seem to be a PEP or any other easily located document that explains exactly > how it works, what's required from library developers, how it affects existing > toolchains, etc. is this really ready for inclusion ? does anyone but Phillip > understand how it works ? I don't understand it. My concern is that it appears to involve a lot of magic. This magic might do the "right thing" in many cases, and it might indeed help the user that the magic is present, yet I still do believe that magic is inherently evil: explicit is better than implicit. Regards, Martin From pje at telecommunity.com Wed Apr 19 01:00:18 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 19:00:18 -0400 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <e23n9h$pqi$1@sea.gmane.org> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> At 11:55 PM 4/18/2006 +0200, Fredrik Lundh wrote: >who decided that setuptools should be added to 2.5, btw? Guido proposed it on Python-dev when the 2.5 schedule was first being discussed. I discussed it with him off-list, to ensure that it could be done in a way that wouldn't interfere with existing setuptools users or affect Python itself in a negative way. (For example, it needed to be upgradeable in the field in case users wanted/needed a later version than the one included in 2.5.) He then mentioned it in his 2.5 slideshow at PyCon. This is the first anyone's objected to it, however, at least that I'm aware of. >it's still listed under "possible additions" in the release PEP, I imagine that might be why nobody raised any objections sooner, although I understood the possibility to mean "if nobody objects". :) I also posted on Python-dev repeatedly in recent weeks, referring to how the various PEP 302 fixes and updates would interact with setuptools when it got in for 2.5. Also, Neal emailed me the week before last, asking when I would be getting setuptools checked in, and I told him April 17th - i.e., yesterday. So, I was under the impression this was all a done deal. 
>and there don't >seem to be a PEP or any other easily located document that explains exactly >how it works, what's required from library developers, how it affects existing >toolchains, etc. The setuptools manual is currently at: http://peak.telecommunity.com/DevCenter/setuptools pending conversion to the standard Pythondoc format. I posted earlier today asking about how it, and the other related manuals should be included in the overall Python documentation structure: http://mail.python.org/pipermail/python-dev/2006-April/063846.html The reST source of these manuals is in trunk/sandbox/setuptools, where it has been evolving over the last year. > is this really ready for inclusion ? Please define "ready". I don't mean that in a flippant way, I just don't know what you mean. >does anyone but Phillip understand how it works ? Does anybody besides Thomas understand how ctypes works? ;) Rhetorical jokes aside, every time I've made a significant change to how setuptools works, I've posted it to the distutils-sig -- and usually I make proposals in advance to get feedback or stimulate discussion. I regularly post explanations there in response to questions from people who are integrating it with system packaging tools, or creating various other customizations. And there are other people on distutils-sig who can answer questions about it. The TurboGears community is proficient enough with it that it's only once every few months now that a question gets kicked upstairs to me to answer. A number of people have contributed patches, including Ian Bicking and Tres Seaver. Bob Ippolito was a significant participant in the original design and wrote some of the initial code for the runtime. A *considerable* number of distutils-sig participants have had design input, either through direct suggestions, or through their giving more use case examples that I needed to make "just work". So, I'm not too pleased by insinuations that setuptools is anything other than a Python community project. But MAL and MvL are the only folks from Python-Dev who I've seen over there arguing for changes to setuptools -- and I actually made changes based on their input, although they rarely got 100% of what they asked for. The --really-long-option MAL is complaining about was put in to provide a feature that *he and MvL wanted me to include*; I just don't want that behavior to be the default behavior for setuptools. (And neither do package developers who have to support "non-root" users on virtual hosting systems, or other environments where system packaging tools aren't available.) So, it seems to me that MAL claiming that nobody got to participate in the design process is rather misleading. It's like somebody who wanted decorators in Python and then gripes about the '@' syntax. Everybody's got to compromise a bit. I put their feature in months ago, and it is even the default when you use --root with install. From fdrake at acm.org Wed Apr 19 01:08:07 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 18 Apr 2006 19:08:07 -0400 Subject: [Python-Dev] setuptools in the stdlib In-Reply-To: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> Message-ID: <200604181908.07664.fdrake@acm.org> On Tuesday 18 April 2006 19:00, Phillip J. Eby wrote: > He then mentioned it in his 2.5 slideshow at PyCon. 
This is the first > anyone's objected to it, however, at least that I'm aware of. Until the past week, I wasn't aware it was being considered. But then, I've not been paying a lot of attention lately, so I suspect that's my fault. > The setuptools manual is currently at: > > http://peak.telecommunity.com/DevCenter/setuptools > > pending conversion to the standard Pythondoc format. I posted earlier > today asking about how it, and the other related manuals should be > included in the overall Python documentation structure: Saw that; hopefully I'll have a chance to look at it soon. I wonder, generally, if it should be merged into the distutils documentation. Those documents happen to be distutils-centric now, because that's what's been provided. Their titles should be the guide to their content, however. > So, I'm not too pleased by insinuations that setuptools is anything other > than a Python community project. I've no doubt about that at all, FWIW. I think you've put a lot of effort into discussing it with the community, and applaud you for that as well as your implementation efforts. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From rasky at develer.com Wed Apr 19 01:52:12 2006 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 19 Apr 2006 01:52:12 +0200 Subject: [Python-Dev] adding Construct to the standard library? References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com><79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> Message-ID: <094301c66343$1d4e57b0$2452fea9@bagio> tomer filiba <tomerfiliba at gmail.com> wrote: > the point is -- ctypes can define C types. not the TCP/IP stack. > Construct can do both. it's a superset of ctype's typing mechanism. > but of course both have the right to *coexist* -- > ctypes is oriented at interop with dlls, and provides the mechanisms > needed for that. > Construst is about data structures of all sorts and kinds. > > ctypes is a very helpful library as a builtin, and so is Construct. > the two don't compete on a spot in the stdlib. I don't agree. Both ctypes and construct provide a way to describe a binary-packed structure in Python terms: and this is an overload of functionality. When I first saw Construct, the thing that crossed my head was: "hey, yet another syntax to describe a binary-packed structure in Python". ctypes uses its description to interoperate with native libraries, while Construct uses its to interoperate with binary protocols. I didn't see a good reason why you shouldn't extend ctypes so to provide features that it is currently missing. It looks like it could be easily extended to do so. Giovanni Bajo From pje at telecommunity.com Wed Apr 19 01:59:36 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Tue, 18 Apr 2006 19:59:36 -0400 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <44456D07.6020402@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> Message-ID: <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> At 12:49 AM 4/19/2006 +0200, Martin v. L?wis wrote: >Fredrik Lundh wrote: > > it's still listed under "possible additions" in the release PEP, and > there don't > > seem to be a PEP or any other easily located document that explains exactly > > how it works, what's required from library developers, how it affects > existing > > toolchains, etc. is this really ready for inclusion ? does anyone but > Phillip > > understand how it works ? > >I don't understand it. Have you read the manuals? > My concern is that it appears to involve a lot of >magic. Please define "magic". Better, please point to the API functions or classes, or the setup commands documented in the manual, to show me what these things are that appear to be magic. There do exist implementation details that are not included in user documentation, but this is going to be true of any system. These details are sometimes necessarily complex due to distutils limitations, behavioral quirks of deployed packages using distutils, and the sometimes baroque variations in sys.path resolution across platforms, Python versions, and invocation methods. That these details are not discussed at length in the user documentation is because they are boring hideous things that I wish I didn't have to know myself, and that few people using setuptools would ever want to know. They *do*, however, get discussed (or at least explained by me) at length on the distutils-sig. I explain the quirks and the tradeoffs for each bit of implementation in detail there, and have been doing so for nearly a year now. Anybody who wants to know what's going on has had plenty of opportunity to learn. >This magic might do the "right thing" in many cases, and it might >indeed help the user that the magic is present, yet I still do believe >that magic is inherently evil: explicit is better than implicit. Are documented defaults "implicit" or "magic"? To the extent that there is anything that may be called "magic" in setuptools, it exists only because the necessary infrastructure isn't already present in Python, or because it was required to work around the quirks of other systems. Setuptools chooses to "work at all costs" because backward-compatibility skirts the chicken-and-egg problems that would otherwise exist. If setuptools *weren't* so highly backward-compatible, its use would never have caught on enough to allow this discussion to be taking place in the first place. :) The primary place where it *isn't* backward compatible is if somebody does "setup.py install" *without* using --root, *and* they care about the exact file layout of the result, *and* they are installing a package that explicitly uses setuptools... 
in which case their complaint is with the author of the package, if the author didn't explain that they used setuptools or point them to relevant documentation. Setuptools' manual prominently explains what developers should tell their users, if they use setuptools in their setup.py: """To keep these users happy, you should review the following topics in your project's installation instructions, if they are relevant to your project and your target audience isn't already familiar with setuptools and easy_install.""" http://peak.telecommunity.com/DevCenter/setuptools#what-your-users-should-know With respect to you and MAL, I think that some of your judgments regarding setuptools may have perhaps been largely formed at a time last year when, among other things: * That documentation section I just referenced didn't exist yet * Many common installation scenarios (e.g. custom PYTHONPATH) didn't "just work" without special setup steps * --single-version-externally-managed was barely a proposal, and it wasn't automatically activated when --root was used These are significant changes that are directly relevant to the objections that you and he raised (regarding startup time, path length, tools compatibility, etc.), and which I added because of those objections. I think that they are appropriate responses to the issues raised, in that they allow the audience that cares about them (you, MAL, and system packagers in general) to avoid the problems that those features' absence caused. It would probably helpful if you would both be as specific as possible in your objections so that they can be addressed individually. If you don't want setuptools in 2.5, let's be clear either on the specific objections. If the objection is to the very *idea* of setuptools, that's fine too, as long as we're clear that's what the objection is. So I would ask that you please make a list of what you would have to see changed in setuptools before you would consider it viable for stdlib inclusion, or simply admit that there are no changes that would satisfy you, or that you don't know enough about it to say, or that you'd like it to be kicked back to distutils-sig for more discussion ad infinitum, or whatever your actual objections are. Then I will make my responses to your proposals, and then Guido can have his say based on the cons from you and the pro's from me. If he says no for 2.5, that's okay by me. I didn't propose its inclusion, the users (and Guido) did. But now that I've busted butt to get it ready in time, I'd prefer that any withdrawal decision be based on actual facts, rather than FUD, hand-waving, and vague innuendo. Meanwhile, this discussion has used up the time that I otherwise would have spent writing 2.5 documentation today (e.g., for the pkgutil additions). From pje at telecommunity.com Wed Apr 19 02:10:05 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 20:10:05 -0400 Subject: [Python-Dev] setuptools in the stdlib In-Reply-To: <200604181908.07664.fdrake@acm.org> References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060418200209.036b00c8@mail.telecommunity.com> At 07:08 PM 4/18/2006 -0400, Fred L. Drake, Jr. wrote: >Saw that; hopefully I'll have a chance to look at it soon. I wonder, >generally, if it should be merged into the distutils documentation. 
Those >documents happen to be distutils-centric now, because that's what's been >provided. Their titles should be the guide to their content, however. No doubt that's the proper thing to do in the long term, when/if setuptools is the official One Obvious Way To Do It. I was wondering, however, if perhaps Python 2.5 should include them as "Building and Distributing Python Eggs" (for what's now setuptools.txt) and "Installing Python Eggs" (for what's now EasyInstall.txt). >I've no doubt about that at all, FWIW. I think you've put a lot of effort >into discussing it with the community, and applaud you for that as well as >your implementation efforts. Thank you. From oliphant.travis at ieee.org Wed Apr 19 02:09:39 2006 From: oliphant.travis at ieee.org (Travis Oliphant) Date: Tue, 18 Apr 2006 18:09:39 -0600 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <094301c66343$1d4e57b0$2452fea9@bagio> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com><79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> <094301c66343$1d4e57b0$2452fea9@bagio> Message-ID: <e23v44$fin$1@sea.gmane.org> Giovanni Bajo wrote: > tomer filiba <tomerfiliba at gmail.com> wrote: > > >>the point is -- ctypes can define C types. not the TCP/IP stack. >>Construct can do both. it's a superset of ctype's typing mechanism. >>but of course both have the right to *coexist* -- >>ctypes is oriented at interop with dlls, and provides the mechanisms >>needed for that. >>Construst is about data structures of all sorts and kinds. >> >>ctypes is a very helpful library as a builtin, and so is Construct. >>the two don't compete on a spot in the stdlib. > > > > I don't agree. Both ctypes and construct provide a way to describe a > binary-packed structure in Python terms: and this is an overload of > functionality. When I first saw Construct, the thing that crossed my head was: > "hey, yet another syntax to describe a binary-packed structure in Python". > ctypes uses its description to interoperate with native libraries, while > Construct uses its to interoperate with binary protocols. I didn't see a good > reason why you shouldn't extend ctypes so to provide features that it is > currently missing. It looks like it could be easily extended to do so. > For what it's worth, NumPy also defines a data-type object which it uses to describe the fundamental data-type of an array. In the context of this thread it is also yet another way to describe a binary-packed structure in Python. This data-type object is a builtin object which provides information such as byte-order, element size, "kind" as well as the notion of fields so that nested structures can be easily defined. Soon (over the next six months) a basic array object (a super class of NumPy) will be proposed for inclusion in Python. When that happens some kind of data-type object (a super class of the NumPy dtype object) will be needed as well. I think some cross-talk between all of us different users of the notion of what we in the NumPy community call a data-type might be useful. 
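To make the overlap concrete, here is a minimal sketch describing the same little-endian (int32, float32) record three different ways (NumPy is of course an optional third-party package, and the field names are made up):

    import struct
    import ctypes
    import numpy

    fmt = "<if"                       # struct: little-endian int32 + float32
    assert struct.calcsize(fmt) == 8

    class Record(ctypes.Structure):   # ctypes: same layout, aimed at C interop
        _pack_ = 1
        _fields_ = [("x", ctypes.c_int32), ("y", ctypes.c_float)]
    assert ctypes.sizeof(Record) == 8

    dt = numpy.dtype([("x", "<i4"), ("y", "<f4")])  # NumPy: same layout for array elements
    assert dt.itemsize == 8
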
-Travis Oliphant From rasky at develer.com Wed Apr 19 02:12:38 2006 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 19 Apr 2006 02:12:38 +0200 Subject: [Python-Dev] setuptools in the stdlib References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com><5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <200604181908.07664.fdrake@acm.org> Message-ID: <097201c66345$f7c31b90$2452fea9@bagio> Fred L. Drake, Jr. <fdrake at acm.org> wrote: > > So, I'm not too pleased by insinuations that setuptools is > anything other > than a Python community project. > > I've no doubt about that at all, FWIW. I think you've put a lot of > effort into discussing it with the community, and applaud you for > that as well as your implementation efforts. I agree but I have a question for Phil though: why can't many of the setuptools feature be simply integrated within the existing distutils? I have been fighting with distutils quite some time and have had to monkey-patch it somehow to fit my needs. I later discovered that setuptools included many of those fixes already (let alone the new features). I actually welcome all those setuptools fixes in the "Just Works(TM)" principle with which I totally agree. But, why can't setuptools be gradually merged into distutils, instead of being kept as a separate package? Let's take a real example: setuptools' sdist is much enhanced, has integration with CVS/SVN, uses MANIFEST in a way that it really works, etc. Why can't it be merged into the original distutils? Is it just for backward compatibility? If so, can't we have some kind of versioning system? Giovanni Bajo From pje at telecommunity.com Wed Apr 19 02:53:53 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 18 Apr 2006 20:53:53 -0400 Subject: [Python-Dev] setuptools in the stdlib In-Reply-To: <097201c66345$f7c31b90$2452fea9@bagio> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <200604181908.07664.fdrake@acm.org> Message-ID: <5.1.1.6.0.20060418204337.0596e1a0@mail.telecommunity.com> At 02:12 AM 4/19/2006 +0200, Giovanni Bajo wrote: >But, why can't setuptools be gradually merged into distutils, instead of being >kept as a separate package? Let's take a real example: setuptools' sdist is >much enhanced, has integration with CVS/SVN, uses MANIFEST in a way that it >really works, etc. Why can't it be merged into the original distutils? Is it >just for backward compatibility? This specific issue was discussed last year on the distutils-sig, and the issue is indeed one of compatibility. Setuptools' behavior for MANIFEST generation definitely matches new or infrequent users' expectations 1000% better than the distutils, and requires much less work to get right, even for experts. But for anybody who has extended the distutils using external tools, it would not necessarily work. MAL gave the example of someone who has written other scripts or Makefile rules to add things to MANIFEST or use it to do other things. They might be relying on quirks of the existing behaviors, in other words, and thus it should not be changed without explicit action on their part. And I agree with his reasoning, although I also think that any "distutils 2" should have only One Obvious Way to send input to that process, and it should be via MANIFEST.in, not MANIFEST. Likewise, it should have only one way to get the output. 
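(For instance, a minimal MANIFEST.in along these lines -- file and directory names made up -- would be the only input a user has to touch:

    include README.txt CHANGES.txt
    recursive-include docs *.txt
    recursive-include examplepackage *.py
    prune build

and the generated MANIFEST would then be purely an output artifact.)
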
However, unless somebody explicitly chooses to use "distutils 2", they should get the old behavior. This unfortunately means that we can't backport most of setuptools' enhancements to the existing distutils without breaking backward compatibility for people who may have made extensive investment in integrating with the distutils. (Of course, how many of these people exist I don't know; in my personal experience it seems rare for people to integrate with external tools in this fashion, versus simply subclassing things in Python or abandoning distutils altogether. But that's a separate question.) >If so, can't we have some kind of versioning >system? We do: "import setuptools". We could perhaps rename it to "import distutils2" if you prefer, but it would mean essentially the same thing. :) From nnorwitz at gmail.com Wed Apr 19 06:10:20 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 18 Apr 2006 21:10:20 -0700 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) Message-ID: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> On 4/18/06, M.-A. Lemburg <mal at egenix.com> wrote: > Phillip J. Eby wrote: > > As for discussion, Guido originally brought up the question here a few > > months ago, and it's been listed in PEP 356 for a while. I've also > > posted things related to the inclusion both here and in distutils-sig. > > I know, but the discussions haven't really helped much in > getting the setuptools design compatible with standard > distutils. I'm glad to see the discussions taking place; better late than never. However, I think we need to do a better job raising objections earlier. I'm not sure how to do this, but one way is to use PEPs. There is an outstanding issues section in the 2.5 release PEP 356. In this case, perhaps it would have been good to add a bullet item there. I've been trying to ensure the issues aren't lost. There's only one item in the list that still needs addressing (Fred, you listening? You had an idea for solving the problem). I plan to start a 2.6 release schedule PEP soon (before 2.5 is released). It will mostly be a place holder/boilerplate until it can be filled out when a major feature is implemented, a PEP accepted, or outstanding issue raised. The PEP should be updated by a committer when any of those occur. If anyone has better ideas how to ensure the issues aren't lost and are addressed, I'd like to know. n From greg.ewing at canterbury.ac.nz Wed Apr 19 06:45:23 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 19 Apr 2006 16:45:23 +1200 Subject: [Python-Dev] a flattening operator? In-Reply-To: <20060418145333.A810.JCARLSON@uci.edu> References: <1d85506f0604181404v4a330b10j95063098662db105@mail.gmail.com> <20060418145333.A810.JCARLSON@uci.edu> Message-ID: <4445C063.3090703@canterbury.ac.nz> Josiah Carlson wrote: > One major problem with this is that except for function calls, * is the > multiplication operator, which operates on two arguments. *foo is an > operation on a single argument, and without parenthesis, would be > ambiguously parsed. No, it wouldn't. There's no problem in giving an operator different unary and binary meanings; '-' already does that. 
-- Greg From anthony at interlink.com.au Wed Apr 19 06:57:19 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 19 Apr 2006 14:57:19 +1000 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> Message-ID: <200604191457.24195.anthony@interlink.com.au> I'm not sure how people would prefer this be handled. I don't think we need to have a PEP for it - I don't see PEPs for ctypes, elementtree, pysqlite or cProfile, either. I don't have a problem at all with setuptools going into the standard library. It adds a whole pile of extremely useful functionality (easy_install, in particular, is something that people have been asking for, constantly, for YEARS). Making it an additional install is just silly. Sure, it's possible that some people with extremely complicated distutils scripts may find they need to update them. But the alternative to that is complete paralysis - and I can't say that the current state of distutils is at all something to make Python happy. I started refactoring some of the ugliness out of the internals of distutils last year, but was completely stymied by the demand that no existing setup.py scripts be broken. This means that the people who are experts with the current code are fine, but everyone else has to pay the price. From nnorwitz at gmail.com Wed Apr 19 07:14:38 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 18 Apr 2006 22:14:38 -0700 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> Message-ID: <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.com> On 4/18/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 11:55 PM 4/18/2006 +0200, Fredrik Lundh wrote: > >who decided that setuptools should be added to 2.5, btw? > > Guido proposed it on Python-dev when the 2.5 schedule was first being > discussed. I discussed it with him off-list, ... I thought more was discussed on-list, but apparently not. I searched my mail and could find no direct discussion on python-dev. I saw quite a few references to setuptools being included in 2.5, but nothing explicit. That's unfortunate. I think I talked to Guido and probably Anthony off-list to see if they objected to setuptools in 2.5. My only mail to Phillip was to see if it was going in. > >it's still listed under "possible additions" in the release PEP, > > I imagine that might be why nobody raised any objections sooner, although I > understood the possibility to mean "if nobody objects". :) I was also working under the assumption that people would complain if they didn't like something. What do people think should happen for the "Possible features" section? Should I ask if there are any objections to each item? In the current list of possible features, I think the only new features that stand a chance in 2.5 are: wsgiref and python pgen. Ronald is working on the fat Mac binaries, it's only in this section because it's not complete, but anticipated. 
All the other things are small cleanups (icons, Demo, file/open) or things that have to happen (ssize_t cleanup). @decorator and functools are possible, but no one is doing anything about them, so I suspect they will not go in unless someone cares enough to do the work and Guido agrees. Since he's still on vacation, these won't be decided for another week or so. Probably these will be decided after a2 goes out (hopefully around the 25th). If you have any questions about the status of 2.5, speak up. n From barry at python.org Wed Apr 19 07:33:10 2006 From: barry at python.org (Barry Warsaw) Date: Wed, 19 Apr 2006 01:33:10 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <200604191457.24195.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <1145424790.21740.86.camel@geddy.wooz.org> On Wed, 2006-04-19 at 14:57 +1000, Anthony Baxter wrote: > I'm not sure how people would prefer this be handled. I don't think we > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > pysqlite or cProfile, either. Agreed. If modules like these have a solid history of use outside the stdlib I don't think we need all the formality of a PEP to pull them in. I /do/ think however that we need to be diligent in documenting them so that people who don't follow python-dev (or the packages own development forums) will become aware of what they are and how to use them. Correct me if I'm wrong, but I don't think any of the above are currently documented in the stdlib. > I don't have a problem at all with setuptools going into the standard > library. It adds a whole pile of extremely useful functionality > (easy_install, in particular, is something that people have been > asking for, constantly, for YEARS). Making it an additional install > is just silly. I agree. My one stupid nit is that I don't like the name 'easy_install'. I wish a better, non-underscored word could be found. But as I've been a total bystander in setuptools development, I have no real standing to complain. ;) > I started refactoring some of the ugliness out of the internals of > distutils last year, but was completely stymied by the demand that no > existing setup.py scripts be broken. This means that the people who > are experts with the current code are fine, but everyone else has to > pay the price. I've written some nasty setup.py scripts and I for one would be all for breaking them and rewriting them if they could be done much more simply and were better integrated with external tools. Heck, I wouldn't even mind a big ol' "if sys.hexversion" at the top of them for backward compatibility for a while if necessary. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060419/707516ca/attachment.pgp From greg.ewing at canterbury.ac.nz Wed Apr 19 07:02:32 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 19 Apr 2006 17:02:32 +1200 Subject: [Python-Dev] adding Construct to the standard library? 
In-Reply-To: <e23v44$fin$1@sea.gmane.org> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> <094301c66343$1d4e57b0$2452fea9@bagio> <e23v44$fin$1@sea.gmane.org> Message-ID: <4445C468.5010708@canterbury.ac.nz> Travis Oliphant wrote: > For what it's worth, NumPy also defines a data-type object which it > uses to describe the fundamental data-type of an array. In the context > of this thread it is also yet another way to describe a binary-packed > structure in Python. Maybe there should be a separate module providing a data-packing facility that ctypes, NumPy, etc. can all use (perhaps with their own domain-specific extensions to it). It does seem rather silly to have about 3 or 4 different incompatible ways to do almost exactly the same thing (struct, ctypes, NumPy and now Construct). -- Greg From pje at telecommunity.com Wed Apr 19 07:48:53 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 01:48:53 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <200604191457.24195.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> Message-ID: <5.1.1.6.0.20060419012111.04033c48@mail.telecommunity.com> At 02:57 PM 4/19/2006 +1000, Anthony Baxter wrote: >Sure, it's possible that some people with extremely complicated >distutils scripts may find they need to update them. ...if and *only* if they want setuptools' features, or their users do. Sorry to seize on this point out of context, Anthony. I just want to prevent anybody from seizing on this as an objection on "backward compatibility" grounds. Nobody will be forced to use setuptools in 2.5, any more than anyone will be forced to use ctypes or elementtree or sqlite. From nnorwitz at gmail.com Wed Apr 19 07:55:32 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 18 Apr 2006 22:55:32 -0700 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <1145424790.21740.86.camel@geddy.wooz.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> Message-ID: <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> On 4/18/06, Barry Warsaw <barry at python.org> wrote: > On Wed, 2006-04-19 at 14:57 +1000, Anthony Baxter wrote: > > I'm not sure how people would prefer this be handled. I don't think we > > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > > pysqlite or cProfile, either. > > Correct > me if I'm wrong, but I don't think any of the above are currently > documented in the stdlib. Ok, I will. :-) cProfile is documented in libprofile.tex. None of the others are documented, also msilib doesn't appear to have any doc. I've updated the PEP to mention these issues. Thanks for pointing it out! Note that there is probably a bunch of doc, but it all needs to be incorporated into the python repo and format. What are the doc plans for these modules: + * ctypes + * ElementTree/cElementTree + * msilib + * pysqlite n From pje at telecommunity.com Wed Apr 19 07:59:17 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Wed, 19 Apr 2006 01:59:17 -0400 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.co m> References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419012817.04030308@mail.telecommunity.com> At 10:14 PM 4/18/2006 -0700, Neal Norwitz wrote: >On 4/18/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 11:55 PM 4/18/2006 +0200, Fredrik Lundh wrote: > > >who decided that setuptools should be added to 2.5, btw? > > > > Guido proposed it on Python-dev when the 2.5 schedule was first being > > discussed. I discussed it with him off-list, ... > >I thought more was discussed on-list, but apparently not. I searched >my mail and could find no direct discussion on python-dev. I saw >quite a few references to setuptools being included in 2.5, but >nothing explicit. That's unfortunate. Here are the threads that Guido started; the longer one includes a number of discussions about setuptools-related features. MvL raised an objection to the whole idea of a Python-specific packaging format, which I responded to, and there was some other discussion about the relative utility of setuptools' approach versus learning (and building) a half-dozen bdist_* formats for different platforms: http://mail.python.org/pipermail/python-dev/2006-February/060723.html http://mail.python.org/pipermail/python-dev/2006-February/060869.html >I was also working under the assumption that people would complain if >they didn't like something. What do people think should happen for >the "Possible features" section? Should I ask if there are any >objections to each item? Well, Guido asked about including setuptools, and it doesn't seem to have done any good in this case. :) I'm not sure how much more explicit you can get. Yes, he called it setuplib in the first thread, but Georg knew what he meant anyway, and then Guido corrected himself in the second thread -- which also had a title that should've caught the eye of anybody interested in distutils-related things. (bdist_* to stdlib.) I was surprised that MAL didn't comment *then*, actually, and mistakenly thought it meant that our last discussion on the distutils-sig (and my attempts to deal with the problems) had been successful. Between that and MvL's mild response to the explicit discussion of supporting setuptools, I thought their votes had effectively moved from -1 to -0. Off-list discussion with Fredrik suggested that he too had shifted from -1 to -0, and since those were the only core developers that I knew of who had ever said anything the least bit negative about setuptools, I assumed this meant that Guido's motion had carried, so to speak. I mention this mainly to clarify that this was not some attempt by me to slip setuptools in past opposition; I genuinely thought the objectors had gone from "don't bring that stuff anywhere near me" to "I don't like it but other people seem to, so whatever." From pje at telecommunity.com Wed Apr 19 08:06:40 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Wed, 19 Apr 2006 02:06:40 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <1145424790.21740.86.camel@geddy.wooz.org> References: <200604191457.24195.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <5.1.1.6.0.20060419015931.01e1ca20@mail.telecommunity.com> At 01:33 AM 4/19/2006 -0400, Barry Warsaw wrote: >On Wed, 2006-04-19 at 14:57 +1000, Anthony Baxter wrote: > > I'm not sure how people would prefer this be handled. I don't think we > > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > > pysqlite or cProfile, either. > >Agreed. If modules like these have a solid history of use outside the >stdlib I don't think we need all the formality of a PEP to pull them in. >I /do/ think however that we need to be diligent in documenting them so >that people who don't follow python-dev (or the packages own development >forums) will become aware of what they are and how to use them. Correct >me if I'm wrong, but I don't think any of the above are currently >documented in the stdlib. I thought that ctypes doc had been added, but I guess they're still in-progress. The setuptools docs are definitely on my plan for conversion to Pythondoc format, as per my earlier post today asking where they should go in the overall doc layout. > > I don't have a problem at all with setuptools going into the standard > > library. It adds a whole pile of extremely useful functionality > > (easy_install, in particular, is something that people have been > > asking for, constantly, for YEARS). Making it an additional install > > is just silly. > >I agree. My one stupid nit is that I don't like the name >'easy_install'. I wish a better, non-underscored word could be found. The long term plan is for a tool called "nest" to be offered, which will offer a command-line interface similar to that of the "yum" package manager, with commands to list, uninstall, upgrade, and perform other management functions on installed packages. It's not likely to be available in time for the Python 2.5, release, but when it *is* available you'll just "python -m easy_install --upgrade setuptools" to get it. :) From rasky at develer.com Wed Apr 19 08:11:43 2006 From: rasky at develer.com (Giovanni Bajo) Date: Wed, 19 Apr 2006 08:11:43 +0200 Subject: [Python-Dev] setuptools in the stdlib References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <200604181908.07664.fdrake@acm.org> <5.1.1.6.0.20060418204337.0596e1a0@mail.telecommunity.com> Message-ID: <0a7c01c66378$21c17630$2452fea9@bagio> Phillip J. Eby <pje at telecommunity.com> wrote: >> If so, can't we have some kind of versioning >> system? > > We do: "import setuptools". We could perhaps rename it to "import > distutils2" if you prefer, but it would mean essentially the same > thing. :) I believe the naming is important, though. I'd rather it be called distutils2, or "from distutils.core import setup2" or something like that. setuptools *is* a new version of distutils, so it shouldn't have a different name. Then, about new commands. Why should I need to do "import distutils2" to do, eg, "setup.py develop"? This doesn't break backward compatibility. 
Giovanni Bajo From walter at livinglogic.de Wed Apr 19 08:22:13 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Wed, 19 Apr 2006 08:22:13 +0200 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <200604191457.24195.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <4445D715.2040409@livinglogic.de> Anthony Baxter wrote: > I'm not sure how people would prefer this be handled. I don't think we > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > pysqlite or cProfile, either. If I'm not calling shared libraries from Python I can ignore ctypes. If I'm not doing XML, I can ignore elementtree. If I'm not doing SQL I can ignore pysqlite and if I'm not interested in profiling I can ignore cProfile. But setuptools will potentially affect anyone that uses third-party modules/packages. And ctypes, elementtree and pysqlite are mature packages. setuptools isn't even finished yet. > I don't have a problem at all with setuptools going into the standard > library. It adds a whole pile of extremely useful functionality > (easy_install, in particular, is something that people have been > asking for, constantly, for YEARS). Making it an additional install > is just silly > > Sure, it's possible that some people with extremely complicated > distutils scripts may find they need to update them. Wouldn't I need at least have to change "from distutils.core import setup" to "from setuptools import setup"? Or to something like: try: import ez_setup except ImportError: import distutils.core as tools else: ez_setup.use_setuptools() import setuptools as tools for backwards compatibility reasons? > But the > alternative to that is complete paralysis - and I can't say that the > current state of distutils is at all something to make Python happy. > > I started refactoring some of the ugliness out of the internals of > distutils last year, but was completely stymied by the demand that no > existing setup.py scripts be broken. This means that the people who > are experts with the current code are fine, but everyone else has to > pay the price. Servus, Walter From pje at telecommunity.com Wed Apr 19 08:27:06 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 02:27:06 -0400 Subject: [Python-Dev] setuptools in the stdlib In-Reply-To: <0a7c01c66378$21c17630$2452fea9@bagio> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <200604181908.07664.fdrake@acm.org> <5.1.1.6.0.20060418204337.0596e1a0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419022418.0417eef0@mail.telecommunity.com> At 08:11 AM 4/19/2006 +0200, Giovanni Bajo wrote: >Then, about new commands. Why should I need to do "import distutils2" to do, >eg, "setup.py develop"? This doesn't break backward compatibility. The develop command uses the egg_info command. egg_info uses the setuptools enhanced MANIFEST scheme. Both make use of extended setup() arguments, and the entry points feature that allows distutils plugins to co-operate. Develop also uses easy_install... and so on. I'm not saying it would be impossible to merge this stuff into the distutils, just that it's not a trivial undertaking. 
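For a sense of what those extended setup() arguments look like in practice, here is a minimal sketch of a setuptools-based setup.py (all project, module and dependency names below are made up):

    from setuptools import setup, find_packages

    setup(
        name="ExampleProject",                      # hypothetical project name
        version="0.1",
        packages=find_packages(),
        install_requires=["SomeDependency>=1.0"],   # hypothetical dependency
        entry_points={
            "console_scripts": [
                # hypothetical script -> callable mapping
                "example-tool = exampleproject.main:main",
            ],
        },
    )
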
From greg.ewing at canterbury.ac.nz Wed Apr 19 08:26:22 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 19 Apr 2006 18:26:22 +1200 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <200604191457.24195.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <4445D80E.8050008@canterbury.ac.nz> Anthony Baxter wrote: > I started refactoring some of the ugliness out of the internals of > distutils last year, but was completely stymied by the demand that no > existing setup.py scripts be broken. Instead of trying to fix distutils, maybe it would be better to start afresh with a new package and a new name, then there wouldn't be any backwards-compatibility issues to hold it back. I'd like to see a different approach taken to the design altogether, something more along the lines of Scons. Maybe it could even be an extension of Scons. -- Greg From fredrik at pythonware.com Wed Apr 19 08:26:27 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 08:26:27 +0200 Subject: [Python-Dev] setuptools in the stdlib ( r45510 -python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) References: <20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><444537BC.7030702@egenix.com><5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com><e23n9h$pqi$1@sea.gmane.org><5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.com> Message-ID: <e24l6l$6ei$1@sea.gmane.org> Neal Norwitz wrote: > I was also working under the assumption that people would complain if > they didn't like something. What do people think should happen for > the "Possible features" section? Should I ask if there are any > objections to each item? some discussion on python-dev for each non-trivial item should be required. larger items may need discussion + grace period + discussion before they've checked in. the increasing amount of "but I've discussed this in some other forum" worries me. </F> From fredrik at pythonware.com Wed Apr 19 08:38:29 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 08:38:29 +0200 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com><20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><444537BC.7030702@egenix.com><5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com><e23n9h$pqi$1@sea.gmane.org><5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.co m> <5.1.1.6.0.20060419012817.04030308@mail.telecommunity.com> Message-ID: <e24lt6$831$1@sea.gmane.org> Phillip J. Eby wrote: > I was surprised that MAL didn't comment *then*, actually, and mistakenly > thought it meant that our last discussion on the distutils-sig (and my > attempts to deal with the problems) had been successful. Between that and > MvL's mild response to the explicit discussion of supporting setuptools, I > thought their votes had effectively moved from -1 to -0. 
Off-list > discussion with Fredrik suggested that he too had shifted from -1 to -0, I'm +1 on adding stuff to distutils (and other install tools) to make it *easier* for setuptools (and other install tools) to make a good job. I'm -1 on adding tools to the core that changes the structure of an installed Python system, without a full PEP process. If nobody can point to (or produce) a technical document that, in full detail, describes the mechanisms *used* by setuptools, including what files it creates, what the files contain, how they are used during import, how non-setuptools code can manipulate (or at least in- spect) the data, etc, setuptools should *not* go into 2.5. </F> From anthony at interlink.com.au Wed Apr 19 08:37:55 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 19 Apr 2006 16:37:55 +1000 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <4445D715.2040409@livinglogic.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D715.2040409@livinglogic.de> Message-ID: <200604191637.57477.anthony@interlink.com.au> On Wednesday 19 April 2006 16:22, Walter D?rwald wrote: > If I'm not calling shared libraries from Python I can ignore > ctypes. If I'm not doing XML, I can ignore elementtree. If I'm not > doing SQL I can ignore pysqlite and if I'm not interested in > profiling I can ignore cProfile. But setuptools will potentially > affect anyone that uses third-party modules/packages. Sure. It might mean people can automatically install something like TurboGears and all it's dependencies, without stuffing their existing required versions (thanks to eggs) and without having to fetch 16 different packages. Oh, the horror! <wink> > And ctypes, elementtree and pysqlite are mature packages. > setuptools isn't even finished yet. Neither is distutils. setuptools at least is likely to be "finished", whatever that means. > Wouldn't I need at least have to change "from distutils.core import > setup" to "from setuptools import setup"? Or to something like: Nope, only if you want to use the new, nicer functionality. If you want to stick with the status quo, you're quite welcome to. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From pje at telecommunity.com Wed Apr 19 08:51:18 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 02:51:18 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <4445D715.2040409@livinglogic.de> References: <200604191457.24195.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <5.1.1.6.0.20060419024005.0402eb08@mail.telecommunity.com> At 08:22 AM 4/19/2006 +0200, Walter D?rwald wrote: >Anthony Baxter wrote: > > > I'm not sure how people would prefer this be handled. I don't think we > > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > > pysqlite or cProfile, either. > >If I'm not calling shared libraries from Python I can ignore ctypes. If >I'm not doing XML, I can ignore elementtree. If I'm not doing SQL I can >ignore pysqlite and if I'm not interested in profiling I can ignore >cProfile. And if you're not using setuptools, or any packages that do, you can ignore it as well. >But setuptools will potentially affect anyone that uses >third-party modules/packages. 
Setuptools already exists, so it already affects "anyone that uses third-party modules/packages". That genie isn't going back in the metaphorical bottle. However, if setuptools is in the stdlib, it means that those people won't have to install setuptools *first* in order to use a package that's distributed using setuptools. Thus, having it in the stdlib arguably *reduces* the impact of setuptools on such users. > > Sure, it's possible that some people with extremely complicated > > distutils scripts may find they need to update them. > >Wouldn't I need at least have to change "from distutils.core import >setup" to "from setuptools import setup"? Or to something like: > >try: > import ez_setup >except ImportError: > import distutils.core as tools >else: > ez_setup.use_setuptools() > import setuptools as tools > >for backwards compatibility reasons? If, and *only* if, you want to use setuptools, or your users hound you enough to make you. :) Nobody is *forced* to use setuptools for a package they are distributing, any more than they are forced to use ctypes. The "setuptools takes over everything" argument is purest FUD and nonsense. Developers use setuptools because they want the features, or because their users want them to. It's true that sometimes users demand setuptools support, when all they really want is the ability to build eggs. For example, some folks were bugging Greg Ewing to add it to Pyrex, and I had to point out that as long as Pyrex has a PyPI entry and a vanilla setup.py, there is no need for him to start using setuptools. So he created a PyPI entry, making it now possible for easy_install users to do this: $ python2.5 -m easy_install Pyrex Searching for Pyrex Reading http://www.python.org/pypi/Pyrex/ Reading http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/ Best match: Pyrex 0.9.4 Downloading http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/Pyrex-0.9.4.tar.gz Processing Pyrex-0.9.4.tar.gz ... zip_safe flag not set; analyzing archive contents... ... Adding Pyrex 0.9.4 to easy-install.pth file Installing pyrexc script to /home/pje/bin Installed /home/pje/lib/python2.5/site-packages/Pyrex-0.9.4-py2.5.egg Processing dependencies for Pyrex $ So, nobody was forced to import setuptools in order to offer any features to end users. Setuptools is perfectly capable of running setup scripts and making them into eggs on its own, within reasonable levels of distutils customization. From jcarlson at uci.edu Wed Apr 19 08:52:19 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Tue, 18 Apr 2006 23:52:19 -0700 Subject: [Python-Dev] a flattening operator? In-Reply-To: <4445C063.3090703@canterbury.ac.nz> References: <20060418145333.A810.JCARLSON@uci.edu> <4445C063.3090703@canterbury.ac.nz> Message-ID: <20060418235056.A826.JCARLSON@uci.edu> Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > Josiah Carlson wrote: > > > One major problem with this is that except for function calls, * is the > > multiplication operator, which operates on two arguments. *foo is an > > operation on a single argument, and without parenthesis, would be > > ambiguously parsed. > > No, it wouldn't. There's no problem in giving an operator > different unary and binary meanings; '-' already does > that. I stand corrected. Though the look of 'print *e' bothers me. 
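(fwiw, inside a call the existing star already covers most of what I'd want; it's only outside call syntax that new grammar would be needed. A quick sketch, with a made-up helper:

def show(*args):
    print args

e = (1, 2, 3)
show(*e)        # legal today, prints (1, 2, 3)
# print *e      # this spelling is the part that would need the new unary operator

)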
- Josiah From fredrik at pythonware.com Wed Apr 19 08:51:26 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 08:51:26 +0200 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) References: <200604191457.24195.anthony@interlink.com.au><ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com><200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <5.1.1.6.0.20060419015931.01e1ca20@mail.telecommunity.com> Message-ID: <e24mlh$a3f$1@sea.gmane.org> Phillip J. Eby wrote: > The long term plan is for a tool called "nest" to be offered, which will > offer a command-line interface similar to that of the "yum" package > manager, with commands to list, uninstall, upgrade, and perform other > management functions on installed packages. yum already exists, of course. along with many other package managers. do you expect linux and bsd packagers to switch to your stuff for all their python needs, or are you building a parallel universe ? if so, why ? </F> From pje at telecommunity.com Wed Apr 19 09:10:04 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 03:10:04 -0400 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <e24lt6$831$1@sea.gmane.org> References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.co m> <5.1.1.6.0.20060419012817.04030308@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419025250.041f1120@mail.telecommunity.com> At 08:38 AM 4/19/2006 +0200, Fredrik Lundh wrote: >I'm -1 on adding tools to the core that changes the structure of an installed >Python system, without a full PEP process. If nobody can point to (or >produce) >a technical document that, in full detail, describes the mechanisms *used* by >setuptools, including what files it creates, what the files contain, how >they are >used during import, how non-setuptools code can manipulate (or at least in- >spect) the data, etc, setuptools should *not* go into 2.5. And that is a mostly-specific, very fair, and completely reasonable objection. And I think a significant portion of it is answered by the existing documentation, at least with respect to the runtime. The pkg_resources API module includes all of the discovery, dependency resolution and introspection other facilities used by setuptools, and it does not depend on the rest of setuptools, which is directed primarily at building and installing eggs. A number of users have written simple Python scripts using the documented API in order to list installed packages and that sort of thing. The current API reference documentation is available at http://peak.telecommunity.com/DevCenter/PkgResources .) I do think that "changes the structure of an installed Python system" is rather vague, though. Python supports .pth files, for example, so is writing to a .pth file "changing the structure of an installed Python system"? Python supports installing modules in directories on PYTHONPATH or specifying zip files on it, so are these operations "changing structure"? 
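(To make the "simple Python scripts" remark above concrete: a listing script against the documented pkg_resources API is only a few lines -- the output format here is just illustration, not a fixed interface:

import pkg_resources

for dist in pkg_resources.working_set:
    # each entry is a Distribution object carrying the project's metadata
    print dist.project_name, dist.version, dist.location

)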
I ask not to argue, but to make sure I know what else it is that you want documented, so I can be specific about what is and isn't documented. My vague guess at the moment is that the only things that setuptools does which could be considered "changing the structure of an installed Python system" are: 1. If installing to a PYTHONPATH directory (instead of site-packages), it adds a special 'site.py' file so that .pth files are processed in PYTHONPATH directories. (Python does not normally process .pth files on PYTHONPATH, but this is necessary to support dynamic installation of packages without modifying PYTHONPATH itself. 'site' is hooked instead of 'sitecustomize' to avoid interfering with a user-defined 'sitecustomize'.) 2. If installing a package in "compatibility mode" (aka --single-version-externally-managed, or if --root was specified), it adds an .egg-info directory to hold setuptools-specific metadata. This will be named something like 'FooBar-1.0.egg-info', and placed in the target directory alongside the installed package. So if the package were installed as Lib/foobar, the egg-info would be stored in Lib/FooBar-1.0.egg-info/. From fredrik at pythonware.com Wed Apr 19 09:08:34 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 09:08:34 +0200 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <e24nlj$d4c$1@sea.gmane.org> Anthony Baxter wrote: > I'm not sure how people would prefer this be handled. I don't think we > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > pysqlite or cProfile, either. That's because they're all trivial building blocks, not all-consuming world views. Any programmer who's ever used a C compiler/linker, xml loader, or embedded database will understand how to use them, and what they do, right from the start. There's not a trace of "you don't need to under- stand this" or "won't anybody think of the newbies!" in any of them. They're also all relatively close to the simplest thing that could possibly work. setuptools, in contrast, appears to be an overengineered kitchen sink with tons of magic in-phillips-head-only features, all of which may have unforseen effects on all parts of your Python installation. I've skimmed the PEAK documentation, and all I find is bullet-point feature lists and endless lists of configuration options. It's like reading Microsoft documentation. </F> From walter at livinglogic.de Wed Apr 19 09:11:37 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Wed, 19 Apr 2006 09:11:37 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <200604191637.57477.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D715.2040409@livinglogic.de> <200604191637.57477.anthony@interlink.com.au> Message-ID: <4445E2A9.8020102@livinglogic.de> Anthony Baxter wrote: > On Wednesday 19 April 2006 16:22, Walter D?rwald wrote: >> If I'm not calling shared libraries from Python I can ignore >> ctypes. If I'm not doing XML, I can ignore elementtree. If I'm not >> doing SQL I can ignore pysqlite and if I'm not interested in >> profiling I can ignore cProfile. But setuptools will potentially >> affect anyone that uses third-party modules/packages. > > Sure. 
It might mean people can automatically install something like > TurboGears and all it's dependencies, without stuffing their existing > required versions (thanks to eggs) and without having to fetch 16 > different packages. Oh, the horror! <wink> I'm not saying that setuptools doesn't fix a real problem. It does! I'm just saying that for something that might have this far reaching consequences, the "why" and "how" should be documented somewhere. And this "somewhere" shouldn't be the depths of the distutils-SIG archives. >> And ctypes, elementtree and pysqlite are mature packages. >> setuptools isn't even finished yet. > > Neither is distutils. setuptools at least is likely to be "finished", > whatever that means. > >> Wouldn't I need at least have to change "from distutils.core import >> setup" to "from setuptools import setup"? Or to something like: > > Nope, only if you want to use the new, nicer functionality. If you > want to stick with the status quo, you're quite welcome to. After two failed attempts to use setuptools for my own projects, I'd still like to use it, but this seems difficult (maybe this has changed since then, so correct me if I'm wrong): I'd like to distribute my modules as modules inside the top-level ll package. With distutils I can put the package __init__.py into either a separate distributing archive or into the core distribution archive and ignore the warning given by setup.py for the other packages. With setuptools this doesn't work, because the package is distributed over multiple egg-directories. AFAICR setuptools has a solution for this, but only if the package __init__.py is empty (because setuptools generates it). But I'd like to put at least a useful docstring into this __init__.py. Servus, Walter From pje at telecommunity.com Wed Apr 19 09:20:33 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 03:20:33 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <e24mlh$a3f$1@sea.gmane.org> References: <200604191457.24195.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <5.1.1.6.0.20060419015931.01e1ca20@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419031029.0419ea68@mail.telecommunity.com> At 08:51 AM 4/19/2006 +0200, Fredrik Lundh wrote: >do you expect linux and bsd packagers to switch to your stuff for all their >python needs, Heck no, which is why setuptools tries hard to be compatible with bdist_* commands. As long as they use --root or --single-version-externally-managed, setuptools should play nice with them. >or are you building a parallel universe ? if so, why ? Many Python users are unable to use existing packaging systems - if for example they're not root, or their platform doesn't have one (Windows and Mac). People developing Python applications often need to switch between different versions of Python packages, and manage what packages are distributed with their application, while sometimes using bleeding edge versions. They can't use a system packaging tool either. People distributing Python libraries don't want to have to build packages for each and every Linux flavor, plus BSDs and other platforms. People creating extensible applications and frameworks (Zope, TurboGears, Chandler, Trac, ...) 
can't use system packagers to install and manage their plugins, or the libraries used by their plugins, especially if they're end-user applications or are run in shared hosting space. And there are probably other reasons that are escaping me at the moment. Mainly, though, setuptools offers developers relief from having to support myriad packaging systems in order to be able to depend on other Python packages. They specify their dependencies in terms of PyPI, not in terms of distribution X's name for a system package that installs that facility. That and plugin support are setuptools' core raison d'etre. From pje at telecommunity.com Wed Apr 19 09:34:23 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 03:34:23 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <e24nlj$d4c$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <5.1.1.6.0.20060419032558.0422a360@mail.telecommunity.com> At 09:08 AM 4/19/2006 +0200, Fredrik Lundh wrote: >I've skimmed >the PEAK documentation, and all I find is bullet-point feature lists and >endless lists of configuration options. It's like reading Microsoft >documentation. And I've read your email about the documentation, and all I find is hyperbole, whining, and a total absence of any specific criticism that could be used to *improve* the documentation. It's like reading an effbot rant. :) Perhaps you would care to at least share *which* piece of documentation you are referring to, perhaps by URL, since your description does not sound like any of the three setuptools-related manuals to me. (Unless of course you didn't bother to scroll past the table of contents or to click on any of those underlined "bullet points" in the tables of contents to go to the sections offering an explanation of the features in question.) From pje at telecommunity.com Wed Apr 19 09:43:04 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 03:43:04 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <4445E2A9.8020102@livinglogic.de> References: <200604191637.57477.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D715.2040409@livinglogic.de> <200604191637.57477.anthony@interlink.com.au> Message-ID: <5.1.1.6.0.20060419033509.0427e6b0@mail.telecommunity.com> At 09:11 AM 4/19/2006 +0200, Walter D?rwald wrote: > With setuptools this doesn't work, because the package is distributed > over multiple egg-directories. AFAICR setuptools has a solution for this, > but only if the package __init__.py is empty (because setuptools > generates it). But I'd like to put at least a useful docstring into this > __init__.py. Actually, I believe that recent versions of setuptools actually do support your use case. In fact, I seem to recall having emailed you once I got it to work with your scenario, and you told me you'd already decided not to use it. But it's late at night now and I'm tired and should stop answering these emails for now anyway. 
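For the archives, the spelling that's intended to cover the "one top-level package, several distributions" layout looks roughly like this -- a sketch only; the project name is a placeholder, and each distribution's ll/__init__.py has to contain nothing but the namespace declaration:

# setup.py of each distribution that installs modules under 'll'
from setuptools import setup, find_packages

setup(
    name='ll-something',            # placeholder name
    version='1.0',
    packages=find_packages(),
    namespace_packages=['ll'],
)

# ll/__init__.py shipped by each of those distributions
__import__('pkg_resources').declare_namespace(__name__)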
From gh at ghaering.de Wed Apr 19 10:54:09 2006 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Wed, 19 Apr 2006 10:54:09 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> Message-ID: <4445FAB1.7010901@ghaering.de> Neal Norwitz wrote: > What are the doc plans for these modules: > + * ctypes > + * ElementTree/cElementTree > + * msilib > + * pysqlite pysqlite: I've started on new module docs for the "sqlite3" module in the Python standard library, based on the text from the existing pysqlite reference manual. Progress is about 5 % perhaps, I spent the most time figuring out how the Python doc build process works. We should probably check my docs in soon even in a preliminary state, so they can be reviewed/improved. Speaking of which, what about SVN commit privileges for me? It's not a big problem to tunnel my stuff through Anthony or others, but I think this would save resources. -- Gerhard From theller at python.net Wed Apr 19 10:54:55 2006 From: theller at python.net (Thomas Heller) Date: Wed, 19 Apr 2006 10:54:55 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> Message-ID: <4445FADF.90307@python.net> Neal Norwitz wrote: > On 4/18/06, Barry Warsaw <barry at python.org> wrote: >> On Wed, 2006-04-19 at 14:57 +1000, Anthony Baxter wrote: >>> I'm not sure how people would prefer this be handled. I don't think we >>> need to have a PEP for it - I don't see PEPs for ctypes, elementtree, >>> pysqlite or cProfile, either. >> Correct >> me if I'm wrong, but I don't think any of the above are currently >> documented in the stdlib. > > Ok, I will. :-) cProfile is documented in libprofile.tex. > > None of the others are documented, also msilib doesn't appear to have > any doc. I've updated the PEP to mention these issues. Thanks for > pointing it out! > > Note that there is probably a bunch of doc, but it all needs to be > incorporated into the python repo and format. > > What are the doc plans for these modules: > + * ctypes > + * ElementTree/cElementTree > + * msilib > + * pysqlite > > n I'm now happy with the tool that converts the ctypes tutorial from reST to LaTeX, I will later (today or tomorrow) commit that into Python SVN. Thomas From mal at egenix.com Wed Apr 19 11:33:13 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 19 Apr 2006 11:33:13 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <444538C1.90608@egenix.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> <444538C1.90608@egenix.com> Message-ID: <444603D9.1060600@egenix.com> M.-A. Lemburg wrote: > Tim Peters wrote: >> [M.-A. Lemburg] >>> I could contribute pybench to the Tools/ directory if that >>> makes a difference: >> +1. It's frequently used and nice work. 
Besides, then we could >> easily fiddle the tests to make Python look better ;-) > > That's a good argument :-) > > Note that the tests are versioned and the tools refuses to > compare tests with different version numbers. If there are no objections, I'll do some pybench cleanup today and check it in. Regarding the procedure, I think I have to import it under external/ first and then copy it over to Tools/. Is that correct ? I'll also have to add some documentation. Question is: where ? -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 19 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From tomerfiliba at gmail.com Wed Apr 19 11:35:27 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Wed, 19 Apr 2006 11:35:27 +0200 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <094301c66343$1d4e57b0$2452fea9@bagio> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> <094301c66343$1d4e57b0$2452fea9@bagio> Message-ID: <1d85506f0604190235o520f9045he271a24945c2f960@mail.gmail.com> Giovanni Bajo: > Both ctypes and construct provide a way to describe a > binary-packed structure in Python terms: and this is an overload of > functionality so does struct, so why not just use struct? there's a receipe at the python cookbook that adds "naming ability" to fields, i.e. ">6s.destincation 6s.source H.type" something like that, so when you parse you get named attributes instead of a tuple. so why did ctypes wrote another mechanism? because you can't extend it and you can't nest it. but ctypes, just as well, provides the mechanisms for its requirements -- defining C structs, not arbitrarily complex structures. so of course it doesnt, and shouldnt, support variable-length fields or data pointers, which are common in file formats, protocols, and other complex data structures -- what you can't do with a C struct you don't need to do with ctypes. ---- now i'll save me a mail and put this also here: Greg Ewing: > It does seem rather silly to have about 3 or 4 > different incompatible ways to do almost exactly > the same thing (struct, ctypes, NumPy and now > Construct). * struct is designed for packing and unpacking, but is very limited * ctypes is not oriented at packing/unpacking, it only provided a mechanism to handle its requirements, which are domain specific and not general purpose. * i never checked how NumPy packs arrays, etc., but it's also domain-specific, as it's a math library, not a generic packer/unpacker. and trust me those are not the only ones. the reason people have to *reinvent the wheel* every time is the lack of a *generic* parsing/building library. (i prefer the term parsing over unpacking. check my blog for more details) yes, putting bytes together isn't too complicated, and because people don't have a built-in mechanism for that, they tend to just "oh, well, it can't be too complicated, i'll just write one for me", and this yields many flavors of packers/unpackers, all incompatible. 
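to make that concrete -- with plain struct you only ever get an anonymous tuple back, and naming/nesting/variable-length is all left to you (the header layout below is made up for illustration):

import struct

# a fake 14-byte ethernet-style header
raw = "\xaa\xbb\xcc\xdd\xee\xff" "\x11\x22\x33\x44\x55\x66" "\x08\x00"
dst, src, ethertype = struct.unpack(">6s6sH", raw)
# no field names, no nesting, and anything variable-length has to be
# sliced and unpacked by hand afterwards
print repr(dst), repr(src), hex(ethertype)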
Construct is the first library, that i'm aware of, that is dedicated to parsing/building, instead of doing it as a side-kick domain-specific mechanism. Construct is a *superset* of all those packers and unpackers, and had it been part of stdlib, people would have used it instead. of course it's only been released a month ago, and couldnt have been already included in the stdlib, i still think it has a room there. existing projects can be ported without too much effort, and new ones could benefit from it as well. -tomer On 4/19/06, Giovanni Bajo <rasky at develer.com> wrote: > > tomer filiba <tomerfiliba at gmail.com> wrote: > > > the point is -- ctypes can define C types. not the TCP/IP stack. > > Construct can do both. it's a superset of ctype's typing mechanism. > > but of course both have the right to *coexist* -- > > ctypes is oriented at interop with dlls, and provides the mechanisms > > needed for that. > > Construst is about data structures of all sorts and kinds. > > > > ctypes is a very helpful library as a builtin, and so is Construct. > > the two don't compete on a spot in the stdlib. > > > I don't agree. Both ctypes and construct provide a way to describe a > binary-packed structure in Python terms: and this is an overload of > functionality. When I first saw Construct, the thing that crossed my head > was: > "hey, yet another syntax to describe a binary-packed structure in Python". > ctypes uses its description to interoperate with native libraries, while > Construct uses its to interoperate with binary protocols. I didn't see a > good > reason why you shouldn't extend ctypes so to provide features that it is > currently missing. It looks like it could be easily extended to do so. > > Giovanni Bajo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060419/83e806af/attachment.html From walter at livinglogic.de Wed Apr 19 11:57:56 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Wed, 19 Apr 2006 11:57:56 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <5.1.1.6.0.20060419033509.0427e6b0@mail.telecommunity.com> References: <200604191637.57477.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D715.2040409@livinglogic.de> <200604191637.57477.anthony@interlink.com.au> <5.1.1.6.0.20060419033509.0427e6b0@mail.telecommunity.com> Message-ID: <444609A4.20903@livinglogic.de> Phillip J. Eby wrote: > At 09:11 AM 4/19/2006 +0200, Walter D?rwald wrote: >> With setuptools this doesn't work, because the package is distributed >> over multiple egg-directories. AFAICR setuptools has a solution for >> this, but only if the package __init__.py is empty (because setuptools >> generates it). But I'd like to put at least a useful docstring into >> this __init__.py. > > Actually, I believe that recent versions of setuptools actually do > support your use case. Yes (except for the "docstring in __init__.py" bit). > In fact, I seem to recall having emailed you > once I got it to work with your scenario, and you told me you'd already > decided not to use it. Yes, however I'm still thinking about changing my package structure to be able to use setuptools. > But it's late at night now and I'm tired and > should stop answering these emails for now anyway. 
Servus, Walter From ncoghlan at gmail.com Wed Apr 19 14:00:21 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 19 Apr 2006 22:00:21 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060418200131.GA12715@localhost.localdomain> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> Message-ID: <44462655.60603@gmail.com> A.M. Kuchling wrote: > On Tue, Apr 18, 2006 at 03:37:37PM -0400, Phillip J. Eby wrote: >> I was going to say, "so they can be context managers", but I suppose you >> have a point. There is no need for a context to have a __context__ method, >> unless it is also a context manager. Ugh. > > It would be easy to just remove the parenthetical comment from the PEP > and forget about it, if in fact the statement is now purposeless. But > these are murky waters, and maybe there's still some deeper reason for > it. Nick, do you have any comments? Aside from wondering "Did I even pretend to proofread that paragraph?"? The second occurrence of "context manager" is meant to say "context": This PEP proposes that the protocol used by the with statement be known as the "context management protocol", and that objects that implement that protocol be known as "context managers". The term "context" then encompasses all objects with a __context__() method that returns a context object. And the parenthetical comment was completely backwards and should have read: (This means that all context managers are contexts, but not all contexts are context managers). The reason for recommending that context managers should be contexts is similar to the reason that iterators should be iterables - so that doing the __context__() call manually will still give you something that can be used in a with statement. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From amk at amk.ca Wed Apr 19 14:39:52 2006 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 19 Apr 2006 08:39:52 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> Message-ID: <20060419123952.GB7584@localhost.localdomain> On Tue, Apr 18, 2006 at 09:10:20PM -0700, Neal Norwitz wrote: > There is an outstanding issues section in the 2.5 release PEP 356. In > this case, perhaps it would have been good to add a bullet item there. > I've been trying to ensure the issues aren't lost. There's only one > item in the list that still needs addressing... I've added another one: getting the SoC-funded new mailbox code into the stdlib. I trust someone will push back if they think that's a bad idea. --amk From amk at amk.ca Wed Apr 19 14:56:50 2006 From: amk at amk.ca (A.M. 
Kuchling) Date: Wed, 19 Apr 2006 08:56:50 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <4445FAB1.7010901@ghaering.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FAB1.7010901@ghaering.de> Message-ID: <20060419125650.GC7584@localhost.localdomain> On Wed, Apr 19, 2006 at 10:54:09AM +0200, Gerhard H?ring wrote: > We should probably check my docs in soon even in a preliminary state, so > they can be reviewed/improved. There's a group of volunteers who will help fix the LaTeX markup, so you certainly don't need to have everything working (or even learn much LaTeX) before the docs get committed. --amk From amk at amk.ca Wed Apr 19 15:00:25 2006 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 19 Apr 2006 09:00:25 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <44462655.60603@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> Message-ID: <20060419130025.GD7584@localhost.localdomain> On Wed, Apr 19, 2006 at 10:00:21PM +1000, Nick Coghlan wrote: > And the parenthetical comment was completely backwards and should have read: > > (This means that all context managers are contexts, but not all contexts > are > context managers). > > The reason for recommending that context managers should be contexts is > similar to the reason that iterators should be iterables - so that doing > the __context__() call manually will still give you something that can be > used in a with statement. So the intention is to enable something like: ctx = lock.__context__() with ctx: ... Which works as long as people don't try to use the context in two nested 'with' statements. Fair enough. I don't think I'll suggest doing this in the "What's New", though. :) --amk From ncoghlan at gmail.com Wed Apr 19 15:10:55 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 19 Apr 2006 23:10:55 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <44462655.60603@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> Message-ID: <444636DF.8090506@gmail.com> Nick Coghlan wrote: > The second occurrence of "context manager" is meant to say "context": > > This PEP proposes that the protocol used by the with statement be > known as the "context management protocol", and that objects that > implement that protocol be known as "context managers". The term > "context" then encompasses all objects with a __context__() > method that returns a context object. > > And the parenthetical comment was completely backwards and should have read: > > (This means that all context managers are contexts, but not all contexts are > context managers). > > The reason for recommending that context managers should be contexts is > similar to the reason that iterators should be iterables - so that doing the > __context__() call manually will still give you something that can be used in > a with statement. 
Ah, all is explained by svn blame, with a little help from svn log. When Phillip went through to make the terminology consistent he actually swapped the meanings of "context" (which meant 'has a __context__ method' in the original PEP) and "context manager" (which meant 'has __enter__ and __exit__ methods and a __context__ method that returns self' in the original PEP). I clearly wasn't paying attention when that diff went past on the checkins list, but I'd humbly request that we change it back :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From amk at amk.ca Wed Apr 19 16:21:05 2006 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 19 Apr 2006 10:21:05 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <444636DF.8090506@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> Message-ID: <20060419142105.GA32149@rogue.amk.ca> On Wed, Apr 19, 2006 at 11:10:55PM +1000, Nick Coghlan wrote: > When Phillip went through to make the terminology consistent he actually > swapped the meanings of "context" (which meant 'has a __context__ method' > in the original PEP) and "context manager" (which meant 'has __enter__ and > __exit__ methods and a __context__ method that returns self' in the > original PEP). Yes, that was exactly what he set out to do. I noted that the terms "context" and "context manager" seemed to be exactly reversed from what's intuitive, and from how the user-facing documentation is written; the post at <http://mail.python.org/pipermail/python-dev/2006-April/063842.html> contains my argument. GvR approved, and Phillip went ahead and made the change. --amk From ncoghlan at gmail.com Wed Apr 19 15:41:43 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 19 Apr 2006 23:41:43 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060419142105.GA32149@rogue.amk.ca> References: <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> Message-ID: <44463E17.5080001@gmail.com> A.M. Kuchling wrote: > On Wed, Apr 19, 2006 at 11:10:55PM +1000, Nick Coghlan wrote: >> When Phillip went through to make the terminology consistent he actually >> swapped the meanings of "context" (which meant 'has a __context__ method' >> in the original PEP) and "context manager" (which meant 'has __enter__ and >> __exit__ methods and a __context__ method that returns self' in the >> original PEP). > > Yes, that was exactly what he set out to do. I noted that the terms > "context" and "context manager" seemed to be exactly reversed from > what's intuitive, and from how the user-facing documentation is > written; the post at > <http://mail.python.org/pipermail/python-dev/2006-April/063842.html> > contains my argument. GvR approved, and Phillip went ahead and made > the change. I'm a little behind on python-dev *and* python-checkins, so I'd missed the first half of this discussion. 
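(A rough sketch of the two roles under the swapped terminology helps me keep them straight -- the class names are made up, and this is the draft protocol as described above, not anything in the current tree:

class HeldLock(object):
    # the "context": has __enter__/__exit__ and does the actual work
    def __init__(self, lock):
        self.lock = lock
    def __enter__(self):
        self.lock.acquire()
        return self.lock
    def __exit__(self, exc_type, exc_value, exc_tb):
        self.lock.release()

class LockManager(object):
    # the "context manager": has __context__, handing out a fresh
    # context each time it is used in a with statement
    def __init__(self, lock):
        self.lock = lock
    def __context__(self):
        return HeldLock(self.lock)

So "with LockManager(mylock):" asks the manager for a context, then drives that context's __enter__ and __exit__.)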
It'll take me a while to swap the two in my head, but I'll get used to it eventually. Phillip's point that calling __context__ should give you a context as the result is a good one, and "context manager" makes sense as "an object that can generate new contexts". Given that naming though, I think contextlib.contextmanager should be renamed to contextlib.context. The current name was assigned when objects with all 3 methods were the one's being called "context managers". With the terms being swapped, the name of the decorator should probably change, too. Chopping 7 characters off the name doesn't hurt, either :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From pje at telecommunity.com Wed Apr 19 17:30:00 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 11:30:00 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <444636DF.8090506@gmail.com> References: <44462655.60603@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> Message-ID: <5.1.1.6.0.20060419112620.01e27230@mail.telecommunity.com> At 11:10 PM 4/19/2006 +1000, Nick Coghlan wrote: >Ah, all is explained by svn blame, with a little help from svn log. > >When Phillip went through to make the terminology consistent he actually >swapped the meanings of "context" (which meant 'has a __context__ method' in >the original PEP) At least AMK and I thought that it was obvious that "context manager" means "has a context method", and it's the only way to have the PEP read consistently. Note that the decorator is called @contextmanager, not @context, for example. > and "context manager" (which meant 'has __enter__ and >__exit__ methods and a __context__ method that returns self' in the >original PEP). That's true - *before* there was a __context__ method. Now that there *is* a context method, it makes more sense to say that the thing that has it is the context manager, and the thing with the __enter__/__exit__ methods is the context. >I clearly wasn't paying attention when that diff went past on the checkins >list, but I'd humbly request that we change it back :) The change made the PEP self-consistent in its terminology, if you are encountering it for the first time, and don't already have it in your head that it's one way or the other. If we change it the other way, we'll have to rename @contextmanager. From pje at telecommunity.com Wed Apr 19 17:31:57 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 11:31:57 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <44463E17.5080001@gmail.com> References: <20060419142105.GA32149@rogue.amk.ca> <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> Message-ID: <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> At 11:41 PM 4/19/2006 +1000, Nick Coghlan wrote: >Given that naming though, I think contextlib.contextmanager should be renamed >to contextlib.context. 
The name is actually correct under this terminology arrangement. @contextmanager *returns* a context manager. It happens to also be a context, but that's not important. "With" takes a context manager expression. From barry at python.org Wed Apr 19 17:31:05 2006 From: barry at python.org (Barry Warsaw) Date: Wed, 19 Apr 2006 11:31:05 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060419015931.01e1ca20@mail.telecommunity.com> References: <200604191457.24195.anthony@interlink.com.au> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <5.1.1.6.0.20060419015931.01e1ca20@mail.telecommunity.com> Message-ID: <1145460665.21198.141.camel@resist.wooz.org> On Wed, 2006-04-19 at 02:06 -0400, Phillip J. Eby wrote: > > > >I agree. My one stupid nit is that I don't like the name > >'easy_install'. I wish a better, non-underscored word could be found. > > The long term plan is for a tool called "nest" to be offered, which will > offer a command-line interface similar to that of the "yum" package > manager, with commands to list, uninstall, upgrade, and perform other > management functions on installed packages. It's not likely to be > available in time for the Python 2.5, release, but when it *is* available > you'll just "python -m easy_install --upgrade setuptools" to get it. :) Cool. Sticking with the metaphor, where's the 'hatch' command and what would it do? :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060419/c1888725/attachment-0001.pgp From barry at python.org Wed Apr 19 17:43:49 2006 From: barry at python.org (Barry Warsaw) Date: Wed, 19 Apr 2006 11:43:49 -0400 Subject: [Python-Dev] Raising objections (was: setuptools in the stdlib) In-Reply-To: <4445D80E.8050008@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> Message-ID: <1145461429.21199.144.camel@resist.wooz.org> On Wed, 2006-04-19 at 18:26 +1200, Greg Ewing wrote: > I'd like to see a different approach taken to the design > altogether, something more along the lines of Scons. > Maybe it could even be an extension of Scons. As much as I like Scons, there's too much unpythonic magic going on there that I wouldn't want emulated. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060419/ed7ecce0/attachment.pgp From mal at egenix.com Wed Apr 19 19:17:17 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 19 Apr 2006 19:17:17 +0200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <444603D9.1060600@egenix.com> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> <444538C1.90608@egenix.com> <444603D9.1060600@egenix.com> Message-ID: <4446709D.9030507@egenix.com> M.-A. Lemburg wrote: > M.-A. Lemburg wrote: >> Tim Peters wrote: >>> [M.-A. Lemburg] >>>> I could contribute pybench to the Tools/ directory if that >>>> makes a difference: >>> +1. 
It's frequently used and nice work. Besides, then we could >>> easily fiddle the tests to make Python look better ;-) >> That's a good argument :-) >> >> Note that the tests are versioned and the tools refuses to >> compare tests with different version numbers. > > If there are no objections, I'll do some pybench cleanup today and > check it in. > > Regarding the procedure, I think I have to import it under external/ > first and then copy it over to Tools/. Is that correct ? It's in svn now and only an update away. > I'll also have to add some documentation. Question is: where ? FWIW: The docs are currently in the README file in Tools/pybench. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 19 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From martin at v.loewis.de Wed Apr 19 19:39:29 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 19 Apr 2006 19:39:29 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <4445FAB1.7010901@ghaering.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FAB1.7010901@ghaering.de> Message-ID: <444675D1.8050708@v.loewis.de> Gerhard H?ring wrote: > Speaking of which, what about SVN commit privileges for me? Send me your key, and I'll add you. I assume 'gerhard.haering' would work as the commit name? Regards, Martin From martin at v.loewis.de Wed Apr 19 19:46:22 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 19 Apr 2006 19:46:22 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <4445D80E.8050008@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> Message-ID: <4446776E.5090702@v.loewis.de> Greg Ewing wrote: >> I started refactoring some of the ugliness out of the internals of >> distutils last year, but was completely stymied by the demand that no >> existing setup.py scripts be broken. > > Instead of trying to fix distutils, maybe it would be better > to start afresh with a new package and a new name, then > there wouldn't be any backwards-compatibility issues to > hold it back. It is *precisely* my concern that this happens. For whatever reason, writing packaging-and-deployment software is totally unsexy. This is why setuptools is a one-man show, and this is why the original distutils authors ran away after they convinced everybody that distutils should be part of Python. If distutils is now abandoned and replaced with something else, the same story will happen again: the developers will run away, the package gets abandoned, and, after a few years of sadness, a new, smart developer will come along and provide a super replacement. And that will repeat in cycles of roughly 10 years. We have to stop this. If distutils has flaws, fix them. 
Never ever even think about rewriting software: http://www.joelonsoftware.com/articles/fog0000000069.html Regards, Martin From ronaldoussoren at mac.com Wed Apr 19 20:33:36 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 19 Apr 2006 20:33:36 +0200 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <BB756535-1BBF-4F20-86B9-185FC86A266A@mac.com> On 18-apr-2006, at 23:10, Phillip J. Eby wrote: > >> There aren't all that many things that are wrong in setuptools, >> but some of them are essential: >> >> * setuptools should not monkey patch distutils on import > > Please propose an alternative, or better still, produce a patch. > Be sure > that it still allows distutils extensions like py2exe to work. The > only > real alternative to monkeypatching that I'm aware of for that issue > is to > actually merge setuptools with the distutils, but my guess is that > you'll > like that even less. :) Have you considered such a merger? It's rather odd to include a package in the stdlib that monkeypatches another part of the stdlib. For the record: I like setuptools and eggs. Ronald From mal at egenix.com Wed Apr 19 20:45:45 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 19 Apr 2006 20:45:45 +0200 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> References: <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <44468559.3030208@egenix.com> [removing the python-checkins CC] Phillip J. Eby wrote: > At 09:02 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >> Phillip J. Eby wrote: >>> At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >>>> Why should a 3rd party extension be hot-fixing the standard >>>> Python distribution ? >>> Because setuptools installs things in zip files, and older versions of >>> pydoc don't work for packages zip files. >> That doesn't answer my question. > > That is the answer to the question you asked: "why hot-fix?" Because > setuptools uses zip files, and older pydocs crash when trying to display > the documentation for a package (not module; modules were fine) that is in > a zip file. I fail to see the relationship between setuptools and pydoc. Please don't follow this route - you are putting the integrity of the Python stdlib at risk. The setuptools distribution is not the authoritative source for these kinds of fixes and that should be made clear by separating those parts out into a different package and making the installation an explicit user option. You should also note that users won't benefit from bug fixed versions of e.g. such modules in patch level releases. 
>> Third-party extensions *should not do this* ! > > If you install setuptools, you presumably would like for things to work, > and the hot fix eliminates a bug that interferes with them working. > > I'm not sure, however, what you believe any of that has to do with > python-checkins or python-dev. The version of setuptools that will do this > is not yet released, and the hotfix aspect will be documented. If it > causes actual problems for actual setuptools users, it will be actually > fixed or actually removed. :) (Meanwhile, the separately-distributed > setuptools package is not a part of Python, any more than the Optik is, > despite Python including the 'optparse' package spun off from Optik.) It's an issue to discuss on python-dev because it touches things that python-dev folks develop. However, it's not related to the rest of this discussion, so I'm changing the subject line and will respond to the rest in a separate email. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 19 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From tomerfiliba at gmail.com Wed Apr 19 20:59:10 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Wed, 19 Apr 2006 20:59:10 +0200 Subject: [Python-Dev] bug with __dict__? In-Reply-To: <1d85506f0604191132n1c640301re80ebd886370a8f7@mail.gmail.com> References: <1d85506f0604191132n1c640301re80ebd886370a8f7@mail.gmail.com> Message-ID: <1d85506f0604191159g76f38843l8ec6b51fb4968f1c@mail.gmail.com> overriding __getattr__ and __setattr__ has several negative side effects, for example: * inside__getattr__/__setattr__ you have to use self.__dict__["attr"] instead of self.attr * it's easy to get stack overflow exceptions when you do something wrong * you must remember to call the super's [get/set]attr or the MRO is broken * when deriving from a class that overrides one of the speical methods, you usually need to override the special function of your class as well, to allow some "local storage" for yourself so i had an idea -- why not just replace __dict__? this does not affect the MRO. i wrote an AttrDict class, which is like dict, only it allows you to acces its keys as attributes. i later saw something like this on the python cookbook as well. class AttrDict(dict): def __init__(self, *a, **k): dict.__init__(self, *a, **k) self.__dict__ = self this code basically causes __getattr/setattr__ to use __getitem/setitem__: a =AttrDict() a["blah"] = 5 a.yadda = 6 print a.blah print a["yadda"] which is exactly what i wanted. now, i thought, instead of overriding __getattr/setattr__, i'd just write a class that overrides __getitem/setitem__. for example: # old way class A(object): def __getattr__(self, name): return 5 a = A() print a.xyz # 5 # new way class mydict(dict): def __getitem__(self, key): return 5 class A(object): def __init__(self): self.__dict__ = mydict() a = A() print a.xyz # should print 5 but lo and behold, python does not call my overriden method. it just bypasses it and always calls the original dict's method. i made several tests, trying to see if it calls __contains__, placed hooks on __getattribute__, but nothing from *my* overriden methods is ever called. 
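for example, this minimal snippet shows the asymmetry (same class names as above):

class mydict(dict):
    def __getitem__(self, key):
        return 5

class A(object):
    def __init__(self):
        self.__dict__ = mydict()

a = A()
print a.__dict__['xyz']      # 5 -- subscripting goes through my override
try:
    print a.xyz
except AttributeError:
    print "attribute lookup never called __getitem__"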
this is probably because the C implementation calls PyDict_GetItem or whatever directly... i think it's a bug. i should be able to override the __getitem__ of my __dict__. it breaks the polymorphism of python-level objects! on the one hand, i'm allowed to change __dict__ to any object that derives from dict, but i'm not allowed to override it's methods! python must either disable assigning to __dict__, or be willing to call overriden methods, not silently ignore mine. -tomer -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060419/67fac888/attachment.htm From pje at telecommunity.com Wed Apr 19 21:02:15 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 15:02:15 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <4446776E.5090702@v.loewis.de> References: <4445D80E.8050008@canterbury.ac.nz> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> Message-ID: <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> At 07:46 PM 4/19/2006 +0200, Martin v. L?wis wrote: >Greg Ewing wrote: > >> I started refactoring some of the ugliness out of the internals of > >> distutils last year, but was completely stymied by the demand that no > >> existing setup.py scripts be broken. > > > > Instead of trying to fix distutils, maybe it would be better > > to start afresh with a new package and a new name, then > > there wouldn't be any backwards-compatibility issues to > > hold it back. > >It is *precisely* my concern that this happens. For whatever reason, >writing packaging-and-deployment software is totally unsexy. I can tell you the reasons, no need to guess: 1. It's bloody hard work. Honestly, if I really knew what I was in for by doing this, I might not have started. That doesn't mean I'm going to *stop*, just that I'd have thought twice before starting. 2. Everybody thinks they can do it better, and isn't afraid to tell you so. 3. People complain that things didn't magically work the way they expected, but rarely provide any information about what they did or didn't do or how they think it should've figured out what they mean. 4. When it works, nobody notices or cares. When it doesn't work *once*, people blog that it's crap and should never be used -- but don't report the problem or ask for help, and don't change their judgment of "crap" even if the problem is fixed in a few days through somebody sending me mail about the blog article. > If distutils is now abandoned and replaced with >something else, the same story will happen again: the developers will >run away, the package gets abandoned, and, after a few years of sadness, >a new, smart developer will come along and provide a super replacement. >And that will repeat in cycles of roughly 10 years. > >We have to stop this. If distutils has flaws, fix them. I agree with you, which is why setuptools fixes distutils' flaws by subclassing where possible and monkeypatching only where necessary to ensure compatibility. (Only three classes are monkeypatched: Distribution, Command, and Extension.) It would be better to simply integrate those changes into the distutils in the long run, rather than maintaining two layers. The problem in the short term, however, is backward compatibility: sometimes people are relying on internals or implementation details of the existing distutils. 
I would suggest that setuptools be considered "distutils2" for all practical purposes, with the intention of being merged into the distutils proper for Py3K. Whether setuptools gets included in Python 2.5 or not, it should be in *some* 2.x release, and merge fully in 3.x. From pje at telecommunity.com Wed Apr 19 21:06:02 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 15:06:02 -0400 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <BB756535-1BBF-4F20-86B9-185FC86A266A@mac.com> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419150224.045c73e0@mail.telecommunity.com> At 08:33 PM 4/19/2006 +0200, Ronald Oussoren wrote: >Have you considered such a merger? It's rather odd to include a >package in >the stdlib that monkeypatches another part of the stdlib. I assumed that it would be more controversial to merge setuptools into distutils, than to provide it as an enhanced alternative. Also, there are two practical issues: 1. How to activate or deactivate backward compatibility for packages or people relying on intimate details of current distutils behaviors 2. Maintaining the existing version of setuptools to work with Python 2.3 and 2.4, which would not have this integration with the distutils. From pje at telecommunity.com Wed Apr 19 21:19:24 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 15:19:24 -0400 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <44468559.3030208@egenix.com> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> At 08:45 PM 4/19/2006 +0200, M.-A. Lemburg wrote: >Phillip J. Eby wrote: > > At 09:02 PM 4/18/2006 +0200, M.-A. Lemburg wrote: > >> Phillip J. Eby wrote: > >>> At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: > >>>> Why should a 3rd party extension be hot-fixing the standard > >>>> Python distribution ? > >>> Because setuptools installs things in zip files, and older versions of > >>> pydoc don't work for packages zip files. > >> That doesn't answer my question. > > > > That is the answer to the question you asked: "why hot-fix?" Because > > setuptools uses zip files, and older pydocs crash when trying to display > > the documentation for a package (not module; modules were fine) that is in > > a zip file. > >I fail to see the relationship between setuptools and pydoc. People blame setuptools when pydoc doesn't work on packages in zip files. Rather than refer to some theoretical argument why it's not my fault and I shouldn't be the one to fix it, I prefer to fix the problem. 
>The setuptools distribution is not the authoritative source for >these kinds of fixes and that should be made clear by separating >those parts out into a different package and making the >installation an explicit user option. ...which nobody will use, following which they will still blame setuptools for pydoc's failure. But I do agree that it might be a good idea to: 1) tell people about the issue 2) tell people that the fix is being installed 3) make it easy to remove the fix However, maybe I should just create 'pydoc-hotfix' and 'doctest-hotfix' packages on PyPI and then tell people to "easy_install pydoc-hotfix doctest-hotfix". Or perhaps just create a "pep302-hotfixes" package that includes all of the PEP 302 support changes for linecache, traceback, and so on, although I'm not sure how many of those can be installed in such a way that the fixes would be active prior to the stdlib versions being imported. I suppose I could handle the "nobody will use it" problem by having easy_install nag about the hotfixes not being installed when running under 2.3 or 2.4. >You should also note that users won't benefit from bug fixed >versions of e.g. such modules in patch level releases. The pydoc fixes to support zip files are too extensive to justify backporting, as they rely on features provided elsewhere. So if bug fixes occur on the 2.5 trunk, I would have to update the hotfix versions directly. From fredrik at pythonware.com Wed Apr 19 21:50:55 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 21:50:55 +0200 Subject: [Python-Dev] Raising objections References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au><4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> Message-ID: <e264b2$ega$1@sea.gmane.org> Martin v. Löwis wrote: > It is *precisely* my concern that this happens. For whatever reason, > writing packaging-and-deployment software is totally unsexy. for some reason, tools of this kind tend to reach the big ball of mud stage even before they reach the dogfood stage. and once you have a big ball of mud, you simply won't get much outside help -- not because it's unsexy, but because it's nearly impossible to contribute, at any level, unless you have days and days to spare... (distutils and setuptools are over 15000 lines of code, according to sloc- count. it took Python several years to accumulate a standard library that big ;-) </F> From skip at pobox.com Wed Apr 19 22:08:08 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 19 Apr 2006 15:08:08 -0500 Subject: [Python-Dev] Raising objections In-Reply-To: <e264b2$ega$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <e264b2$ega$1@sea.gmane.org> Message-ID: <17478.39080.617619.606593@montanaro.dyndns.org> Fredrik> for some reason, tools of this kind tend to reach the big ball Fredrik> of mud stage even before they reach the dogfood stage. and Fredrik> once you have a big ball of mud, you simply won't get much Fredrik> outside help Not to mention many dogs won't eat mud... Skip From amk at amk.ca Wed Apr 19 22:15:08 2006 From: amk at amk.ca (A.M. 
Kuchling) Date: Wed, 19 Apr 2006 16:15:08 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> References: <4445D80E.8050008@canterbury.ac.nz> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> Message-ID: <20060419201508.GA24963@localhost.localdomain> On Wed, Apr 19, 2006 at 03:02:15PM -0400, Phillip J. Eby wrote: > I can tell you the reasons, no need to guess: 5. The Distutils has lots of customization hooks, but if the exact hook you need isn't there, you're in deep trouble. I learned this when trying to implement a package database. > I agree with you, which is why setuptools fixes distutils' flaws by > subclassing where possible and monkeypatching only where necessary to > ensure compatibility. (Only three classes are monkeypatched: Distribution, > Command, and Extension.) At least some of these changes to Distutils seem unobjectionable for inclusion. For example, the changes to Command just allow keyword arguments on two methods and adds a class attribute; they seem unlikely to break any existing users of the class. The Extension change replaces .pyx source files with .c; I'm not sure what the purpose of this change is, but it clearly might cause problems for distributions with Pyrex source files. The Distribution changes add some more optional kw arguments -- no problem there -- and a bunch of egg-specific methods. This set would need some further study, and also bakes in .egg support; we'd have to be very sure that .eggs are the way to go, so this might need a PEP and/or BDFL pronouncement. Obviously, applying changes to Distutils makes Phillip's maintenance of setuptools more difficult -- now he has to worry about monkeypatching or not depending on the Python version. But at least some of the monkeypatching can be removed -- maybe all of it if the Distribution class proves amenable. --amk From pje at telecommunity.com Wed Apr 19 22:42:18 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 16:42:18 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <20060419201508.GA24963@localhost.localdomain> References: <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> <4445D80E.8050008@canterbury.ac.nz> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419162308.01e20718@mail.telecommunity.com> At 04:15 PM 4/19/2006 -0400, A.M. Kuchling wrote: >At least some of these changes to Distutils seem unobjectionable for >inclusion. > >For example, the changes to Command just allow keyword arguments on >two methods and adds a class attribute; they seem unlikely to break >any existing users of the class. > >The Extension change replaces .pyx source files with .c; I'm not sure >what the purpose of this change is, but it clearly might cause >problems for distributions with Pyrex source files. The purpose of the extension change is that it first checks whether Pyrex's distutils extension is available. If it is not available, it changes the source file names to .c, so that any pre-processed Pyrex files will be processed. This allows Pyrex-using packages to include pre-translations from Pyrex to C, and non-Pyrex users can still compile the package. 
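(A rough sketch of that fallback, with the mechanics spelled out. This is not the setuptools implementation; the helper name and the module probe are illustrative.)

import os
from distutils.core import Extension

def _pyrex_or_c(sources):
    # keep the .pyx names only if Pyrex's distutils support is importable;
    # otherwise point the build at the pre-generated .c files instead
    try:
        import Pyrex.Distutils
        return list(sources)
    except ImportError:
        fixed = []
        for src in sources:
            base, ext = os.path.splitext(src)
            if ext == '.pyx':
                src = base + '.c'
            fixed.append(src)
        return fixed

ext = Extension('_example', _pyrex_or_c(['_example.pyx']))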
For example, PyProtocols ships with a _speedups.pyx file, and a _speedups.c file, produced locally with Pyrex. Let's say a user is building PyProtocols from source. If they have Pyrex installed, setuptools uses Pyrex to rebuild the .c from the .pyx. If they do *not* have Pyrex installed, setuptools tells distutils to just build directly from the .c file, so that the distutils isn't confused by all this '.pyx' business. >The Distribution changes add some more optional kw arguments -- no >problem there -- and a bunch of egg-specific methods. This set would >need some further study, and also bakes in .egg support; we'd have to >be very sure that .eggs are the way to go, so this might need a PEP >and/or BDFL pronouncement. There's a slightly deeper issue there than just egg support itself. It's *plugin* support. Setuptools allows eggs to register plugins to provide new commands or keyword options that then apply either globally or only to specific setup scripts. Setuptools then uses this mechanism to register its own commands as distutils replacements. What this means is that if setuptools is installed, and you use setuptools' Distribution class, then setuptools' command plugins will apply -- which means that you now have setuptools behavior for those commands, instead of distutils default behavior. And this would be the case even if you never imported setuptools, if you just ported the Distribution class over. So, you can't actually activate the plugin support in setuptools' Distribution for distutils, or you'll be removing backward compatibility. There would have to be some way to specify whether setuptools or distutils should take precedence in the event that both define a given command. For example, a 'use_setuptools' argument that defaults to false, unless you first imported setuptools (for back/side compatibility w/existing setuptools scripts). >Obviously, applying changes to Distutils makes Phillip's maintenance >of setuptools more difficult -- now he has to worry about >monkeypatching or not depending on the Python version. Yeah, there's already some of this to deal with for package data. Setuptools was added to the sandbox in 2004, and shortly afterwards, Fred Drake took on the task of adding some of its features to distutils for Python 2.4. Which was great, except that then setuptools had to start worrying about whether or not the stuff it was subclassing was in fact already doing what it was trying to do. So, there's a bunch of fragile conditional code in there, and I'm not thrilled at the idea of adding more, because it significantly increases the number of testing combinations required to ensure that things work when changes are made, as well as making it harder to tell what's going on in the code. > But at least >some of the monkeypatching can be removed -- maybe all of it if the >Distribution class proves amenable. The Distribution class also adds the ability for individual commands to have positional argument parsing. 
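The opt-in boundary that keeps coming up in this thread is, in the end, a single import at the top of setup.py. A minimal illustrative script, not taken from any real project:

# stock behaviour: plain distutils commands
from distutils.core import setup
# opting in to setuptools' extended commands instead would be:
# from setuptools import setup

setup(name='example',
      version='0.1',
      py_modules=['example'])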
From foom at fuhm.net Wed Apr 19 23:03:52 2006 From: foom at fuhm.net (James Y Knight) Date: Wed, 19 Apr 2006 17:03:52 -0400 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> Message-ID: <5EBAB7C6-E3FC-442D-B254-FA813EE40420@fuhm.net> On Apr 19, 2006, at 3:19 PM, Phillip J. Eby wrote: > At 08:45 PM 4/19/2006 +0200, M.-A. Lemburg wrote: >> Phillip J. Eby wrote: >>> At 09:02 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >>>> Phillip J. Eby wrote: >>>>> At 07:15 PM 4/18/2006 +0200, M.-A. Lemburg wrote: >>>>>> Why should a 3rd party extension be hot-fixing the standard >>>>>> Python distribution ? >>>>> Because setuptools installs things in zip files, and older >>>>> versions of >>>>> pydoc don't work for packages zip files. >>>> That doesn't answer my question. >>> >>> That is the answer to the question you asked: "why hot-fix?" >>> Because >>> setuptools uses zip files, and older pydocs crash when trying to >>> display >>> the documentation for a package (not module; modules were fine) >>> that is in >>> a zip file. >> >> I fail to see the relationship between setuptools and pydoc. > > People blame setuptools when pydoc doesn't work on packages in zip > files. Rather than refer to some theoretical argument why it's not my > fault and I shouldn't be the one to fix it, I prefer to fix the > problem. FWIW, I see very little harm from setuptools replacing the old version of pydoc with a later official python version which causes it to actually work with setuptools. Twisted has also done some of of this kind of thing in the past, so that the code can be written to use modules/features in current versions of python yet still run on earlier versions where that functionality doesn't exist/doesn't work. I don't recall ever hearing of a user complaining about said monkeypatching. Users want compatibility with older versions of python, and sometimes that may mean you have to patch said older version's stdlib to make it work. James From fredrik at pythonware.com Wed Apr 19 23:36:11 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Wed, 19 Apr 2006 23:36:11 +0200 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com><20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><444537BC.7030702@egenix.com><5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com><e23n9h$pqi$1@sea.gmane.org><5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com><ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.co m><5.1.1.6.0.20060419012817.04030308@mail.telecommunity.com> <e24lt6$831$1@sea.gmane.org> <5.1.1.6.0.20060419025250.041f1120@mail.telecommunity.com> Message-ID: <e26agd$41t$1@sea.gmane.org> Phillip J. 
Eby wrote: > >a technical document that, in full detail, describes the mechanisms *used* by > >setuptools, including what files it creates, what the files contain, how > >they are used during import, how non-setuptools code can manipulate (or at > > least inpect) the data, etc, setuptools should *not* go into 2.5. > > And that is a mostly-specific, very fair, and completely reasonable > objection. And I think a significant portion of it is answered by the > existing documentation, at least with respect to the runtime. The > pkg_resources API module includes all of the discovery, dependency > resolution and introspection other facilities used by setuptools, and it > does not depend on the rest of setuptools, which is directed primarily at > building and installing eggs. A number of users have written simple Python > scripts using the documented API in order to list installed packages and > that sort of thing. The current API reference documentation is available > at http://peak.telecommunity.com/DevCenter/PkgResources .) What I want is a PEP that explains the things that are manipulated by this API. What things are stored where, what do they contain, and how are they are used by the various tools ? The API in itself isn't very interesting; I'm interested in the underlying design. </F> From pje at telecommunity.com Thu Apr 20 01:45:27 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 19:45:27 -0400 Subject: [Python-Dev] setuptools in the stdlib ( r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <e26agd$41t$1@sea.gmane.org> References: <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418181450.036aa8e0@mail.telecommunity.com> <ee2a432c0604182214y516fde36rf3420de5732017b6@mail.gmail.co m> <5.1.1.6.0.20060419012817.04030308@mail.telecommunity.com> <e24lt6$831$1@sea.gmane.org> <5.1.1.6.0.20060419025250.041f1120@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060419192552.01e47318@mail.telecommunity.com> At 11:36 PM 4/19/2006 +0200, Fredrik Lundh wrote: >Phillip J. Eby wrote: > > > >a technical document that, in full detail, describes the mechanisms > *used* by > > >setuptools, including what files it creates, what the files contain, how > > >they are used during import, how non-setuptools code can manipulate (or at > > > least inpect) the data, etc, setuptools should *not* go into 2.5. > > > > And that is a mostly-specific, very fair, and completely reasonable > > objection. And I think a significant portion of it is answered by the > > existing documentation, at least with respect to the runtime. The > > pkg_resources API module includes all of the discovery, dependency > > resolution and introspection other facilities used by setuptools, and it > > does not depend on the rest of setuptools, which is directed primarily at > > building and installing eggs. A number of users have written simple Python > > scripts using the documented API in order to list installed packages and > > that sort of thing. The current API reference documentation is available > > at http://peak.telecommunity.com/DevCenter/PkgResources .) > >What I want is a PEP that explains the things that are manipulated by >this API. 
What things are stored where, what do they contain, and how >are they are used by the various tools ? > >The API in itself isn't very interesting; I'm interested in the underlying >design. But the API *documentation* is where you'll find: * The external formats understood for version numbers and version requirements * The escaping and comparison rules for project names and version numbers * The external formats used for dependency listings * The external formats used for declaring plugins * The conceptual overview for the design, including terminology and the design concepts exposed by the objects and operations * A link to the original design overview, which I wrote with a mind to it becoming part of a PEP someday: http://mail.python.org/pipermail/distutils-sig/2005-June/004652.html And probably many more things that I don't recall off the top of my head. If what you want is an implementer's guide to which actual files contain what data in which of the above formats, I'll admit there isn't such a roadmap. I could probably write one up by tomorrow, since it would mostly be a list of "here are the filenames, this is what format in the existing documentation is used for this file, and here's what the contents are used for, and here are some pointers to relevant parts of the user and API docs that you should read in order to understand what this is for". However, if you don't read at least the API docs, you're definitely going to be missing most of the design picture. From tjreedy at udel.edu Thu Apr 20 01:47:19 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 19 Apr 2006 19:47:19 -0400 Subject: [Python-Dev] 2.5a1 Performance References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> <444538C1.90608@egenix.com><444603D9.1060600@egenix.com> <4446709D.9030507@egenix.com> Message-ID: <e26i67$pg3$1@sea.gmane.org> "M.-A. Lemburg" <mal at egenix.com> wrote in message news:4446709D.9030507 at egenix.com... > FWIW: The docs are currently in the README file in Tools/pybench. I took a look. The only thing that puzzles me is 'warp factor', which appears exactly once. tjr From anthony at interlink.com.au Thu Apr 20 02:39:22 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 10:39:22 +1000 Subject: [Python-Dev] Raising objections In-Reply-To: <4446776E.5090702@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> Message-ID: <200604201039.29138.anthony@interlink.com.au> On Thursday 20 April 2006 03:46, Martin v. L?wis wrote: > It is *precisely* my concern that this happens. For whatever > reason, writing packaging-and-deployment software is totally > unsexy. This is why setuptools is a one-man show, and this is why > the original distutils authors ran away after they convinced > everybody that distutils should be part of Python. If distutils is > now abandoned and replaced with something else, the same story will > happen again: the developers will run away, Well, I've seen no indication that this is Phillip's plan. If it is, could he tell us now? <wink> > the package gets > abandoned, and, after a few years of sadness, a new, smart > developer will come along and provide a super replacement. And that > will repeat in cycles of roughly 10 years. Well, I'm planning on trying to get across the setuptools codebase before 2.5 final. > We have to stop this. 
If distutils has flaws, fix them. Never ever > even think about rewriting software: I started looking at this. The number of complaints I got when I started on this that it would break the existing distutils based installers totally discouraged me. In addition, the existing distutils codebase is ... not good. It is flatly not possible to "fix" distutils and preserve backwards compatibility. Sometimes you _have_ to rewrite. I point to urllib->urllib2, asyncore->twisted, rfc822/mimelib/&c->email. This approach means that people's existing code continues to work, there's a separate installer of the new code that is available for older versions of Python, plus we have the newer features. > http://www.joelonsoftware.com/articles/fog0000000069.html Yes. I remember that piece. In particular, he wrote the original rant about this about Mozilla/Firefox. How did that work out again? Oh, that's right - we now have a much, much more successful and usable browser. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From greg.ewing at canterbury.ac.nz Thu Apr 20 03:06:54 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 13:06:54 +1200 Subject: [Python-Dev] Raising objections In-Reply-To: <4446776E.5090702@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> Message-ID: <4446DEAE.50204@canterbury.ac.nz> Martin v. L?wis wrote: > If distutils is now abandoned and replaced with > something else, the same story will happen again: the developers will > run away, the package gets abandoned, Seems to me that if we had something with a clean design that was easy to understand, maintain and extend, that this wouldn't be so much of a problem. If the original author ran away, others would more easily be able to take over the task. > We have to stop this. If distutils has flaws, fix them. Never ever > even think about rewriting software: Usually this is good advice, but it is possible for something to be so badly broken that the only reasonable way to fix it is to throw it away and start over. I'm not sure whether distutils is really that badly broken. But an earlier poster seemed to be saying that he had great difficulty finding anything that could be changed without breaking something that someone relied on. It's hard to fix something if you can't change it at all. I'd be happy to discuss ways of evolving distutils into something better, but first we have to decide that it is actually permissible to change it and possibly break stuff that's relying on its internals. -- Greg From anthony at interlink.com.au Thu Apr 20 03:08:03 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 11:08:03 +1000 Subject: [Python-Dev] Raising objections In-Reply-To: <17478.39080.617619.606593@montanaro.dyndns.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <e264b2$ega$1@sea.gmane.org> <17478.39080.617619.606593@montanaro.dyndns.org> Message-ID: <200604201108.04436.anthony@interlink.com.au> On Thursday 20 April 2006 06:08, skip at pobox.com wrote: > Fredrik> for some reason, tools of this kind tend to reach the > big ball Fredrik> of mud stage even before they reach the dogfood > stage. and Fredrik> once you have a big ball of mud, you simply > won't get much Fredrik> outside help > > Not to mention many dogs won't eat mud... 
Most dogs will eat _anything_ that might look like food. <wink> From pje at telecommunity.com Thu Apr 20 04:04:39 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 19 Apr 2006 22:04:39 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <200604201039.29138.anthony@interlink.com.au> References: <4446776E.5090702@v.loewis.de> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> Message-ID: <5.1.1.6.0.20060419214144.01ef9630@mail.telecommunity.com> At 10:39 AM 4/20/2006 +1000, Anthony Baxter wrote: >On Thursday 20 April 2006 03:46, Martin v. L?wis wrote: > > It is *precisely* my concern that this happens. For whatever > > reason, writing packaging-and-deployment software is totally > > unsexy. This is why setuptools is a one-man show, and this is why > > the original distutils authors ran away after they convinced > > everybody that distutils should be part of Python. If distutils is > > now abandoned and replaced with something else, the same story will > > happen again: the developers will run away, > >Well, I've seen no indication that this is Phillip's plan. If it is, >could he tell us now? <wink> I don't know how widely the distutils were used *before* they became a part of Python, so it's possible the authors didn't know what they were in for. Of course, it's still possible that I don't know what I'm in for either. However, the amount of work I've been putting in wouldn't be possible without OSAF's indirect finanical support for the project. But, setuptools solves problems that commercial entities are often interested in solving. The Envisage folks inquired at Pycon about me possibly helping expand setuptools' scope to building and installing C dynamic link libraries, and this is also a feature potentially desired for Chandler's build system as well. What this means is that there is a potential basis for supporting the work going forward, at least through what I'm hoping will be the "1.0" version. And I suspect that there are other improvements possible that other commercial entities would be interested in funding. How much any of that could have also applied to the distutils at one time, I don't know. My point is merely that setuptools has enough commercial value to make me believe that sponsorship for features shouldn't be that hard to come by, and there are certainly worse things I could do with my work days. This shouldn't be construed as saying I'll only work on setuptools if paid; it merely means that the huge gobs of time I've been putting into it at times are only *possible* because it aligns with OSAF goals enough to let me put a substantial chunk of my work days and nights into it. >I started looking at this. The number of complaints I got when I >started on this that it would break the existing distutils based >installers totally discouraged me. In addition, the existing >distutils codebase is ... not good. You know, after my experience doing setuptools, I'm a lot less inclined to criticize anybody's code, if it works and solves a problem. Sometimes, the structure of the problems are just not amenable to beautiful solutions. Or more to the point, beautiful solutions are not always economical to implement while you're supporting actual users and backwards compatibility is a constant requirement. So my attitude to the distutils code has mellowed considerably; I view it as a collection of things that I at least don't have to think about. 
C compilers, for example, are something I don't have to touch at all, except for some recent experiments in implementing shared library building. So, I don't really view the distutils as broken per se; it's just that they lack a lot of desirable features that are hard to implement without breaking backward compatibility. That's not quite the same thing as "the code sucks and we should rewrite it". There have been a lot of calls to scrap it and rewrite it throughout the years I've been working on setuptools, but I can't imagine who could possibly *afford* such an undertaking (as opposed to talking about it). >Yes. I remember that piece. In particular, he wrote the original rant >about this about Mozilla/Firefox. How did that work out again? Oh, >that's right - we now have a much, much more successful and usable >browser. Well, the analogy isn't great anyway, because setuptools is hardly a rewrite of the distutils. I didn't rewrite a single thing I could get away with calling or subclassing. Setuptools is all about adding new features, not "cleaning up" the distutils code base, so arguing about whether rewrites are good or not is a straw man holding a red herring. :) From murman at gmail.com Thu Apr 20 04:36:56 2006 From: murman at gmail.com (Michael Urman) Date: Wed, 19 Apr 2006 21:36:56 -0500 Subject: [Python-Dev] a flattening operator? In-Reply-To: <4445C063.3090703@canterbury.ac.nz> References: <1d85506f0604181404v4a330b10j95063098662db105@mail.gmail.com> <20060418145333.A810.JCARLSON@uci.edu> <4445C063.3090703@canterbury.ac.nz> Message-ID: <dcbbbb410604191936m17690203q6cbbb8b00e376668@mail.gmail.com> On 4/18/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > No, it wouldn't. There's no problem in giving an operator > different unary and binary meanings; '-' already does > that. However unlike -, there is a two character ** operator, so while x--y is the same as x - - y, x**y would not be the same as x * * y. Michael -- Michael Urman http://www.tortall.net/mu/blog From murman at gmail.com Thu Apr 20 05:08:00 2006 From: murman at gmail.com (Michael Urman) Date: Wed, 19 Apr 2006 22:08:00 -0500 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <44468559.3030208@egenix.com> <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> Message-ID: <dcbbbb410604192008t4cf53f11oe6658e3d3ec1fad5@mail.gmail.com> On 4/19/06, Phillip J. Eby <pje at telecommunity.com> wrote: > People blame setuptools when pydoc doesn't work on packages in zip > files. Rather than refer to some theoretical argument why it's not my > fault and I shouldn't be the one to fix it, I prefer to fix the problem. So rather than extract the zip at install time (something purely within setuptool's domain), you found modifying pydoc's behavior to be a more compelling story. Are you aware that zipimport fails on 64-bit systems in Python 2.3, or do you plan to patch over that as well? 
Michael -- Michael Urman http://www.tortall.net/mu/blog From fredrik at pythonware.com Thu Apr 20 06:18:04 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 06:18:04 +0200 Subject: [Python-Dev] Raising objections References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com><4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> Message-ID: <e2721t$2b5$1@sea.gmane.org> Anthony Baxter wrote: > > http://www.joelonsoftware.com/articles/fog0000000069.html > > Yes. I remember that piece. In particular, he wrote the original rant > about this about Mozilla/Firefox. How did that work out again? Oh, > that's right - we now have a much, much more successful and usable > browser. his point isn't that it cannot be done, it's that it is horridly expensive and time consuming. exactly how much money and time has gone into getting firefox to the point where it is today? (insert obligatory bastiat reference here) </F> From pje at telecommunity.com Thu Apr 20 06:24:42 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 00:24:42 -0400 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <dcbbbb410604192008t4cf53f11oe6658e3d3ec1fad5@mail.gmail.co m> References: <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <44468559.3030208@egenix.com> <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060420001817.01e1f420@mail.telecommunity.com> At 10:08 PM 4/19/2006 -0500, Michael Urman wrote: >On 4/19/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > People blame setuptools when pydoc doesn't work on packages in zip > > files. Rather than refer to some theoretical argument why it's not my > > fault and I shouldn't be the one to fix it, I prefer to fix the problem. > >So rather than extract the zip at install time (something purely >within setuptool's domain), you found modifying pydoc's behavior to be >a more compelling story. Absolutely! It adds a new and useful feature to Python, while extracting the files doesn't. In the meantime, of course, I've told affected users to use the 'always-unzip' option when installing. > Are you aware that zipimport fails on 64-bit >systems in Python 2.3, or do you plan to patch over that as well? Nope. I implemented a workaround instead. :) For a few months now, setuptools generates uncompressed zipfiles for Python 2.3, since it's only compressed zipfiles that give 64-bit zipimport fits. The setuptools docs still claim you have to use Python 2.4 for 64-bit platforms, but I only left that in because there are still plenty of 2.3 compressed eggs floating around. Do you know of any other Python bugs I should be fixing or working around? 
:) From anthony at interlink.com.au Thu Apr 20 06:34:55 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 14:34:55 +1000 Subject: [Python-Dev] Raising objections In-Reply-To: <e2721t$2b5$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604201039.29138.anthony@interlink.com.au> <e2721t$2b5$1@sea.gmane.org> Message-ID: <200604201435.11378.anthony@interlink.com.au> On Thursday 20 April 2006 14:18, Fredrik Lundh wrote: > Anthony Baxter wrote: > > > http://www.joelonsoftware.com/articles/fog0000000069.html > > > > Yes. I remember that piece. In particular, he wrote the original > > rant about this about Mozilla/Firefox. How did that work out > > again? Oh, that's right - we now have a much, much more > > successful and usable browser. > > his point isn't that it cannot be done, it's that it is horridly > expensive and time consuming. exactly how much money and time has > gone into getting firefox to the point where it is today? And I would reply that sometimes a rewrite is necessary. I doubt firefox would be at the state it is today if it was still based on the ancient netscape codebase. In any case, this is not relevant, since setuptools is not a rewrite. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Thu Apr 20 06:56:07 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 14:56:07 +1000 Subject: [Python-Dev] setuptools in 2.5. Message-ID: <200604201456.13148.anthony@interlink.com.au> In an attempt to help this thread reach some sort of resolution, here's a collection of arguments against and in favour of setuptools in 2.5. My conclusions are at the end. The arguments against: - Someone should instead just fix distutils. Right. And the amount of yelling if distutils changed in a non-b/w compat way wouldn't be high. There's also the basic problem that the distutils code is _horrible_. - It monkeypatches pydoc and distutils! It only monkeypatches pydoc when the separate setuptools installer is used on older Pythons. How is this relevant for this discussion of Python 2.5? The monkeypatching for distutils should be reduced - see AMK's message for a breakdown of this. - Documentation beaker% pydoc xmlcore.etree no Python documentation found for 'xmlcore.etree' beaker% pydoc ctypes no Python documentation found for 'ctypes' The documentation (of which there is plenty) can and will be folded into the standard python docs. Most of the new modules in 2.5 went in before their docs. - Where's the PEP? I don't see the need. The stuff that could go into a PEP about formats and the like should go into the existing Distutils documentation. It's a far more useful place, and many more people are likely to find it there than in a PEP. - It's a huge amount of code (or "ball of mud"), or, it adds too many features. Most of these have been added over the last 2 years in response to feedback and requests from people on distutils-sig. There's been an obvious pent-up demand for a bunch of this work, and now that someone's working on it, these can get done. - It will break existing setup.py scripts! No it won't. If you don't type the letters 'import setuptools' into your setup.py, it won't be affected. - Rewriting from scratch is bad This isn't a rewrite - it's built on top of distutils. (An aside, I don't buy the "never rewrite" argument. As I mentioned in an earlier message, look at urllib2, twisted and email for starters. 
In addition, look at Firefox, Windows XP, and Mac OSX. Hell, Linux could be considered a rewrite of Minix, once upon a time.) - Eggs are inferior to distribution-specific packaging Not all operating systems have a decent packaging system. The ones that do, don't support multiple versions of the same library. In addition, there's no reason why existing packaging systems can't just bundle up the code as they do now - if they also add a .egg-info file to the packages, that would be even better! Finally, these don't support user installation of software. This is particularly useful in a hosting environment. And now let's look at some of the stuff that setuptools gives us: - We have a CPAN-type system I do quite a number of Python talks, and this is _always_ one of the most requested features. There's been many attempts to write this, none have been completed until now. If you honestly don't see that this is a big thing for Python, then I am very, very suprised. I suspect that this will be the #1 new feature of Python 2.5 that the users will notice and be happy about. - Multiple installs of different versions of the same package, including per-user installs. Again, as Python gets more widely used, this becomes a big issue. Sure, it's not necessarily a killer argument for python-dev, but stuff that's added to Python shouldn't just be just for the use of python-dev. The multiple installed versions feature also avoids the CPAN dependency hell problem - back when I used to work with Perl, this was a constant source of nightmarish problems. - The "develop" mode This makes life that bit less painful all-round. - The plugin/extension support Extending distutils currently is a total pain in the arse. - Backwards compatibility easy_install even works with existing packages that use traditional distutils, so long as they're in the Cheeseshop. Damn, this is nice. If you don't want to do the work to change your installation code, don't bother - it will still be useful. The conclusions: I'm a little suprised by the amount of fear and loathing this has generated. To me, there are such obvious benefits that I don't see why people are so vehemently against setuptools. I haven't seen any arguments that have convinced me that this isn't the right thing to do. Yes, there's still work to be done - but hell, we've only released the first alpha so far. For inclusion in the standard library, the usual benchmark is that the code offers useful functionality, and that it be the "best of breed". setuptools clearly meets these two criteria. (Ok, it's really "only of breed", but that also makes it "best", by default <wink>). It's also been under development for over 2 years - according to svn, 0.0.1 was checked into svn back in March 2004. I'm also suprised by how much some people seem to think that the current state of distutils functionality is acceptable or desirable. It's not - it's a mess. Finally, I'd like to point out that I think some of the hostility towards Phillip's work has been excessive. He's done an amazing amount of work on this (look at the distutils-sig archive for the last two years for more), and produced something that's very very useful. He deserves far more credit for this than he seems to have been getting here. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From fredrik at pythonware.com Thu Apr 20 07:31:23 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 07:31:23 +0200 Subject: [Python-Dev] setuptools in 2.5. 
References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <e276bc$b06$1@sea.gmane.org> Anthony Baxter wrote: > - Multiple installs of different versions of the same package, > including per-user installs. yeah, but where is the documentation on how this works ? phillip points to a 30-page API description, which says absolutely nothing whatsoever about the files I'm going to find on my machine after doing a setuptools- based installation, how those files are used, and what I can do with them myself. maybe it's nice to have a big abstraction with a dozen classes and inter- faces, but if not even the developer can describe the concrete things be- neath the abstractions, something's not entirely right. especially when the developer don't even seem to understand why anyone would care, when he's already written all the code. ::: and yes, for the record, setuptools has already created one support mess for me, where the "gotta protect the average users from how this really works" approach resulted in many average users not being able to get things to work at all, even after trying out a magic workaround that someone provided (along the lines of "python -msomething --something walla walla/" - note the trailing slash, will do something else without it). and I wasn't even using setuptools myself. not that the users cared about that; setuptools told them that my stuff didn't work, and couldn't be installed, so of course they contacted me. ::: but if this is so obvious, maybe someone could whip up a brief overview document, that support volunteers like myself can refer to when people get stuck ? (preferrably someone who isn't Phillip, to prove that some- one else actually understands all the relevant details). if not a PEP, at least a FAQ entry ? pointing people to a couple of hundred pages of API and option listings won't really work. and "look in the distutils archives" won't work either. </F> From martin at v.loewis.de Thu Apr 20 07:53:55 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 07:53:55 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <200604201039.29138.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> Message-ID: <444721F3.6070704@v.loewis.de> Anthony Baxter wrote: >> It is *precisely* my concern that this happens. For whatever >> reason, writing packaging-and-deployment software is totally >> unsexy. This is why setuptools is a one-man show, and this is why >> the original distutils authors ran away after they convinced >> everybody that distutils should be part of Python. If distutils is >> now abandoned and replaced with something else, the same story will >> happen again: the developers will run away, > > Well, I've seen no indication that this is Phillip's plan. If it is, > could he tell us now? <wink> I don't know about Phillip's plans, but I do see many indications that people stop using distutils, and use setuptools instead. People change their setup.py files to setuptools.setup instead of distutils.setup, since the former provides the same things to them, just better. So effectively, distutils disappears except as an implementation detail of setuptools. There is much talk about backwards compatibility: these package are then, of course, not backwards compatible with earlier Python versions anymore. 
No problem: just install setuptools on these earlier versions. So distutils isn't just abandoned for the future versions, but also for past versions. > I started looking at this. The number of complaints I got when I > started on this that it would break the existing distutils based > installers totally discouraged me. I believe this is a myth. Some changes might cause breakage, yes, but others wouldn't. For example, introducing additional parameters to setup is just fine: existing packages won't break; new packages using these parameters won't build on older Python releases, of course. > In addition, the existing distutils codebase is ... not good. Then it shouldn't have become part of Python in the first place. Can you please elaborate what specific aspects of distutils you dislike? > It is flatly not possible to "fix" distutils and preserve backwards > compatibility. Why? > Sometimes you _have_ to rewrite. I point to > urllib->urllib2, asyncore->twisted, rfc822/mimelib/&c->email. If I had the time, I would question each of these. Yes, it is easier for the author of the new package to build "in the green", but it is (nearly) never necessary, and almost always bad for the project. > This approach means that people's existing code continues to work, > there's a separate installer of the new code that is available for > older versions of Python, plus we have the newer features. Yes, and everybody has to rewrite their code, because the "old" modules don't see fixes, and get obsoleted and eventually removed. Users get the impression that Python breaks their APIs for no good reason every now and then. Regards, Martin From martin at v.loewis.de Thu Apr 20 07:54:01 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 07:54:01 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <200604201435.11378.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604201039.29138.anthony@interlink.com.au> <e2721t$2b5$1@sea.gmane.org> <200604201435.11378.anthony@interlink.com.au> Message-ID: <444721F9.1010102@v.loewis.de> Anthony Baxter wrote: > And I would reply that sometimes a rewrite is necessary. I doubt > firefox would be at the state it is today if it was still based on > the ancient netscape codebase. It's off-topic here certainly: but firefox is actually not a complete rewrite; it still has tons of "ancient netscape codebase" in it. The could not have completed it otherwise. Regards, Martin From martin at v.loewis.de Thu Apr 20 07:54:04 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 07:54:04 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <200604191457.24195.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> Message-ID: <444721FC.9010609@v.loewis.de> Anthony Baxter wrote: > I'm not sure how people would prefer this be handled. I don't think we > need to have a PEP for it - I don't see PEPs for ctypes, elementtree, > pysqlite or cProfile, either. I see a significant procedural difference between what happened for ctypes, elementtree, and pysqlite, as opposed to setuptools. For all these packages, there was 1. a desire of users to include it 2. an indication from the package maintainer that it's ok to include the package, and that he is willing to maintain it 3. some discussion on python-dev, which resulted only in support and no objection 4. 
some (other) committer who "approved" incorporation of the library. In essence, that committer is a "second" for the package inclusion. setuptools has 1 and 2, but fails on 3 and 4 so far. There is discussion now after the fact, but it results in objection. I'd like to point out the importance of 4: Fredrik Lundh originally asked "who approved that change", which really meant "who can I blame for it". I feel that I approved inclusion of ctypes and elementtree: I talked with the developers on how precisely it should happen, and I checked then that everything that I thought should happened indeed happened. And I did the majority of the communication on python-dev. So the package authors can get all the praise, and I happily take all the blame. The same didn't happen for setuptools: there is no second, so Phillip Eby takes all the praise *and* the blame. It's still a one-man show. Now, I know that Neal Norwitz had asked him what the status is and when it will happen, but he apparently did not want to *approve* inclusion of that package. Likewise, Guido van Rossum (apparently) did not want to approve it, either (he just would not object). If you (Anthony) want to act as a second for setuptools, I would feel much happier - because I can then blame you for whatever problems that decision causes five years from now. Regards, Martin From anthony at interlink.com.au Thu Apr 20 08:00:22 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 16:00:22 +1000 Subject: [Python-Dev] Raising objections In-Reply-To: <444721FC.9010609@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <444721FC.9010609@v.loewis.de> Message-ID: <200604201600.27845.anthony@interlink.com.au> On Thursday 20 April 2006 15:54, Martin v. L?wis wrote: > If you (Anthony) want to act as a second for setuptools, I > would feel much happier - because I can then blame you for > whatever problems that decision causes five years from now. Done. See my longer reply "setuptools in 2.5". I think I just did that. I'm happy to share any blame, but leave the credit to Phillip. I don't think it's fair to say that Phillip just checked this in off on his own. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From nnorwitz at gmail.com Thu Apr 20 08:13:57 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Wed, 19 Apr 2006 23:13:57 -0700 Subject: [Python-Dev] Raising objections In-Reply-To: <444721FC.9010609@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <444721FC.9010609@v.loewis.de> Message-ID: <ee2a432c0604192313h53cfbef3jaa5bdaff809c32b6@mail.gmail.com> On 4/19/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > I see a significant procedural difference between what happened > for ctypes, elementtree, and pysqlite, as opposed to setuptools. > For all these packages, there was > 1. a desire of users to include it > 2. an indication from the package maintainer that it's ok > to include the package, and that he is willing to maintain it > 3. some discussion on python-dev, which resulted only in support > and no objection > 4. some (other) committer who "approved" incorporation of the > library. In essence, that committer is a "second" for the > package inclusion. > > setuptools has 1 and 2, but fails on 3 and 4 so far. There is > discussion now after the fact, but it results in objection. 
> > Now, I know that Neal Norwitz had asked him what the status > is and when it will happen, but he apparently did not want > to *approve* inclusion of that package. Likewise, Guido > van Rossum (apparently) did not want to approve it, either > (he just would not object). I think Guido was more enthusiastic about it going in. I want the functionality, though have concerns about it. These concerns are based on my ignorance, I have never used setuptools nor looked at the code. I had these reservations for each of the packages we imported. I plan to review the setuptools code. Yes, I know it's 7k+ lines of code. It's near the top of my TODO list. Summer of Code is the only thing higher and that should not require too much time moving forward. I will at least be familiar with the code. I think if people have specific concerns about the code, they should address them rather than generalities. I hope Phillip has or can easily write some doc that will address the high-level or a roadmap. I don't particularly care, but others do. I think it's reasonable to have and should be relatively short (1-2 pages max IMO). As for #3, there was some discussion and there was a chance for objections, it just wasn't explicit. So mistakes were made. I will try to ensure these issues are raised going forward and this problem doesn't recur. It's water under the bridge at this point. I suspect setuptools is the best we've got and the best we're gonna get. It's good enough to move forward. While it may be a pain for some of us in the short term, many users have asked for this for many years. Let's make it work. n From anthony at interlink.com.au Thu Apr 20 08:17:51 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 16:17:51 +1000 Subject: [Python-Dev] Raising objections In-Reply-To: <444721F3.6070704@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> Message-ID: <200604201617.52934.anthony@interlink.com.au> On Thursday 20 April 2006 15:53, Martin v. L?wis wrote: > I don't know about Phillip's plans, but I do see many indications > that people stop using distutils, and use setuptools instead. Surely that's an indication that it _should_ become part of Python? If there's an obvious demand for the features. In addition, it's available for 2.3 and 2.4 as a separate package. > > I started looking at this. The number of complaints I got when I > > started on this that it would break the existing distutils based > > installers totally discouraged me. > > I believe this is a myth. Some changes might cause breakage, yes, > but others wouldn't. For example, introducing additional parameters > to setup is just fine: existing packages won't break; new packages > using these parameters won't build on older Python releases, of > course. Something as simple as replacing the distutils.fancy_getopt with optparse couldn't be done in a completely backwards compatible way, because of some of the funkiness in fancy_getopt that people use today. > > In addition, the existing distutils codebase is ... not good. > > Then it shouldn't have become part of Python in the first place. > Can you please elaborate what specific aspects of distutils > you dislike? It's extremely awkward to plug in functionality that _should_ be more obvious and extensible. At least, unless you want to make every single setup.py include a magic bit of patching to Do The Right Thing. 
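(The mildest form of that per-setup.py plumbing is a cmdclass override like the sketch below; heavier customisations reach into distutils internals directly. Illustrative only; the overridden hook is a placeholder.)

from distutils.core import setup
from distutils.command.build_py import build_py

class my_build_py(build_py):
    def run(self):
        # project-specific tweak would go here
        build_py.run(self)

setup(name='example',
      version='0.1',
      py_modules=['example'],
      cmdclass={'build_py': my_build_py})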
There are a number of places where the code is obviously not finished, when Greg clearly got fed up with the project <wink>. There was no decent documentation of the internals. > > It is flatly not possible to "fix" distutils and preserve > > backwards compatibility. > > Why? Because there are too many people that have poked too deeply into the innards with their setup.py scripts. Partly this is a function of the lack of documentation - until I sat down and spent a couple of days on it a year or so ago, there wasn't a library reference. There's still no decent "here's the Approved Way to poke about inside distutils" doc. > If I had the time, I would question each of these. Yes, it is > easier for the author of the new package to build "in the > green", but it is (nearly) never necessary, and almost always > bad for the project. I guess I disagree almost entirely. The new email package is vastly, vastly better than the old hodge-podge of stuff that was written back in python's Dark Ages. Trying to fix those would have been almost impossible. In addition, we'd be stuck with the need to maintain backwards compatibility. Something like spambayes or almost anything else handling modern email traffic would then not be possible in Python. > Yes, and everybody has to rewrite their code, because the "old" > modules don't see fixes, and get obsoleted and eventually removed. > Users get the impression that Python breaks their APIs for no good > reason every now and then. distutils doesn't _get_ fixes. Well, hardly any. Compare that to a 'svn log' in sandbox/setuptools. I know I much prefer an actively maintained and developed codebase to some ancient but inert existing system. Heck, most of the recent work on distutils was an offshoot of setuptools. As far as "breaks their APIs" - this is a pretty rare occurance, it might have happened half a dozen times so far. And the old module still keeps working fine. If there's bugs that someone logs, they will get addressed exactly the same as any other bug. I don't see people closing patches off as "that's an old module, I'm not going to apply it". It also takes _years_ for older modules to go through the process of deprecation, moving to lib-old, and final removal. I mean, heck, rfc822 still doesn't issue any deprecation warnings. I suspect C++ or Java change faster <wink>. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From martin at v.loewis.de Thu Apr 20 08:30:55 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 08:30:55 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <44472A9F.7050004@v.loewis.de> Anthony Baxter wrote: > And now let's look at some of the stuff that setuptools gives us: > > - We have a CPAN-type system I think it is unfair (to Richard Jones) to attribute this to setuptools. We already have a CPAN-type system: the Cheeseshop. What setuptools adds is roughly the CPAN shell (ie. CPAN.pm or however it is implemented). Actually, I think it is ez_setup that provides (something like) the CPAN shell. It is my understanding that setuptools itself has nothing to do with a CPAN-like system, just as Makemakefile.pl has nothing to do with CPAN. > - Multiple installs of different versions of the same package, > including per-user installs. I never had the need, but I trust others do. 
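(For those who do have the need: as far as I understand it, the multiple-version support is driven from the importing side, roughly like this sketch - the package name and version are invented:)

    import pkg_resources
    # ask for a particular version before importing; this puts the matching
    # egg on sys.path, or raises an error if it cannot be satisfied
    pkg_resources.require("SomeLib==0.9")
    import somelib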
> - The "develop" mode > > This makes life that bit less painful all-round. This could be added to distutils with no problems, right? > - The plugin/extension support > > Extending distutils currently is a total pain in the arse. If that's a desirable feature to have, it *should* be added to distutils; IOW, it should be available if you do "from distutils import setup", not just when you do "from setuptools import setup". > I'm a little suprised by the amount of fear and loathing this has > generated. To me, there are such obvious benefits that I don't see > why people are so vehemently against setuptools. I haven't seen any > arguments that have convinced me that this isn't the right thing to > do. Yes, there's still work to be done - but hell, we've only > released the first alpha so far. I'm not looking at it from a end-user's point of view. I trust that setuptools provides features that end-users want, and in a convenient way. My fear is in the maintainer's side: how many new bug reports will this add? How much code do I have to digest to even make the slightest change? That is says from itself that it is version 0.7a1dev-r45536 doesn't help to reduce that fear. > I'm also suprised by how much some people seem to think that the > current state of distutils functionality is acceptable or desirable. > It's not - it's a mess. I would like to require that this is solved by contributing to distutils, instead of replacing it. I know this is an unrealistic request to make - in particular because there is only a single developer world-wide which actively develops "something like that". Requiring Phillip to rewrite distutils instead is certainly unfair - but I'm still unhappy with the path events take. > Finally, I'd like to point out that I think some of the hostility > towards Phillip's work has been excessive. I'd like to make clear that my hostility is only towards his work; never towards Phillip Eby himself. Regards, Martin From martin at v.loewis.de Thu Apr 20 08:40:52 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 08:40:52 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <4446DEAE.50204@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <4446DEAE.50204@canterbury.ac.nz> Message-ID: <44472CF4.8070309@v.loewis.de> Greg Ewing wrote: > I'm not sure whether distutils is really that > badly broken. But an earlier poster seemed to be > saying that he had great difficulty finding anything > that could be changed without breaking something > that someone relied on. It's hard to fix something > if you can't change it at all. That's not been my experience with modifying distutils. I fixed several bugs over the years, and always found a way to fix them (or, rather, the contributor whose patch I committed found a fix). OTOH, I never tried to extend the architecture (but never had a need to, either). Regards, Martin From anthony at interlink.com.au Thu Apr 20 08:52:49 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 16:52:49 +1000 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <44472A9F.7050004@v.loewis.de> References: <200604201456.13148.anthony@interlink.com.au> <44472A9F.7050004@v.loewis.de> Message-ID: <200604201652.51448.anthony@interlink.com.au> On Thursday 20 April 2006 16:30, Martin v. 
L?wis wrote: > I think it is unfair (to Richard Jones) to attribute this to > setuptools. We already have a CPAN-type system: the Cheeseshop. > What setuptools adds is roughly the CPAN shell (ie. CPAN.pm > or however it is implemented). Actually, I think it is ez_setup > that provides (something like) the CPAN shell. No, we have _half_ of a CPAN. We don't have the shell and the dependency finding and the ability to support multiple versions of the same package (at least, not without PYTHONPATH hell) and all the other stuff that makes CPAN useful. And heck, Richard knows full well how good cheeseshop is. :-) > It is my understanding that setuptools itself has nothing > to do with a CPAN-like system, just as Makemakefile.pl has > nothing to do with CPAN. I'd disagree. A package index is a very nice and useful thing in and of itself. But it's only part of the solution. > > - The "develop" mode > > > > This makes life that bit less painful all-round. > > This could be added to distutils with no problems, right? Not without a lot of the other stuff that's in setuptools. > That is says from itself that it is version 0.7a1dev-r45536 > doesn't help to reduce that fear. It's had _two_ _years_ of quite active development, so the version number of 0.7 is hardly a good indication. As far as all the other stuff on the end of the version string - well, right now Python's SVN trunk really could be considered 2.5a2dev-r45575. > I would like to require that this is solved by contributing to > distutils, instead of replacing it. I know this is an unrealistic > request to make - in particular because there is only a single > developer world-wide which actively develops "something like that". > > Requiring Phillip to rewrite distutils instead is certainly > unfair - but I'm still unhappy with the path events take. He's written code on _top_ _of_ _distutils_. How is this bad? It's using the underlying existing code. > > Finally, I'd like to point out that I think some of the hostility > > towards Phillip's work has been excessive. > > I'd like to make clear that my hostility is only towards his work; > never towards Phillip Eby himself. See, I don't get the hostility thing. While I have some concerns about the state of distutils today, I still admire Greg Ward's efforts in producing the code, and Python is in a much better place than had he not done the work. Responding to an effort like Greg's, or Phillip's, with hostility is a fantastic way to discourage people from working further on Python or on the code in question. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. 
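To make the "shell" part concrete: the download-and-install side lives in easy_install plus the ez_setup bootstrap. The pattern the setuptools documentation suggests for a package author's setup.py looks roughly like this (a sketch only; the package name is invented):

    # setup.py -- ez_setup.py is shipped alongside it
    from ez_setup import use_setuptools
    use_setuptools()          # fetches setuptools itself if it is missing

    from setuptools import setup, find_packages

    setup(
        name="ExamplePackage",
        version="0.1",
        packages=find_packages(),
    )

End users then run easy_install ExamplePackage and have the package (and, in principle, its declared dependencies) pulled down from the Cheeseshop.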
From martin at v.loewis.de Thu Apr 20 08:53:54 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 08:53:54 +0200 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <5.1.1.6.0.20060419150224.045c73e0@mail.telecommunity.com> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060419150224.045c73e0@mail.telecommunity.com> Message-ID: <44473002.4070208@v.loewis.de> Phillip J. Eby wrote: > I assumed that it would be more controversial to merge setuptools into > distutils, than to provide it as an enhanced alternative. It is this assumption in setuptools that seems to have guided many design decisions: that it is completely unacceptable to change implementation details, because somebody might rely on them. I firmly believe that this position is misguided, and that decisions resulting from it should be corrected (over time, of course). Beautiful is better than ugly: if you believe that the distutils code is wrong in some respect, then change it. Of course, things are more complicated in this approach: you have to actively consider the likelyhood of breakage, and you have to a) clearly document the potential for breakage: the more likely the breakage, the more visible the documentation should be b) try to come up with recommendations for users should the any code actually break: users then want to know how they should change their code so that it works with previous versions of Python still. c) ask for consent in advance to making a potentially-breaking change. > 1. How to activate or deactivate backward compatibility for packages or > people relying on intimate details of current distutils behaviors See above: on a case-by-case basis. > 2. Maintaining the existing version of setuptools to work with Python 2.3 > and 2.4, which would not have this integration with the distutils. One way would be to make another distutils release, and require setuptools users to install this distutils release as a prerequisite. Another solution would be to fork setuptools, in a pre-2.5 branch and a post-2.5 branch. Over time, the pre-2.5 branch could be abandoned. A third solution likely exists on a case-by-case basis: conditionalize code inside setuptools, depending on Python version (or other criteria). Regards, Martin From guido at python.org Thu Apr 20 08:57:24 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 07:57:24 +0100 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <ca471dc20604192357s704d574au71ba3b4cfc703edb@mail.gmail.com> Thanks for posting an excellent summary, Anthony. For the record, I'm taking full responsibility for the decision that setuptools is going in. Having read much (though not all) of the endless bitching, and the responses from Phillip and others in defense, I still believe it's the right decision. 
I've learned my lesson about discussing library additions early -- I didn't anticipate any controversy on this one. The way I see it is this: a distribution tool that handles dependencies and other problems that distutils doesn't solve is sorely needed; blessing a particular solution makes a huge difference in acceptance, and acceptance is key for a distribution tool; setuptools is pretty much the only available candidate (platform-specific tools don't count). So I'm blessing setuptools. I'm sure Phillip will fix the remaining issues enumerated by Anthony below, and any new ones that might come up during the release cycle. Please help by being constructive in your criticism. Bug reports are useful. Whining is not. --Guido On 4/20/06, Anthony Baxter <anthony at interlink.com.au> wrote: > In an attempt to help this thread reach some sort of resolution, > here's a collection of arguments against and in favour of setuptools > in 2.5. My conclusions are at the end. > > The arguments against: > > - Someone should instead just fix distutils. > > Right. And the amount of yelling if distutils changed in a non-b/w > compat way wouldn't be high. There's also the basic problem that the > distutils code is _horrible_. > > - It monkeypatches pydoc and distutils! > > It only monkeypatches pydoc when the separate setuptools installer > is used on older Pythons. How is this relevant for this discussion of > Python 2.5? The monkeypatching for distutils should be reduced - see > AMK's message for a breakdown of this. > > - Documentation > > beaker% pydoc xmlcore.etree > no Python documentation found for 'xmlcore.etree' > > beaker% pydoc ctypes > no Python documentation found for 'ctypes' > > The documentation (of which there is plenty) can and will be folded > into the standard python docs. Most of the new modules in 2.5 went in > before their docs. > > - Where's the PEP? > > I don't see the need. The stuff that could go into a PEP about > formats and the like should go into the existing Distutils > documentation. It's a far more useful place, and many more people are > likely to find it there than in a PEP. > > - It's a huge amount of code (or "ball of mud"), or, it adds too many > features. > > Most of these have been added over the last 2 years in response to > feedback and requests from people on distutils-sig. There's been an > obvious pent-up demand for a bunch of this work, and now that > someone's working on it, these can get done. > > - It will break existing setup.py scripts! > > No it won't. If you don't type the letters 'import setuptools' into > your setup.py, it won't be affected. > > - Rewriting from scratch is bad > > This isn't a rewrite - it's built on top of distutils. > (An aside, I don't buy the "never rewrite" argument. As I mentioned in > an earlier message, look at urllib2, twisted and email for starters. > In addition, look at Firefox, Windows XP, and Mac OSX. Hell, Linux > could be considered a rewrite of Minix, once upon a time.) > > - Eggs are inferior to distribution-specific packaging > > Not all operating systems have a decent packaging system. The ones > that do, don't support multiple versions of the same library. In > addition, there's no reason why existing packaging systems can't just > bundle up the code as they do now - if they also add a .egg-info file > to the packages, that would be even better! Finally, these don't > support user installation of software. This is particularly useful in > a hosting environment. 
> > > And now let's look at some of the stuff that setuptools gives us: > > - We have a CPAN-type system > > I do quite a number of Python talks, and this is _always_ one of > the most requested features. There's been many attempts to write this, > none have been completed until now. If you honestly don't see that > this is a big thing for Python, then I am very, very suprised. I > suspect that this will be the #1 new feature of Python 2.5 that the > users will notice and be happy about. > > - Multiple installs of different versions of the same package, > including per-user installs. > > Again, as Python gets more widely used, this becomes a big issue. > Sure, it's not necessarily a killer argument for python-dev, but stuff > that's added to Python shouldn't just be just for the use of > python-dev. The multiple installed versions feature also avoids the > CPAN dependency hell problem - back when I used to work with Perl, > this was a constant source of nightmarish problems. > > - The "develop" mode > > This makes life that bit less painful all-round. > > - The plugin/extension support > > Extending distutils currently is a total pain in the arse. > > - Backwards compatibility > > easy_install even works with existing packages that use traditional > distutils, so long as they're in the Cheeseshop. Damn, this is nice. > If you don't want to do the work to change your installation code, > don't bother - it will still be useful. > > The conclusions: > > I'm a little suprised by the amount of fear and loathing this has > generated. To me, there are such obvious benefits that I don't see > why people are so vehemently against setuptools. I haven't seen any > arguments that have convinced me that this isn't the right thing to > do. Yes, there's still work to be done - but hell, we've only > released the first alpha so far. > > For inclusion in the standard library, the usual benchmark is that the > code offers useful functionality, and that it be the "best of breed". > setuptools clearly meets these two criteria. (Ok, it's really "only of > breed", but that also makes it "best", by default <wink>). It's also > been under development for over 2 years - according to svn, 0.0.1 was > checked into svn back in March 2004. > > I'm also suprised by how much some people seem to think that the > current state of distutils functionality is acceptable or desirable. > It's not - it's a mess. > > Finally, I'd like to point out that I think some of the hostility > towards Phillip's work has been excessive. He's done an amazing > amount of work on this (look at the distutils-sig archive for the > last two years for more), and produced something that's very very > useful. > > He deserves far more credit for this than he seems to have been > getting here. > > Anthony > -- > Anthony Baxter <anthony at interlink.com.au> > It's never too late to have a happy childhood. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From nnorwitz at gmail.com Thu Apr 20 09:32:56 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 20 Apr 2006 00:32:56 -0700 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code Message-ID: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> This spring and summer, Google will again provide stipends for students (18+, undergraduate thru PhD programs) to write new open-source code. The Python Software Foundation (PSF) http://www.python.org/psf/ will again act as a sponsoring organization in Google's Summer of Code, matching mentors and projects benefiting Python and Python users. Projects can include work on the core Python language, programmer utilities, libraries, packages, frameworks related to Python, or other Python implementations like Jython, PyPy, or IronPython. Please add your project ideas to the existing set at http://wiki.python.org/moin/SummerOfCode Mentoring instructions are also on this page. The deadline is soon, so please sign up as a mentor early. If you are a student considering a project, you should start deciding now. Feel free to ask questions on python-dev at python.org The main page for the Summer of Code is http://code.google.com/summerofcode.html At the bottom are links to StudentFAQ, MentorFAQ, and TermsOfService. The first two have the timeline. Note that student applications are due between May 1, 17:00 PST and May 8, 17:00 PST. People interested in mentoring a student though PSF are encouraged to contact me, Neal Norwitz at nnorwitz at gmail.com. People unknown to Guido or myself should find a couple of people known within the Python community who are willing to act as references. Feel free to contact me if you have any questions. I look forward to meeting many new mentors and students. Let's improve Python! Cheers, n From martin at v.loewis.de Thu Apr 20 09:42:20 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 09:42:20 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> Message-ID: <44473B5C.3000409@v.loewis.de> >> I don't understand it. > > Have you read the manuals? You mean, http://peak.telecommunity.com/DevCenter/setuptools Yes, I did. However, this would only enable me to use it (should I ever find a need). What I really need to understand is how all this stuff works inside. Now, I haven't actually *tried* to learn it so far. Fredrik's question was not "who thinks he could learn it" (I certainly could), I read his question as "who does understand it, as of today". 
I can't faithfully answer "yes" to that question. So far, I believe nobody has said "yes, I do understand it, and could maintain it if Phillip Eby became unavailable". > Please define "magic". "Functionality with no obvious implementation, or functionality implemented in an unobvious way". > Better, please point to the API functions or > classes, or the setup commands documented in the manual, to show me what > these things are that appear to be magic. Here are some that I either already knew, or discovered just now: 1. setuptools manages to replace pydoc on earlier Python versions. MAL assumed it does so by overwriting the files. This would be bad practice, but is the obvious solution. You then told him that this is not what actually happens (so it's already magic); instead, you arrange that Python finds your version before it finds its own. Again, this is still magic: How does it do that? If you append to sys.path, Python would still find its own version before it finds yours. So perhaps you insert another path at sys.path[0]? This would also be bad practice, but it would be the next-obvious approach. But I fully expect that you are now telling me that this is still not how it works. 2. setuptools manages to install code deep inside a subdirectory of site-python, and manages to install multiple versions simultaneously. How does that work? The first part (packages outside sys.path) can be solved with a .pth file (which I already consider bad practice, as it predates the package concept of Python); but that can't work for multiple versions: magic. I (think I) know the answer: there is a single .pth file, and that is *edited* on package installation time. This is a completely non-obvious approach. If Python needs to support multiple versions of a package (and there is apparently a need), that should be supported "properly", with a clear design, in a PEP. 3. namespace packages. From the documentation, I see that you pass namespace_packages= to setup, and that you put __import__('pkg_resources').declare_namespace(__name__) into the namespace package. How does that work? The documentation only says "This code ensures that the namespace package machinery is operating and that the current package is registered as a namespace package." Also, why not import pkg_resources pkg_resources.declare_namespace(__name__) What does declare_namespace actually do? 4. zip_safe. Documentation says "If this argument is not supplied, the bdist_egg command will have to analyze all of your project's contents for possible problems each time it buids an egg." How? Sounds like it is solving the halting problem, if it manages to find possible problems. > There do exist implementation details that are not included in user > documentation, but this is going to be true of any system. These details > are sometimes necessarily complex due to distutils limitations, behavioral > quirks of deployed packages using distutils, and the sometimes baroque > variations in sys.path resolution across platforms, Python versions, and > invocation methods. I trust that there is a good reason for each of them. However, for inclusion in the standard library, some of these should go away: - if you have distutils limitations, remove them - differences in Python versions shouldn't matter: you always know what the Python version is - if there are baroque variations in sys.path resolution across platforms, they should be removed, or modernized. Not sure what "invocation methods" are. 
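(A footnote on point 3, for whoever ends up documenting this: as far as I can tell the two spellings differ only in a side effect. A sketch, with an invented package name:)

    # acme/__init__.py  (an invented namespace package)
    __import__('pkg_resources').declare_namespace(__name__)

    # ...appears to be equivalent to:
    import pkg_resources
    pkg_resources.declare_namespace(__name__)
    # except that the second form also leaves the name "pkg_resources"
    # bound inside the package, i.e. acme.pkg_resources becomes importable.

My understanding of declare_namespace() itself is that it registers the package as a namespace package and extends its __path__, so that portions installed under different sys.path entries (for example, different eggs) are merged into one logical package; a definitive answer would still have to come from the code or from Phillip.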
>> This magic might do the "right thing" in many cases, and it might >> indeed help the user that the magic is present, yet I still do believe >> that magic is inherently evil: explicit is better than implicit. > > Are documented defaults "implicit" or "magic"? Documented defaults are explicit. > With respect to you and MAL, I think that some of your judgments regarding > setuptools may have perhaps been largely formed at a time last year when, > among other things: > > * That documentation section I just referenced didn't exist yet I don't think I ever complained about the lack of documentation. I would (personally) not trust the documentation, anyway, but always look at the code to find out how it actually works. I read documentation primarily to find out what the *intent* was. > These are significant changes that are directly relevant to the objections > that you and he raised (regarding startup time, path length, tools > compatibility, etc.), and which I added because of those objections. I > think that they are appropriate responses to the issues raised, in that > they allow the audience that cares about them (you, MAL, and system > packagers in general) to avoid the problems that those features' absence > caused. And I like these changes indeed - my (current) complaints are not about the functionality provided. Instead, I'm concerned about maintainability of this code. > It would probably helpful if you would both be as specific as possible in > your objections so that they can be addressed individually. If you don't > want setuptools in 2.5, let's be clear either on the specific > objections. If the objection is to the very *idea* of setuptools, that's > fine too, as long as we're clear that's what the objection is. setuptools combines several independent aspects, as far as I can tell: 1. automatic download of packages (actually ez_setup). This is a nice feature, and should be included. +1 2. enhancements to the development process, and improvements to the extensibility of distutils. Also nice, but should be part of distutils itself. +0 in its current form, +1 if it were distutils features. 3. "package resources". I dislike the term resources (it is not about natural gas, water, main memory, or processor cycles, right?); instead, this seems to provide access to "embedded" data files. Apparently, there is a need for it, so be it. -0 4. .egg files. -1 5. entry points. I don't understand that concept 6. package dependencies. +1 7. namespace packages. Apparently, there is a need, although I believe that flat is better than nested. -0 However, these are all besides my original objections: - I disliked the fact that nobody explicitly approved inclusion of setuptools. Now that Anthony Baxter did, I'm fine. - I still fear that the code of setuptools will turn out to be a maintenance nightmare. Of course, this doesn't have to concern me - I could just ignore all bug reports about that, and ignore it in the next revision of my book. However, I know that I can't take that position, hence my concerns. > So I would ask that you please make a list of what you would have to see > changed in setuptools before you would consider it viable for stdlib > inclusion, or simply admit that there are no changes that would satisfy > you, or that you don't know enough about it to say, or that you'd like it > to be kicked back to distutils-sig for more discussion ad infinitum, or > whatever your actual objections are. See above: now that Anthony approved it, I can accept it. 
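(An aside on point 5, since the concept keeps coming up: as far as I understand the documentation, an entry point is just a named reference to an importable object, declared by a plugin and looked up by a host application. A rough sketch, with every name invented:)

    # in the plugin's setup.py
    from setuptools import setup

    setup(
        name="ExamplePlugin",
        version="0.1",
        py_modules=["example_plugin"],
        install_requires=["ExampleApp>=1.0"],   # point 6: a declared dependency
        entry_points={
            "exampleapp.plugins": [
                "hello = example_plugin:HelloPlugin",   # name = module:attribute
            ],
        },
    )

    # in the host application
    import pkg_resources

    for ep in pkg_resources.iter_entry_points("exampleapp.plugins"):
        plugin = ep.load()    # imports example_plugin and returns HelloPlugin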
There is nothing you could possibly do to ease my concerns about maintainability, or my dislike of a Python-specific package format. > Meanwhile, this discussion has used > up the time that I otherwise would have spent writing 2.5 documentation > today (e.g., for the pkgutil additions). Same here. I spent several hours now typing setuptools email. Still, I don't think this is wasted time (or else I would not have typed this message). Regards, Martin From fredrik at pythonware.com Thu Apr 20 09:43:07 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 09:43:07 +0200 Subject: [Python-Dev] New-style icons, .desktop file References: <443F7B09.50605@doxdesk.com> Message-ID: <e27e2e$uug$1@sea.gmane.org> Andrew Clover wrote: > Morning! > > I've done some tweaks to the previously-posted-about icon set, taking > note of some of the comments here and on -list. you do wonderful stuff, and then you post the announcement as a followup to a really old message, to make sure that people using a threaded reader only stumbles upon this by accident... this should be on the python.org frontpage! > included PNG and SVG version of icons. The SVG unfortunately doesn't > preserve all the effects of the original Xara files, partly because > a few effects (feathering, bevels) can't be done in SVG 1.1, and > partly because the current conversion process is bobbins (but we're > working on that!). So the nice gradients and things are gone, but it > should still be of use as a base for anyone wanting to hack on the > icons. hack SVG? what am I waiting for ? > Files and preview here: > > http://doxdesk.com/img/software/py/icons2.zip > http://doxdesk.com/img/software/py/icons2.png brilliant! </F> From martin at v.loewis.de Thu Apr 20 09:51:13 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 09:51:13 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604201652.51448.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> <44472A9F.7050004@v.loewis.de> <200604201652.51448.anthony@interlink.com.au> Message-ID: <44473D71.6060107@v.loewis.de> Anthony Baxter wrote: > Not without a lot of the other stuff that's in setuptools. > >> That is says from itself that it is version 0.7a1dev-r45536 >> doesn't help to reduce that fear. > > It's had _two_ _years_ of quite active development, so the version > number of 0.7 is hardly a good indication. As far as all the other > stuff on the end of the version string - well, right now Python's SVN > trunk really could be considered 2.5a2dev-r45575. Right - I'm not concerned about the "a1dev-r45536" part. The 0.7 part bothers me; this is two years of development from a single person, still. I have projects that are much older than that that I wouldn't suggest for inclusion in Python :-) >> Requiring Phillip to rewrite distutils instead is certainly >> unfair - but I'm still unhappy with the path events take. > > He's written code on _top_ _of_ _distutils_. How is this bad? It makes distutils an implementation detail of setuptools. What little development distutils has seen will stop; all fixes will go into setuptools directly. Users will be told that they should switch to setuptools. Please face it: setuptools *is* the death of distutils. This might not be that bad in the long run, but it does have the risk of repeating, when setuptools eventually is where distutils is today: complete, and unmaintained. > See, I don't get the hostility thing. 
While I have some concerns about > the state of distutils today, I still admire Greg Ward's efforts in > producing the code, and Python is in a much better place than had he > not done the work. Responding to an effort like Greg's, or Phillip's, > with hostility is a fantastic way to discourage people from working > further on Python or on the code in question. Well, I appreciate other contributions from other people, and I have always encouraged people to contribute to Python. It's just that I dislike this *specific* package, for several reasons, some of which I consider objective. Regards, Martin From martin at v.loewis.de Thu Apr 20 09:58:42 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 09:58:42 +0200 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <e27e2e$uug$1@sea.gmane.org> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> Message-ID: <44473F32.3020700@v.loewis.de> Fredrik Lundh wrote: > you do wonderful stuff, and then you post the announcement as a > followup to a really old message, to make sure that people using a > threaded reader only stumbles upon this by accident... this should > be on the python.org frontpage! I also wonder what the actions should be for the Windows release. Are these "contributed" to Python? With work of art, I'm particular cautious to include them without a specific permission of the artist, and licensing terms under which to use them. And then, technically: I assume The non-vista versions should be included the subversion repository, and the vista versions ignored? Or how else should that work? Regards, Martin From anthony at interlink.com.au Thu Apr 20 10:01:27 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 18:01:27 +1000 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <44473D71.6060107@v.loewis.de> References: <200604201456.13148.anthony@interlink.com.au> <200604201652.51448.anthony@interlink.com.au> <44473D71.6060107@v.loewis.de> Message-ID: <200604201801.29943.anthony@interlink.com.au> On Thursday 20 April 2006 17:51, Martin v. L?wis wrote: > > He's written code on _top_ _of_ _distutils_. How is this bad? > > It makes distutils an implementation detail of setuptools. What > little development distutils has seen will stop; all fixes will > go into setuptools directly. Users will be told that they should > switch to setuptools. > Please face it: setuptools *is* the death of distutils. > This might not be that bad in the long run, but it does have > the risk of repeating, when setuptools eventually is where > distutils is today: complete, and unmaintained. I don't see why you assume it will be unmaintained. I plan to spend some time in the near future wrapping my brain around the code - heck, I was going to do it today, but spent time writing emails instead. :-) I also don't think it will be the "death" of distutils. I think that over time the two pieces of code will become closer together - hopefully for Python 3.0 we can formally merge the two. Augmenting or layering a more sophisticated piece of code over the top of the more simple code means that we can do more things, hopefully easier. And as people do more things, we will find the need to fix the bugs in the underlying code. > Well, I appreciate other contributions from other people, and I > have always encouraged people to contribute to Python. It's just > that I dislike this *specific* package, for several reasons, some > of which I consider objective. 
And you posed a number of concrete questions to PJE in the previous message, which is good - these are also some of the questions I had (but hadn't had the time to investigate yet). Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From martin at v.loewis.de Thu Apr 20 10:02:06 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 10:02:06 +0200 Subject: [Python-Dev] Raising objections In-Reply-To: <5.1.1.6.0.20060419214144.01ef9630@mail.telecommunity.com> References: <4446776E.5090702@v.loewis.de> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <5.1.1.6.0.20060419214144.01ef9630@mail.telecommunity.com> Message-ID: <44473FFE.1050107@v.loewis.de> Phillip J. Eby wrote: > How much any of that could have also applied to the distutils at one > time, I don't know. My point is merely that setuptools has enough > commercial value to make me believe that sponsorship for features > shouldn't be that hard to come by, and there are certainly worse things > I could do with my work days. I'm glad to hear that you seem to be finding support for ongoing work on setuptools (I'm *really* glad to hear that; it's always good when people get funded to work on open source). I'm also happy to hear that Neal wants to study the complete source base of setuptools. Perhaps that means I won't have to :-) Regards, Martin From theller at python.net Thu Apr 20 10:16:45 2006 From: theller at python.net (Thomas Heller) Date: Thu, 20 Apr 2006 10:16:45 +0200 Subject: [Python-Dev] valgrind reports In-Reply-To: <ee2a432c0604151737tdcad9c0t25d84eadc11e7ff3@mail.gmail.com> References: <ee2a432c0604151737tdcad9c0t25d84eadc11e7ff3@mail.gmail.com> Message-ID: <4447436D.8060701@python.net> Neal Norwitz wrote: > This was run on linux amd64. It would be great to run purify on > windows and other platforms. If you do, please report back here, even > if nothing was found. That would be a good data point. > > Summary of tests with problems: > > test_codecencodings_jp (1 invalid read) > test_coding (1 invalid read) > test_ctypes (crashes valgrind) > test_socket_ssl (many invalid reads) > test_sqlite (1 invalid read, 1 memory leak) I've tried to run python under valgrind on linux x86 (Ubuntu 5.10). Here's what I get - python was configured with '--without-pymalloc --with-pydebug' after uncommenting the Py_USING_MEMORY_DEBUGGER in Objects/obmalloc.c. Is there something missing in the suppressions file? Thanks, Thomas theller at tubu32:~/devel/trunk$ valgrind --suppressions=Misc/valgrind-python.supp ./python -c "" ==13632== Memcheck, a memory error detector. ==13632== Copyright (C) 2002-2005, and GNU GPL'd, by Julian Seward et al. ==13632== Using LibVEX rev 1367, a library for dynamic binary translation. ==13632== Copyright (C) 2004-2005, and GNU GPL'd, by OpenWorks LLP. ==13632== Using valgrind-3.0.1, a dynamic binary instrumentation framework. ==13632== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward et al. 
==13632== For more details, rerun with: -v ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F40FD: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F410C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F411B: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F4126: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F4263: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F426E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9DBA: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F4263: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E856C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9E33: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) 
==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8F426E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E856C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E9E33: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E484D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8EE875: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E491C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E7421: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8EC21C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E6338: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8EC24C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E6338: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8EC0C4: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E637C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8EC113: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E637C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) ==13632== ==13632== Conditional jump or move depends on uninitialised value(s) ==13632== at 0x1B8EC24C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E637C: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8F1A6D: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E4D0E: (within /lib/ld-2.3.5.so) ==13632== by 0x1B8E47C6: (within /lib/ld-2.3.5.so) [8426 refs] ==13632== ==13632== ERROR SUMMARY: 27 errors from 13 contexts (suppressed: 0 from 0) ==13632== malloc/free: in use at exit: 298255 bytes in 2363 blocks. ==13632== malloc/free: 16990 allocs, 14627 frees, 1927352 bytes allocated. ==13632== For counts of detected errors, rerun with: -v ==13632== searching for pointers to 2363 not-freed blocks. ==13632== checked 636672 bytes. ==13632== ==13632== LEAK SUMMARY: ==13632== definitely lost: 0 bytes in 0 blocks. ==13632== possibly lost: 9216 bytes in 64 blocks. ==13632== still reachable: 289039 bytes in 2299 blocks. ==13632== suppressed: 0 bytes in 0 blocks. ==13632== Reachable blocks (those to which a pointer was found) are not shown. ==13632== To see them, rerun with: --show-reachable=yes theller at tubu32:~/devel/trunk$ From fredrik at pythonware.com Thu Apr 20 10:18:32 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 10:18:32 +0200 Subject: [Python-Dev] setuptools in 2.5. 
References: <200604201456.13148.anthony@interlink.com.au><200604201652.51448.anthony@interlink.com.au><44473D71.6060107@v.loewis.de> <200604201801.29943.anthony@interlink.com.au> Message-ID: <e27g4q$5be$1@sea.gmane.org> Anthony Baxter wrote: > I also don't think it will be the "death" of distutils. I think that > over time the two pieces of code will become closer together - > hopefully for Python 3.0 we can formally merge the two. I was hoping that for Python 3.0, we could get around to unkludge the sys.path/meta_path/path_hooks/path_importer_cache big ball of hacks, possibly by replacing sys.path with something a bit more intelligent than a plain list. </F> From walter at livinglogic.de Thu Apr 20 10:35:54 2006 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Thu, 20 Apr 2006 10:35:54 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e27g4q$5be$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au><200604201652.51448.anthony@interlink.com.au><44473D71.6060107@v.loewis.de> <200604201801.29943.anthony@interlink.com.au> <e27g4q$5be$1@sea.gmane.org> Message-ID: <444747EA.1010707@livinglogic.de> Fredrik Lundh wrote: > Anthony Baxter wrote: > >> I also don't think it will be the "death" of distutils. I think that >> over time the two pieces of code will become closer together - >> hopefully for Python 3.0 we can formally merge the two. > > I was hoping that for Python 3.0, we could get around to unkludge the > sys.path/meta_path/path_hooks/path_importer_cache big ball of hacks, > possibly by replacing sys.path with something a bit more intelligent than > a plain list. +1000 I'd like to be able to simply import a file if I have the filename. And I'd like to be able to load sourcecode from an Oracle database and have useful "filename" and line number information if I get a traceback. Servus, Walter From greg.ewing at canterbury.ac.nz Thu Apr 20 10:40:57 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 20:40:57 +1200 Subject: [Python-Dev] 2.5a1 Performance In-Reply-To: <e26i67$pg3$1@sea.gmane.org> References: <4434081E.1000806@benjiyork.com> <200604061108.30416.anthony@interlink.com.au> <4445247F.1010903@egenix.com> <1f7befae0604181116jecd70bfwd3d0cebb1b50d006@mail.gmail.com> <444538C1.90608@egenix.com> <444603D9.1060600@egenix.com> <4446709D.9030507@egenix.com> <e26i67$pg3$1@sea.gmane.org> Message-ID: <44474919.6030503@canterbury.ac.nz> Terry Reedy wrote: > I took a look. The only thing that puzzles me is 'warp factor', which > appears exactly once. It's been put there via time machine in connection with the dilithium crystal support in that will be added in Python 7.0. You don't need to worry about it yet. -- Greg From greg.ewing at canterbury.ac.nz Thu Apr 20 10:50:33 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 20:50:33 +1200 Subject: [Python-Dev] Raising objections In-Reply-To: <e264b2$ega$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <e264b2$ega$1@sea.gmane.org> Message-ID: <44474B59.3000807@canterbury.ac.nz> Fredrik Lundh wrote: > (distutils and setuptools are over 15000 lines of code, according to sloc- > count. Ye cats! That's a *seriously* big ball of mud. I just checked, and the whole of Pyrex is only 17000 lines. 
-- Greg From talin at acm.org Thu Apr 20 10:31:30 2006 From: talin at acm.org (Talin) Date: Thu, 20 Apr 2006 08:31:30 +0000 (UTC) Subject: [Python-Dev] PEP 355 (object-oriented paths) Message-ID: <loom.20060420T100537-849@post.gmane.org> I didn't have a chance to comment earlier on the Path class PEP, but I'm dealing with an analogous situation at work and I'd like to weigh in on it. The issue is - should paths be specialized objects or regular strings? PEP 355 does an excellent job, I think, of presenting the case for paths being objects. And I certainly think that it would be a good idea to clean up some of the path-related APIs that are in the stdlib now. However, all that being said, I'd like to make some counter-arguments. There are a lot of programming languages out there today that have custom path classes, while many others use strings. A particularly useful comparison is Java vs. C#, two languages which have many aspects in common, but take diametrically opposite approaches to the handling of paths. Java uses a Path class, while C# uses strings as paths, and supplies a set of function-based APIs for manipulating them. Having used both languages extensively, I think that I prefer strings. One of the main reasons for this is that the majority of path manipulations are single operations - such as, take a base path and a relative path and combine them. Although you do occasionally see cases where a complex series of operations is performed on a path, such cases tend to be (a) rare, and (b) not handled well by the standard API, in the sense that what is being done to the path is not something that was anticipated by the person who wrote the path API in the first place. Given that the ultimate producers and consumers of paths (that is, the filesystem APIs, the input fields of dialog boxes, the argv array) know nothing about Path objects, the question is, is it worth converting to a special object and back again just to do a simple concatenate? I think that I would prefer to have a nice orthogonal set of path manipulation functions. Another reason why I am a bit dubious about a class-based approach is that it tends to take anything that is related to a filepath and lump them into a single module. I think that there are some fairly good reasons why different path- related functions should be in different modules. For example, one thing that irks me (and others) about the Path class in Java is that it makes no distinction between methods that are merely textual conversions, and methods which actually go out and touch the disk. I would rather that functions that invoke filesystem activity to be partitioned away from functions that merely involve string manipulation. Creating a tempfile, for example, or determining whether a file is writeable should not be in the same bucket as determining the file extension, or whether a path is relative or absolute. What I would like to see, instead, is for the various path-related functions to be organized into a clear set of categories. For example, if "os.path" is the module for pure operations on paths, without reference to the filesystem, then the current path separator character should be a member of that module, not the "os" module. 
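To make the comparison concrete, here is the same single operation in both styles; the Path class below is hypothetical, in the spirit of PEP 355, and not an existing stdlib API:

    import os.path

    base = "/home/someuser"                      # invented example value

    # string-based: pure text manipulation, no filesystem access
    full = os.path.join(base, "data", "config.ini")
    ext = os.path.splitext(full)[1]

    # the hypothetical object-based spelling of the same thing would be
    # something like (PEP 355 style, not an existing stdlib class):
    #   full = Path(base) / "data" / "config.ini"
    #   ext = full.ext

Whether the second form buys enough over the first for such one-shot operations is exactly the trade-off being weighed here.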
-- Talin From greg.ewing at canterbury.ac.nz Thu Apr 20 11:00:36 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 21:00:36 +1200 Subject: [Python-Dev] Raising objections In-Reply-To: <5.1.1.6.0.20060419162308.01e20718@mail.telecommunity.com> References: <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> <4445D80E.8050008@canterbury.ac.nz> <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <4445D80E.8050008@canterbury.ac.nz> <5.1.1.6.0.20060419144306.01e1a870@mail.telecommunity.com> <5.1.1.6.0.20060419162308.01e20718@mail.telecommunity.com> Message-ID: <44474DB4.3070507@canterbury.ac.nz> Phillip J. Eby wrote: > If they have Pyrex installed, setuptools uses > Pyrex to rebuild the .c from the .pyx. I hope it would only do this if the .pyx was newer than the .c. It's probably not a good idea to assume that just because Pyrex is around, the user wants to use it in all cases. He might be installing a package distributed by someone with a different version of Pyrex that wasn't quite compatible. If he's not modifying the .pyx files, it would be safer to just use the provided .c files. -- Greg From rganesan at myrealbox.com Thu Apr 20 11:02:45 2006 From: rganesan at myrealbox.com (Ganesan Rajagopal) Date: Thu, 20 Apr 2006 14:32:45 +0530 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> Message-ID: <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> >>>>> Neal Norwitz <nnorwitz at gmail.com> writes: > Please add your project ideas to the existing set at > http://wiki.python.org/moin/SummerOfCode I'd like to see ctypes supported on GCC ARM platforms. ARM is the only major Debian architecture that's not supported by ctypes. The underlying issue is lack of closure API support for ARM in libffi. I see a patch available at http://handhelds.org/~pb/arm-libffi.dpatch, that should be hopefully a good starting point. ctypes CVS has a libffi_arm_wince directory, which also seems to support closure API. Ganesan -- Ganesan Rajagopal (rganesan at debian.org) From fredrik at pythonware.com Thu Apr 20 11:15:42 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 11:15:42 +0200 Subject: [Python-Dev] Raising objections References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com><200604191457.24195.anthony@interlink.com.au><4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de><e264b2$ega$1@sea.gmane.org> <44474B59.3000807@canterbury.ac.nz> Message-ID: <e27jfv$fgj$1@sea.gmane.org> Greg Ewing wrote: > Fredrik Lundh wrote: > > > (distutils and setuptools are over 15000 lines of code, according to sloc- > > count. > > Ye cats! That's a *seriously* big ball of mud. I just checked, > and the whole of Pyrex is only 17000 lines. correction: it's actually only 14000 lines, but it's still the largest hand- written component we have in the standard library (when combined). ~90 modules, ~120 classes. (the mac library is larger, and so is encodings, but large parts of those are autogenerated from character databases and header files. 
and idle is larger than distutils, but smaller than distutils+setuptools) </F> From greg.ewing at canterbury.ac.nz Thu Apr 20 11:35:00 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 21:35:00 +1200 Subject: [Python-Dev] Raising objections In-Reply-To: <200604201039.29138.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> Message-ID: <444755C4.7040304@canterbury.ac.nz> Anthony Baxter wrote: >>http://www.joelonsoftware.com/articles/fog0000000069.html From what I remember, he didn't actually say that you should never rewrite anything, but that if you do, you need to be prepared for a very long period of time when nothing new is working, and that *if you are a commercial company* this is a bad idea. But Python is not dependent on getting new features out the door before its competitors in order to survive, so this doesn't really apply. It doesn't matter if people have to make do with the old distutils for a while until something better is available. -- Greg From theller at python.net Thu Apr 20 12:21:53 2006 From: theller at python.net (Thomas Heller) Date: Thu, 20 Apr 2006 12:21:53 +0200 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code In-Reply-To: <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> Message-ID: <444760C1.3010401@python.net> Ganesan Rajagopal wrote: >>>>>> Neal Norwitz <nnorwitz at gmail.com> writes: > >> Please add your project ideas to the existing set at > >> http://wiki.python.org/moin/SummerOfCode > > I'd like to see ctypes supported on GCC ARM platforms. ARM is the only major > Debian architecture that's not supported by ctypes. The underlying issue is > lack of closure API support for ARM in libffi. I see a patch available at > http://handhelds.org/~pb/arm-libffi.dpatch, that should be hopefully a good > starting point. ctypes CVS has a libffi_arm_wince directory, which also > seems to support closure API. I guess you should add this idea to one of the wiki pages (or did you already, and I missed it?) Thomas From guido at python.org Thu Apr 20 12:33:30 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 11:33:30 +0100 Subject: [Python-Dev] Raising objections In-Reply-To: <444755C4.7040304@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444755C4.7040304@canterbury.ac.nz> Message-ID: <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> On 4/20/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > Anthony Baxter wrote: > > >>http://www.joelonsoftware.com/articles/fog0000000069.html > > From what I remember, he didn't actually say that you > should never rewrite anything, but that if you do, you > need to be prepared for a very long period of time > when nothing new is working, and that *if you are a > commercial company* this is a bad idea. > > But Python is not dependent on getting new features > out the door before its competitors in order to > survive, so this doesn't really apply. It doesn't > matter if people have to make do with the old > distutils for a while until something better is > available. I think it's more a matter of scale. 
Rewriting small pieces of code is fine. Joel's argument applies to attempts at rewriting *entire systems* that cause massive loss of information on why things were the way they were. I don't think distutils, huge "ball of mud" as it is, hass quite crossed the size threshold that Joel was thinking of; but it *is* an extremely valuable repository of outlandish information about the bizarre details of many different platform, and as such I'd be hesitant undertaking a wholesale rewrite. I'd rather recommend the approach that Joel suggests for truly large systems: refactoring smaller components while keeping the overall structure intact. I find 100% backwards compatibility perhaps less interesting (not uninteresting, just less) than preserving the body of knowledge that is uniquely encapsulated in distutils. Unfortunately, this is mixed in with some stuff that isn't part of distutils' "core competency", like text utilities, process spawning, and option parsing. These should (eventually, when the 2.1 compatibility requirement is lifted) be refactored to use (or be merged into) the available stdlib facilities for such functionaliy. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From anthony at interlink.com.au Thu Apr 20 10:22:43 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 18:22:43 +1000 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44473B5C.3000409@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> Message-ID: <200604201822.45318.anthony@interlink.com.au> On Thursday 20 April 2006 17:42, Martin v. L?wis wrote: > 4. .egg files. -1 As far as I understand it, an egg file is just a zipimport format zip file with some additional metadata. You can also install the egg files in an unpacked way, if you prefer that. I don't understand the objection here - it's no better or worse than countless packages in site-packages, and if it gives us multiple versions of the same code, all the better. -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Thu Apr 20 12:49:58 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 20 Apr 2006 20:49:58 +1000 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e27g4q$5be$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <200604201801.29943.anthony@interlink.com.au> <e27g4q$5be$1@sea.gmane.org> Message-ID: <200604202049.59601.anthony@interlink.com.au> On Thursday 20 April 2006 18:18, Fredrik Lundh wrote: > I was hoping that for Python 3.0, we could get around to unkludge > the sys.path/meta_path/path_hooks/path_importer_cache big ball of > hacks, possibly by replacing sys.path with something a bit more > intelligent than a plain list. Oh please, yes. Replacing the current import code is one of the things I really really want to see in 3.0. Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. 
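To illustrate the point above that an egg is "just a zipimport format zip file with some additional metadata": any zip file with packages at its root becomes importable simply by putting it on sys.path, via the zipimport hook that has shipped with Python since 2.3. The file and package names here are made up for the example:

    import sys

    # a hypothetical egg dropped somewhere on disk
    sys.path.insert(0, "/tmp/SomePackage-1.0-py2.4.egg")

    import somepackage                 # loaded straight out of the zip
    print somepackage.__file__         # the path points inside the .egg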
From guido at python.org Thu Apr 20 13:08:16 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 12:08:16 +0100 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44473B5C.3000409@v.loewis.de> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> Message-ID: <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> On 4/20/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > 3. "package resources". I dislike the term resources (it is not about > natural gas, water, main memory, or processor cycles, right?); > instead, this seems to provide access to "embedded" data files. > Apparently, there is a need for it, so be it. -0 The "resources" name is actually quite a common meme; e.g. the Google internal API for this is in a module named "resources.py" (Google has its own set of functionality not unlike eggs). I'm also surprised you're -0 on the functionality. Lots of code has associated data files that must be distributed together with it, and IMO this is an absolutely essential feature for any packaging and distribution tool. This is another area where API standardization is important; as soon as modules are loaded out of a zip file, the traditional approach of looking relative to os.path.dirname(__file__) no longer works. > - I disliked the fact that nobody explicitly approved inclusion > of setuptools. Now that Anthony Baxter did, I'm fine. That's my fault. I told Phillip in no unclear terms that I wanted it. That was enough of an approval for me (and for Neal and Anthony, who were aware AFAIK). But I forgot to clarify this to python-dev. (Like with the proposal to turn print into a function, I assumed it was obvious that it was a good idea. :-) > - I still fear that the code of setuptools will turn out to be > a maintenance nightmare. AFAICT Phillip has explicitly promised he will maintain it (or if he doesn't, I expect that he would gladly do so if you asked). This has always been sufficient as a "guarantee" that a new module isn't orphaned. Beyond that, this objection is FUD; the circumstances of the original distutils aren't likely to be replicated here. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Thu Apr 20 13:09:33 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 23:09:33 +1200 Subject: [Python-Dev] Raising objections In-Reply-To: <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444755C4.7040304@canterbury.ac.nz> <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> Message-ID: <44476BED.9090806@canterbury.ac.nz> Guido van Rossum wrote: > I'd rather recommend the approach that Joel suggests for truly large > systems: refactoring smaller components while keeping the overall > structure intact. That's fine as long as the overall structure isn't the very thing that's wrong and needs to be fixed. 
Incremental refactoring is a hill-climbing approach, and there's a well-known problem with hill-climbing algorithms -- they have a tendency to get stuck on the top of a small hill and fail to find the big hill. I appreciate the loss-of-knowledge problem, I really do. But maybe there is a way of dealing with it in those cases where a change of structure really is needed. Instead of throwing away the old code, go over it line by line, sucking out the knowledge, digesting it, and incorporating it into the relevant places in the new structure. When you've finished, and all that's left of the old code is a dry, empty husk, then you can throw it away. Perhaps this process could be called "pupation" -- breaking down the old structure and building something new out of the parts, without wasting any of the original material. I'm not claiming that this necessarily has to be done with distutils, but if it did need to be done, I think it could be done, with sufficient effort. I do think that the fact nobody but the now-absent author seems to understand the internals of distutils is a very serious problem, and something needs to be done to make it easier to understand and maintain. -- Greg From guido at python.org Thu Apr 20 13:09:31 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 12:09:31 +0100 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604202049.59601.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> <200604201801.29943.anthony@interlink.com.au> <e27g4q$5be$1@sea.gmane.org> <200604202049.59601.anthony@interlink.com.au> Message-ID: <ca471dc20604200409r7890ba45k41ea127fec6e57f7@mail.gmail.com> On 4/20/06, Anthony Baxter <anthony at interlink.com.au> wrote: > On Thursday 20 April 2006 18:18, Fredrik Lundh wrote: > > I was hoping that for Python 3.0, we could get around to unkludge > > the sys.path/meta_path/path_hooks/path_importer_cache big ball of > > hacks, possibly by replacing sys.path with something a bit more > > intelligent than a plain list. > > Oh please, yes. Replacing the current import code is one of the things > I really really want to see in 3.0. Please no more "metoos" here; I've already forwarded this to the py3k list with an enthusiastic recommendation. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From rganesan at myrealbox.com Thu Apr 20 13:02:12 2006 From: rganesan at myrealbox.com (Ganesan Rajagopal) Date: Thu, 20 Apr 2006 16:32:12 +0530 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> <444760C1.3010401@python.net> Message-ID: <xkvhu08oa4sr.fsf@grajagop-lnx.cisco.com> >>>>> Thomas Heller <theller at python.net> writes: > Ganesan Rajagopal wrote: >>>>>>> Neal Norwitz <nnorwitz at gmail.com> writes: >> >>> Please add your project ideas to the existing set at >> >>> http://wiki.python.org/moin/SummerOfCode >> >> I'd like to see ctypes supported on GCC ARM platforms. ARM is the only major >> Debian architecture that's not supported by ctypes. The underlying issue is >> lack of closure API support for ARM in libffi. I see a patch available at >> http://handhelds.org/~pb/arm-libffi.dpatch, that should be hopefully a good >> starting point. ctypes CVS has a libffi_arm_wince directory, which also >> seems to support closure API. > I guess you should add this idea to one of the wiki pages (or did you already, > and I missed it?) 
I started editing the page, then I thought I'd first post here to get your feedback :-). Ganesan -- Ganesan Rajagopal (rganesan at debian.org) From guido at python.org Thu Apr 20 13:19:29 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 12:19:29 +0100 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code In-Reply-To: <xkvhu08oa4sr.fsf@grajagop-lnx.cisco.com> References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> <444760C1.3010401@python.net> <xkvhu08oa4sr.fsf@grajagop-lnx.cisco.com> Message-ID: <ca471dc20604200419xc637b4bx4ec0956156380a20@mail.gmail.com> On 4/20/06, Ganesan Rajagopal <rganesan at myrealbox.com> wrote: > I started editing the page, then I thought I'd first post here to get your > feedback :-). That approach doesn't scale; please use the wiki for feedback, not the mailing list (because people reading the wiki won't easily have access to the relevant feedback if it's done on the list). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Thu Apr 20 13:34:13 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 20 Apr 2006 23:34:13 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> Message-ID: <444771B5.4050504@canterbury.ac.nz> Guido van Rossum wrote: > The "resources" name is actually quite a common meme; I believe it goes back to the original Macintosh, which was the first and only computer in the world to have files with something called a "resource fork". The resource fork contained pieces of data called "resources". Then Microsoft stole the name, and before you knew, everyone was using it. It's all been downhill from there. :-) -- Greg From rganesan at myrealbox.com Thu Apr 20 13:30:34 2006 From: rganesan at myrealbox.com (Ganesan Rajagopal) Date: Thu, 20 Apr 2006 17:00:34 +0530 Subject: [Python-Dev] Python Software Foundation seeks mentors and students for Google Summer of Code References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> <xkvhbquwbowa.fsf@grajagop-lnx.cisco.com> <444760C1.3010401@python.net> <xkvhu08oa4sr.fsf@grajagop-lnx.cisco.com> <ca471dc20604200419xc637b4bx4ec0956156380a20@mail.gmail.com> Message-ID: <xkvhlku0a3hh.fsf@grajagop-lnx.cisco.com> >>>>> "Guido" == Guido van Rossum <guido at python.org> writes: > On 4/20/06, Ganesan Rajagopal <rganesan at myrealbox.com> wrote: >> I started editing the page, then I thought I'd first post here to get your >> feedback :-). > That approach doesn't scale; please use the wiki for feedback, not the > mailing list (because people reading the wiki won't easily have access > to the relevant feedback if it's done on the list). Makes sense. Done. 
Ganesan -- Ganesan Rajagopal (rganesan at debian.org) From ncoghlan at gmail.com Thu Apr 20 13:49:43 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 20 Apr 2006 21:49:43 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> References: <20060419142105.GA32149@rogue.amk.ca> <20060418185518.0359E1E407C@bag.python.org> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> Message-ID: <44477557.8090707@gmail.com> Phillip J. Eby wrote: > At 11:41 PM 4/19/2006 +1000, Nick Coghlan wrote: >> Given that naming though, I think contextlib.contextmanager should be >> renamed >> to contextlib.context. > > The name is actually correct under this terminology arrangement. > @contextmanager *returns* a context manager. That's like saying we should describe the result of calling a generator function as a generator-iterable because it has an __iter__ method. The longer name made sense when "context manager" was the term for the object with __enter__ and __exit__ methods (the way I originally wrote it when updating the PEP after the addition of the __context__ method). While I agree that flipping the terminology the other way around makes sense, it *also* flips the effect of calling a decorated generator function so that what it returns is now appropriately termed a generator-context (since it provides __enter__ and __exit__ methods directly). These contexts are context managers only in the same sense that iterators are also iterables. We shouldn't break the parallels between iterable-iterator and context manager-context without a good reason - and so far I haven't heard one. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From fredrik at pythonware.com Thu Apr 20 13:56:19 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 13:56:19 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) References: <20060418005956.156301E400A@bag.python.org><5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com><5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com><444537BC.7030702@egenix.com><5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com><e23n9h$pqi$1@sea.gmane.org><5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com><44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> Message-ID: <e27st4$dnf$1@sea.gmane.org> "Guido van Rossum wrote: > > - I still fear that the code of setuptools will turn out to be > > a maintenance nightmare. > > AFAICT Phillip has explicitly promised he will maintain it (or if he > doesn't, I expect that he would gladly do so if you asked). This has > always been sufficient as a "guarantee" that a new module isn't > orphaned. I'm not that worried about maintenance of the tool itself, but supporting "it's magic! (as long as you use the correct combination of one-letter options)" stuff is no fun at all. 
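Coming back to the @contextmanager naming question above, a minimal example of the decorator in use (roughly as it currently stands in contextlib; the opened() helper is just an illustration) may help keep the terminology straight:

    from __future__ import with_statement   # needed on 2.5
    from contextlib import contextmanager

    @contextmanager
    def opened(filename):
        f = open(filename)     # runs on entry
        try:
            yield f            # the value bound by "as"
        finally:
            f.close()          # runs on exit, even after an exception

    with opened("/etc/hostname") as f:       # arbitrary example file
        print f.read()

The dispute is over what to call opened (the decorated generator function) versus what to call the object it returns, which is the thing that directly provides the __enter__/__exit__ methods.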
</F> From nidoizo at yahoo.com Thu Apr 20 14:28:47 2006 From: nidoizo at yahoo.com (Nicolas Fleury) Date: Thu, 20 Apr 2006 08:28:47 -0400 Subject: [Python-Dev] PEP 359: The "make" Statement In-Reply-To: <d11dcfba0604171815g5966be6cj9af692a2be9cc9ed@mail.gmail.com> References: <d11dcfba0604130858g42bac3d6gda10505c70c4eb2d@mail.gmail.com> <rowen-B6C55B.17194317042006@sea.gmane.org> <d11dcfba0604171815g5966be6cj9af692a2be9cc9ed@mail.gmail.com> Message-ID: <e27upv$kkn$1@sea.gmane.org> Steven Bethard wrote: > On 4/17/06, Russell E. Owen <rowen at cesmail.net> wrote: >> At some point folks were discussing use cases of "make" where it was >> important to preserve the order in which items were added to the >> namespace. >> >> I'd like to suggest adding an implementation of an ordered dictionary to >> standard python (e.g. as a library or built in type). It's inherently >> useful, and in this case it could be used by "make". > > Not to argue against adding an ordered dictionary somewhere to Python, > but for the moment I've been convinced that it's not worth the > complication to allow the dict in which the make-statement's block is > executed to be customized. The original use case had been > XML-building, and there are much nicer solutions using the > with-statement than there would be with the make-statement. If you > think you have a better (non-XML) use case though, I'm willing to > reconsider it. I use that to define C/C++ structures to be generated in Python. I have to make some hacks to make that work with metaclasses, and I can't validate a copy-paste error that would produce two members of the same name (only the last one is kept). Please reconsider it. Regards, Nicolas From amk at amk.ca Thu Apr 20 15:33:43 2006 From: amk at amk.ca (A.M. Kuchling) Date: Thu, 20 Apr 2006 09:33:43 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <444721F3.6070704@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> Message-ID: <20060420133343.GA2111@rogue.amk.ca> On Thu, Apr 20, 2006 at 07:53:55AM +0200, "Martin v. L?wis" quoted: > > It is flatly not possible to "fix" distutils and preserve backwards > > compatibility. Would it be possible to figure what parts are problematic, and introduce PendingDeprecationWarnings or DeprecationWarnings so that we can fix things? > > Sometimes you _have_ to rewrite. I point to > > urllib->urllib2, asyncore->twisted, rfc822/mimelib/&c->email. > > If I had the time, I would question each of these. Yes, it is > easier for the author of the new package to build "in the > green", but it is (nearly) never necessary, and almost always > bad for the project. I don't mind rewriting much, but hate leaving the original code in place; this is confusing to new users, even if it's convenient for existing users. How many HTML parsers are in the core now? (My gut feeling is that that Python's adoption curve has flattened, so it's probably now more important to keep existing users happy, so the time for jettisoning modules may be past.) --amk From amk at amk.ca Thu Apr 20 15:40:29 2006 From: amk at amk.ca (A.M. 
Kuchling) Date: Thu, 20 Apr 2006 09:40:29 -0400 Subject: [Python-Dev] Distutils for Python 2.1 (was "Raising objections") In-Reply-To: <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444755C4.7040304@canterbury.ac.nz> <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> Message-ID: <20060420134029.GB2111@rogue.amk.ca> On Thu, Apr 20, 2006 at 11:33:30AM +0100, Guido van Rossum wrote: > Unfortunately, this is mixed in with some stuff that isn't part of > distutils' "core competency", like text utilities, process spawning, > and option parsing. These should (eventually, when the 2.1 > compatibility requirement is lifted) be refactored to use (or be > merged into) the available stdlib facilities for such functionaliy. The 2.1 requirement was originally imposed because Greg Ward would make standalone Distutil releases. Greg is too busy working at his job, going camping on weekends, and being best man [1] to make new releases these days. I don't see anyone else wanting to make new releases, so Distutils can be like the rest of the stdlib: it'll only be guaranteed to work with the Python version it's shipped with, and can therefore use new modules. I can remove the comment from the PEP if no one objects. --amk [1] http://www.flickr.com/photos/airynothing/53483215/ From guido at python.org Thu Apr 20 14:41:59 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 13:41:59 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <44477557.8090707@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> Message-ID: <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> On 4/20/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > Phillip J. Eby wrote: > > At 11:41 PM 4/19/2006 +1000, Nick Coghlan wrote: > >> Given that naming though, I think contextlib.contextmanager should be > >> renamed to contextlib.context. > > > > The name is actually correct under this terminology arrangement. > > @contextmanager *returns* a context manager. > > That's like saying we should describe the result of calling a generator > function as a generator-iterable because it has an __iter__ method. > > The longer name made sense when "context manager" was the term for the object > with __enter__ and __exit__ methods (the way I originally wrote it when > updating the PEP after the addition of the __context__ method). > > While I agree that flipping the terminology the other way around makes sense, > it *also* flips the effect of calling a decorated generator function so that > what it returns is now appropriately termed a generator-context (since it > provides __enter__ and __exit__ methods directly). > > These contexts are context managers only in the same sense that iterators are > also iterables. We shouldn't break the parallels between iterable-iterator and > context manager-context without a good reason - and so far I haven't heard one. Sorry Nick, but you've still got it backwards. 
The name of the decorator shouldn't indicate the type of the return value (of calling the decorated generator) -- it should indicate how we think about the function. Compare @staticmethod and @classmethod -- the return type doesn't enter into it. I think of the function/generator itself as the context manager; it returns a context. (Yet another analogy: if generators had to be decorated explicitly, we'd name it @generator, even though what it returns when called is properly an iterator.) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Thu Apr 20 15:19:09 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 20 Apr 2006 15:19:09 +0200 Subject: [Python-Dev] 3rd party extensions hot-fixing the stdlib (setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060419150705.045d2ce0@mail.telecommunity.com> Message-ID: <44478A4D.7020304@egenix.com> Phillip J. Eby wrote: >>>>>> Why should a 3rd party extension be hot-fixing the standard >>>>>> Python distribution ? >>>>> Because setuptools installs things in zip files, and older versions of >>>>> pydoc don't work for packages zip files. >>>> That doesn't answer my question. >>> That is the answer to the question you asked: "why hot-fix?" Because >>> setuptools uses zip files, and older pydocs crash when trying to display >>> the documentation for a package (not module; modules were fine) that is in >>> a zip file. >> I fail to see the relationship between setuptools and pydoc. > > People blame setuptools when pydoc doesn't work on packages in zip > files. Rather than refer to some theoretical argument why it's not my > fault and I shouldn't be the one to fix it, I prefer to fix the problem. And people come to python-dev to complain about why a Python patch level release doesn't seem to fix some other problem in that same code that is documented to be fixed in the release. Don't you realize the general problem with this kind of approach ? BTW: if setuptools would stop defaulting to using .egg files as means of installation and instead default to using their unzipped form, the problem would go away and setuptools would again be in line with the Python standard of installing modules and packages. (For the interested: see my reply to Anthony for details or the fruitless discussions on distutils-sig on this matter.) >> The setuptools distribution is not the authoritative source for >> these kinds of fixes and that should be made clear by separating >> those parts out into a different package and making the >> installation an explicit user option. > > ...which nobody will use, following which they will still blame setuptools > for pydoc's failure. > > But I do agree that it might be a good idea to: > > 1) tell people about the issue > 2) tell people that the fix is being installed > 3) make it easy to remove the fix > > However, maybe I should just create 'pydoc-hotfix' and 'doctest-hotfix' > packages on PyPI and then tell people to "easy_install pydoc-hotfix > doctest-hotfix". Yes, please ! 
Please use a different name for these hot-fixes, though, so that it's clear that they only apply to specific Python versions and are in fact, back-ports of Python 2.5 versions of those modules. > Or perhaps just create a "pep302-hotfixes" package that > includes all of the PEP 302 support changes for linecache, traceback, and > so on, although I'm not sure how many of those can be installed in such a > way that the fixes would be active prior to the stdlib versions being imported. > > I suppose I could handle the "nobody will use it" problem by having > easy_install nag about the hotfixes not being installed when running under > 2.3 or 2.4. Fair enough. >> You should also note that users won't benefit from bug fixed >> versions of e.g. such modules in patch level releases. > > The pydoc fixes to support zip files are too extensive to justify > backporting, as they rely on features provided elsewhere. So if bug fixes > occur on the 2.5 trunk, I would have to update the hotfix versions directly. Right. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 20 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From fredrik at pythonware.com Thu Apr 20 15:38:38 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 15:38:38 +0200 Subject: [Python-Dev] Distutils for Python 2.1 (was "Raising objections") References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com><4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de><200604201039.29138.anthony@interlink.com.au><444755C4.7040304@canterbury.ac.nz><ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> <20060420134029.GB2111@rogue.amk.ca> Message-ID: <e282su$3rc$1@sea.gmane.org> A.M. Kuchling wrote: > The 2.1 requirement was originally imposed because Greg Ward would > make standalone Distutil releases. Greg is too busy working at his > job, going camping on weekends, and being best man [1] to make new > releases these days. from python-announce: From: Greg Ward Subject: ANNOUNCE: Optik 1.5.1 Date: 2006-04-20 01:46:20 GMT Optik 1.5.1 is now available, just 16 months after I first planned to release it (sigh). ... </F> From mal at egenix.com Thu Apr 20 15:47:47 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 20 Apr 2006 15:47:47 +0200 Subject: [Python-Dev] setuptools in 2.5. (summary) In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <44479103.4040102@egenix.com> Anthony Baxter wrote: > In an attempt to help this thread reach some sort of resolution, > here's a collection of arguments against and in favour of setuptools > in 2.5. My conclusions are at the end. Thanks for the summary. I'd like to add some important aspects (for me at least) that are missing: - setuptools should not change the standard distutils "install" command to install everything as eggs Eggs are just one distribution format out of many. They do server their purpose, just like RPMs, DEBs or Windows installers do. However, when running "python setup.py install" you are in fact installing from source, so there's no need to wrap things up again. 
The distutils default of actually installing things in the standard Python is good, has worked for years and should continue to do so. The extra information needed by the dependency checking can easily be added to the package directory of the installed package or stored elsewhere in a repository of installed packages or as separate egg-info directory if we want to stick with setuptools' way of using the path name for getting meta-information on a package. Placing the egg-files into the system as ZIP files should be an option, e.g. as separate "install_egg" command, not the default. - setuptools includes many good additions to distutils We should remember that setuptools will get part of the stdlib, so instead of monkey-patching distutils, a lot of setuptools features can be ported to distutils directly. For things that don't fit distutils, proper hooks should be added, so that extending distutils in the setuptools way can be done by the standard distutils extension mechanisms. I think that if Phillip were more open to reconsidering design decisions in setuptools like the decision to replace the install command instead of just adding a new install_egg command (together with a bdist_egg), quite a lot of code could be directly imported into core distutils. Even for things like his alternative sdist implementation could go in, albeit using a different command name. - arguments like "just works" simply don't count Why not ? Because "just works" is not well-defined. If an implementation makes too many magical decisions for the user, you end up with a tunnel - with only one way in and one way out. In reality there are too many things you'd need magical decisions for. A generic distribution mechanism will never be capable of making all the right decisions for a user. Adding endless lists of options doesn't make things easier either. Let's face it: there isn't the one right way to do software distribution. What's good about it: there doesn't need to be. The package authors will know which particular tweaks are needed to make installing their package a breeze and easy for the user. So the right way to get close the "just works" idea, is to leave a door open for module authors and packages to add such functionality. Freedom of choice is often enough a lot better than any variant of "just works". Explicit is better than implicit. Good defaults help. - setuptools is not ready for prime-time This is not quite right. setuptools works and has been used by quite a few people in the past. However, just dumping setuptools into the stdlib is considered a less than ideal approach. Instead, setuptools should get *integrated* into the stdlib. This process will take longer than the current 2.5 release cycle. We should at least mark setuptools as it is now in the stdlib as experimental and leave open the possibility of making changes to it. > The arguments against: > > - Someone should instead just fix distutils. > > Right. And the amount of yelling if distutils changed in a non-b/w > compat way wouldn't be high. There's also the basic problem that the > distutils code is _horrible_. > > - It monkeypatches pydoc and distutils! > > It only monkeypatches pydoc when the separate setuptools installer > is used on older Pythons. How is this relevant for this discussion of > Python 2.5? The monkeypatching for distutils should be reduced - see > AMK's message for a breakdown of this. 
> > - Documentation > > beaker% pydoc xmlcore.etree > no Python documentation found for 'xmlcore.etree' > > beaker% pydoc ctypes > no Python documentation found for 'ctypes' > > The documentation (of which there is plenty) can and will be folded > into the standard python docs. Most of the new modules in 2.5 went in > before their docs. > > - Where's the PEP? > > I don't see the need. The stuff that could go into a PEP about > formats and the like should go into the existing Distutils > documentation. It's a far more useful place, and many more people are > likely to find it there than in a PEP. The PEP is needed to document the current discussion, not the setuptools technical documentation (this should of course go into the standard Python documentation). Your summary along with the opinions of the ones who have commented it would be a good start. The PEP should also include the lists of things Martin and Fredrik have put forward. Finally, the goal of the PEP should be to list the steps needed to *integrate* setuptools into the stdlib. > - It's a huge amount of code (or "ball of mud"), or, it adds too many > features. > > Most of these have been added over the last 2 years in response to > feedback and requests from people on distutils-sig. There's been an > obvious pent-up demand for a bunch of this work, and now that > someone's working on it, these can get done. > > - It will break existing setup.py scripts! > > No it won't. If you don't type the letters 'import setuptools' into > your setup.py, it won't be affected. > > - Rewriting from scratch is bad > > This isn't a rewrite - it's built on top of distutils. > (An aside, I don't buy the "never rewrite" argument. As I mentioned in > an earlier message, look at urllib2, twisted and email for starters. > In addition, look at Firefox, Windows XP, and Mac OSX. Hell, Linux > could be considered a rewrite of Minix, once upon a time.) > > - Eggs are inferior to distribution-specific packaging > > Not all operating systems have a decent packaging system. The ones > that do, don't support multiple versions of the same library. In > addition, there's no reason why existing packaging systems can't just > bundle up the code as they do now - if they also add a .egg-info file > to the packages, that would be even better! Finally, these don't > support user installation of software. This is particularly useful in > a hosting environment. I think you're misrepresenting the argument here: eggs are not considered inferior. What packagers and application builders gripe about is the fact that setuptools forces them to use eggs instead of the standard Python way of installing packages. I don't think anyone would be bothered by adding dependency information to their packages and thus make easy_install and friends work with regular Python packages as well. > And now let's look at some of the stuff that setuptools gives us: > > - We have a CPAN-type system > > I do quite a number of Python talks, and this is _always_ one of > the most requested features. There's been many attempts to write this, > none have been completed until now. If you honestly don't see that > this is a big thing for Python, then I am very, very suprised. I > suspect that this will be the #1 new feature of Python 2.5 that the > users will notice and be happy about. > > - Multiple installs of different versions of the same package, > including per-user installs. > > Again, as Python gets more widely used, this becomes a big issue. 
> Sure, it's not necessarily a killer argument for python-dev, but stuff > that's added to Python shouldn't just be just for the use of > python-dev. The multiple installed versions feature also avoids the > CPAN dependency hell problem - back when I used to work with Perl, > this was a constant source of nightmarish problems. I've never had a need for having multiple versions of a single package installed, so can't comment. User installation is certainly possible with stock distutils. The main problem with distutils is the lack of user documentation, not so much the lack of features. > - The "develop" mode > > This makes life that bit less painful all-round. Does it ? I usually point PYTHONPATH at my development tree and that's all it takes to do setup "develop" mode. > - The plugin/extension support > > Extending distutils currently is a total pain in the arse. Views differ on this one and most of this perception is due to missing distutils documentation. > - Backwards compatibility > > easy_install even works with existing packages that use traditional > distutils, so long as they're in the Cheeseshop. Damn, this is nice. > If you don't want to do the work to change your installation code, > don't bother - it will still be useful. > > The conclusions: > > I'm a little suprised by the amount of fear and loathing this has > generated. To me, there are such obvious benefits that I don't see > why people are so vehemently against setuptools. I haven't seen any > arguments that have convinced me that this isn't the right thing to > do. Yes, there's still work to be done - but hell, we've only > released the first alpha so far. > > For inclusion in the standard library, the usual benchmark is that the > code offers useful functionality, and that it be the "best of breed". > setuptools clearly meets these two criteria. (Ok, it's really "only of > breed", but that also makes it "best", by default <wink>). It's also > been under development for over 2 years - according to svn, 0.0.1 was > checked into svn back in March 2004. > > I'm also suprised by how much some people seem to think that the > current state of distutils functionality is acceptable or desirable. > It's not - it's a mess. > > Finally, I'd like to point out that I think some of the hostility > towards Phillip's work has been excessive. He's done an amazing > amount of work on this (look at the distutils-sig archive for the > last two years for more), and produced something that's very very > useful. > > He deserves far more credit for this than he seems to have been > getting here. He does and there's no doubt about it. It would help a lot, though, if Phillip would be willing to reconsider some defaults he's using in setuptools to make them compatible to Python's default import mechanism. A lot of the perceived hostility would go away if he'd play nice with standards that have existed in Python for years. If Phillip thinks that we should change Python to always import packages from egg-files, then this would be a major change in the way Python works and thus require a complete PEP process. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 20 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! 
:::: From aahz at pythoncraft.com Thu Apr 20 16:17:40 2006 From: aahz at pythoncraft.com (Aahz) Date: Thu, 20 Apr 2006 07:17:40 -0700 Subject: [Python-Dev] Raising objections In-Reply-To: <20060420133343.GA2111@rogue.amk.ca> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> Message-ID: <20060420141740.GB3431@panix.com> On Thu, Apr 20, 2006, A.M. Kuchling wrote: > > I don't mind rewriting much, but hate leaving the original code in > place; this is confusing to new users, even if it's convenient for > existing users. How many HTML parsers are in the core now? (My gut > feeling is that that Python's adoption curve has flattened, so it's > probably now more important to keep existing users happy, so the time > for jettisoning modules may be past.) There's a lot of truth for that with Python 2.x, but that's what 3.0 is for. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From barry at python.org Thu Apr 20 16:43:03 2006 From: barry at python.org (Barry Warsaw) Date: Thu, 20 Apr 2006 10:43:03 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <444721F3.6070704@v.loewis.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> Message-ID: <1145544183.10176.45.camel@resist.wooz.org> On Thu, 2006-04-20 at 07:53 +0200, "Martin v. L?wis" wrote: > > Sometimes you _have_ to rewrite. I point to > > urllib->urllib2, asyncore->twisted, rfc822/mimelib/&c->email. > > If I had the time, I would question each of these. Yes, it is > easier for the author of the new package to build "in the > green", but it is (nearly) never necessary, and almost always > bad for the project. Actually, it's not always easier to rewrite from scratch. In fact, I think it's rarely easier to do so than to just patch up bugs here and there in something that already exists, or wedge in hacks to minimally extend it to keep up with new standards or uses. But sometimes the fundamental model of how something works just will no longer cut it, and something better must be designed, argued about, written, tested, documented, beta released, feedback gathered, bugs accepted, patches accepted, re-released, maintained, backward compatibility supported forever, etc. etc. Oh wait, yes definitely easier to rewrite <wink>. way-off-topic-now-ly y'rs, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060420/1a55f4ff/attachment.pgp From gustavo at niemeyer.net Thu Apr 20 14:59:21 2006 From: gustavo at niemeyer.net (Gustavo Niemeyer) Date: Thu, 20 Apr 2006 09:59:21 -0300 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <20060420125921.GA7791@localhost.localdomain> > The arguments against: One more: - Installing a package means dropping files in the system without any kind of record keeping. 
It should learn from the techniques applied in other well-known package managers (RPM, DPKG, whatever) to keep track of what's happening in the system. -- Gustavo Niemeyer http://niemeyer.net From tomerfiliba at gmail.com Thu Apr 20 17:20:01 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Thu, 20 Apr 2006 17:20:01 +0200 Subject: [Python-Dev] proposal: evaluated string Message-ID: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> many times, templating a string is a tidious task. using the % operator, either with tuples or dicts, is difficult to maintain, when the number of templated arguments is large. and string.Template, although more easy to read, is less intutive and cumbersome: import string t = string.Template("hello $name") print t.substitute({"name" : "john"}) as you can see, it is is redundant, as you must repeat the dict keys in two places, and imagine maintaining such a template with 20 parameters! if you change the one argument's name in the template, you must go and fix all locations that use that template. not nice at all. i'm suggesting something like boo's string interpolation: http://boo.codehaus.org/String+Interpolation but i chose to call it "evaluated string". like raw strings (r""), which are baiscally a syntactic sugar, evaluated strings will be marked by 'e', for instance, e"", which may be combined with the 'r' or 'u', that exist today. the evaluated string will be evaluated based on the current scope (locals and globals), just like normal expressions. the difference is, the results of the expressions will be str()ed into the evaluated string directly. these expressions will be embedded into the string surrounded by special delimiters, and an unclosed delimited or a syntax error will be reported at just like "\x??" raises "ValueError: invalid \x escape". i'm not sure which delimiters to use, but i think only { } is sufficient (no need for ${ } like in boo) some examples: =============== name = "john" print e"hello {name}" a = 3 b = 7 print e"the function is y = {a}x + {b}" for x in range(10): print e"y({x}) = {a*x+b}" import time, sys print e"the time is {time.asctime()} and you are running on {sys.platform}" =============== in order to implement it, i suggest a new type, estr. doing a = e"hello" will be equivalent to a = estr("hello", locals(), globals()), just like u"hello" is equivalent to unicode("hello") (if we ignore \u escaping for a moment) and just like unicode literals introduce the \u escape, estr literals would introduce \{ and \} to escape delimiters. the estr object will be evaluated with the given locals() and globals() only at __repr__ or __str__, which means you can work with it like a normal string: a = e"hello {name} " b = e"how nice to meet you at this lovely day of {time.localtime().tm_year}" c = a + b c is just the concatenation of the two strings, and it will will be evaluated as a whole when you str()/repr() it. of course the internal representation of the object shouldnt not as a string, rather a sequence of static (non evaluated) and dynamic (need evaluation) parts, i.e.: ["hello", "name", "how nice to meet you at this lovely day of", " time.localtime().tm_year"], so evaluating the string will be fast (just calling eval() on the relevant parts) also, estr objects will not support getitem/slicing/startswith, as it's not clear what the indexes are... you'd have to first evaluate it and then work with the string: str(e"hello")[2:] estr will have a counterpart type called eunicode. 
some rules:

    estr + str          => estr
    estr + estr         => estr
    estr + unicode      => eunicode
    estr + eunicode     => eunicode
    eunicode + eunicode => eunicode

there are no backwards compatibility issues, as e"" is an invalid syntax today, and as for clarity, I'm sure editors like emacs and the like can be configured to highlight the strings enclosed by {} like normal expressions.

I know it may cause the perl-syndrome, where all the code of the program is pushed into strings, but templating/string interpolation is really a fundamental requirement of scripting languages, and the perl syndrome can be prevented with two precautions:

* compile the code with the "eval" flag instead of "exec". this would prevent
  abominations like e"{import time\ndef f(a):\n\tprint 'blah'}"

* do not allow the % operator to work on estr's, to avoid awful things like
  e"how are %s {%s}" % ("you", "name")

one templating mechanism at a time, please :)

perhaps there are other restrictions to impose, but I couldn't think of any at the moment. here's a proposed implementation:

class estr(object):   # can't derive from basestring!
    def __init__(self, raw, locals, globals):
        self.elements = self._parse(raw)
        self.locals = locals
        self.globals = globals

    def _parse(self, raw):
        i = 0
        last_index = 0
        nesting = 0
        elements = []
        while i < len(raw):
            if raw[i] == "{":
                if nesting == 0:
                    elements.append((False, raw[last_index : i]))
                    last_index = i + 1
                nesting += 1
            elif raw[i] == "}":
                nesting -= 1
                if nesting == 0:
                    elements.append((True, raw[last_index : i]))
                    last_index = i + 1
                if nesting < 0:
                    raise ValueError("too many '}' (at index %d)" % (i,))
            i += 1
        if nesting > 0:
            raise ValueError("missing '}' before end")
        if last_index < i:
            elements.append((False, raw[last_index : i]))
        return elements

    def __add__(self, obj):
        if type(obj) == estr:
            elements = self.elements + obj.elements
        else:
            elements = self.elements + [(False, obj)]
        # the new object inherits the current one's namespace (?)
        newobj = estr("", self.locals, self.globals)
        newobj.elements = elements
        return newobj

    def __mul__(self, count):
        newobj = estr("", self.locals, self.globals)
        newobj.elements = self.elements * count
        return newobj

    def __repr__(self):
        return repr(self.__str__())

    def __str__(self):
        result = []
        for dynamic, elem in self.elements:
            if dynamic:
                result.append(str(eval(elem, self.locals, self.globals)))
            else:
                result.append(str(elem))
        return "".join(result)

myname = "tinkie winkie"
yourname = "la la"

print estr("{myname}", locals(), globals())
print estr("hello {myname}", locals(), globals())
print estr("hello {yourname}, my name is {myname}", locals(), globals())

a = 3
b = 7
print estr("the function is y = {a}x + {b}", locals(), globals())
for x in range(10):
    print estr("y({x}) = {a*x+b}", locals(), globals())

a = estr("hello {myname}", locals(), globals())
b = estr("my name is {myname} ", locals(), globals())
c = a + ", " + (b * 2)
print c

======

the difference (between this sketch and the proposed e"" literal) is that when __str__ is called, it would figure out the locals() and globals() by itself, using the stack frame or whatever.

-tomer
-------------- next part --------------
An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060420/b9212994/attachment.htm From barry at python.org Thu Apr 20 18:04:02 2006 From: barry at python.org (Barry Warsaw) Date: Thu, 20 Apr 2006 12:04:02 -0400 Subject: [Python-Dev] Raising objections In-Reply-To: <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444755C4.7040304@canterbury.ac.nz> <ca471dc20604200333u3f1b5de7n4492aa9612e197a4@mail.gmail.com> Message-ID: <1145549042.356.9.camel@resist.wooz.org> On Thu, 2006-04-20 at 11:33 +0100, Guido van Rossum wrote: > Unfortunately, this is mixed in with some stuff that isn't part of > distutils' "core competency", like text utilities, process spawning, > and option parsing. These should (eventually, when the 2.1 > compatibility requirement is lifted) be refactored to use (or be > merged into) the available stdlib facilities for such functionaliy. With today's svn repo, I don't think there's any need to force Python 2.5's version to pay for the backward compatibility with earlier versions. This is what I do with the email package for example. In the sandbox, I have externals to Python 2.3's version for email 2.5, Python 2.4's version for email 3.0, and Python 2.5's version for email 4.0. Yes, it's more work to maintain three branches, but it's manageable. So it's not due to lack of structural support that we have to keep forward porting a package's baggage, but only the distutils/setuptools maintainers can decide whether it's too much work to maintain multiple branches. And there's a lot of joy to be derived from removing lots of backward compatibility crap. (It's always more fun to remove code than add new code :). At some point you have to move on. Python 2.1 and 2.2 are getting increasingly difficult to support, and I've found less and less call for it from my users. OTOH, Python 2.3 is still very popular, so that's now my minimum requirement. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060420/c876afc8/attachment.pgp From jason.orendorff at gmail.com Thu Apr 20 18:04:12 2006 From: jason.orendorff at gmail.com (Jason Orendorff) Date: Thu, 20 Apr 2006 12:04:12 -0400 Subject: [Python-Dev] PEP 355 (object-oriented paths) In-Reply-To: <loom.20060420T100537-849@post.gmane.org> References: <loom.20060420T100537-849@post.gmane.org> Message-ID: <bb8868b90604200904l624f6118ra57def08ad5c3d0f@mail.gmail.com> Talin, everything you wrote is really compelling. If path.py weren't so ridiculously useful to me, I would be completely convinced. :) For example, I agree 100% with this: > Another reason why I am a bit dubious about a class-based approach > is that it tends to take anything that is related to a filepath and lump > them into a single module. ...and this: > one thing that irks me (and others) about the Path class in Java is > that it makes no distinction between methods that are merely textual > conversions, and methods which actually go out and touch the disk. ...until I remember that in practice, d.parent and d.files('*.txt') on the same object; or f.ext and f.isfile(); are things I do all the time without thinking. I think I can see why. 
Separate modules only make sense for separate use cases. In real-world code where you're "doing stuff with files and directories", you're going to randomly need os.remove(), shutil.copyfile(), os.path.isdir(), and/or glob.glob(). I have one big mental junk drawer with all this stuff in it. The way the stdlib partitions them does not fit my brain. I have trouble believing some other theoretical partition would be much better, though I'd love to see someone try. Lastly-- Is nontrivial path manipulation really rare? Practically every program I write "does stuff with files and directories". Scripts often do little else; in larger programs, main() often does 5 or 50 lines of this kind of stuff, while the rest of the program is mostly filesystem-unaware. -j From fredrik at pythonware.com Thu Apr 20 18:03:31 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 18:03:31 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com><bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com><4442B2D8.8090003@v.loewis.de> <bbaeab100604161500l11e46864n6e08af667e9ff470@mail.gmail.com> Message-ID: <e28bcm$50h$1@sea.gmane.org> Brett Cannon wrote: > > Not sure whether Fredrik Lundh has responded, but I believe he once > > said that he would prefer if ElementTree isn't directly modified, but > > that instead patches are filed on the SF tracker and assigned to him. > > Nope, Fredrik never responded. I am cc:ing him directly to see if he > has a preference along with Gerhard to see if he has one as well for > pysqlite. martin's memory is flawless, as usual. applying emergency patches (stuff that blocks the build, and security fixes) is of course okay, but I would prefer if everything else is handled via the ET master repository. </F> From theller at python.net Thu Apr 20 18:10:15 2006 From: theller at python.net (Thomas Heller) Date: Thu, 20 Apr 2006 18:10:15 +0200 Subject: [Python-Dev] SVN question Message-ID: <e28bp7$5se$1@sea.gmane.org> I'm about to import the 0.9.9.6 tag of ctypes into Python svn. Should this be done in exact the same way as before, so first commit it into external/ctypes-0.9.9.6, and then 'svn copy' the two relevant directories into the trunk, and afterwards set all the svn props again, or is this done in another way? Thanks, Thomas From fredrik at pythonware.com Thu Apr 20 18:24:46 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 18:24:46 +0200 Subject: [Python-Dev] SVN question References: <e28bp7$5se$1@sea.gmane.org> Message-ID: <e28ckg$9qi$1@sea.gmane.org> Thomas Heller wrote: > I'm about to import the 0.9.9.6 tag of ctypes into Python svn. > > Should this be done in exact the same way as before, so first > commit it into external/ctypes-0.9.9.6, and then 'svn copy' > the two relevant directories into the trunk, and afterwards set > all the svn props again, or is this done in another way? what props? if you prefer, you could switch to using "vendor drop" approach for ctypes. see this page for an explanation: http://svnbook.red-bean.com/en/1.0/ch07s04.html </F> From skip at pobox.com Thu Apr 20 18:54:06 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 20 Apr 2006 11:54:06 -0500 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <17479.48302.734002.387099@montanaro.dyndns.org> Anthony> Finally, I'd like to point out that I think some of the Anthony> hostility towards Phillip's work has been excessive. He's done Anthony> an amazing amount of work on this (look at the distutils-sig Anthony> archive for the last two years for more), and produced Anthony> something that's very very useful. Anthony> He deserves far more credit for this than he seems to have been Anthony> getting here. I agree. I'll go one step further and suggest that not only should setuptools be included in the stdlib but that distutils should be deprecated, at least as a user-visible package. I've not looked at the setuptools code, but I have tried at times to use the distutils code. Those were trying times. I've also seen some of the rapid acceptance of setuptools in the broader Python community. Maybe they know something we don't. Thank you Phillip... Skip From jakamkon at gmail.com Thu Apr 20 19:03:59 2006 From: jakamkon at gmail.com (Kuba Konczyk) Date: Thu, 20 Apr 2006 19:03:59 +0200 Subject: [Python-Dev] Google Summer of Code proposal: Pdb improvments Message-ID: <2e947fbb0604201003j7bfc5910lbb892f5d21011319@mail.gmail.com> I've found a thread on python-dev related to pdb's weaknesses: http://www.mail-archive.com/python-dev at python.org/msg05115.html The opinions are that pdb is 'one of the more unPythonic modules' and must be 'seriously fixed'. I have similar experience with pdb's internals, but I want to know others' opinions. I'm currently trying to answer the following questions: What should be fixed first in pdb? What could we do to make pdb a better debugging utility? If you have any suggestions related to pdb's improvements, please put them on the wiki page: http://wiki.python.org/moin/PdbImprovments Would this kind of proposal to the SoC be accepted by some of the PSF mentors? jkk From skip at pobox.com Thu Apr 20 19:09:37 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 20 Apr 2006 12:09:37 -0500 Subject: [Python-Dev] Raising objections In-Reply-To: <200604201600.27845.anthony@interlink.com.au> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <444721FC.9010609@v.loewis.de> <200604201600.27845.anthony@interlink.com.au> Message-ID: <17479.49233.660536.961901@montanaro.dyndns.org> Anthony> I don't think it's fair to say that Phillip just checked this Anthony> in off on his own. In addition, since he did the development in the Python sandbox, his checkins all along have been there for everyone to see. It's not like he did the work in Outer Mongolia and then showed up one day and checked it in while everyone was sleeping. I see mention of "setuptools" in the distutils-sig archive at least as far back as May 2005 and mention of "eggs" back to at least April 2005.
Skip From skip at pobox.com Thu Apr 20 19:25:32 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 20 Apr 2006 12:25:32 -0500 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <44473002.4070208@v.loewis.de> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060419150224.045c73e0@mail.telecommunity.com> <44473002.4070208@v.loewis.de> Message-ID: <17479.50188.739862.771982@montanaro.dyndns.org> Martin> c) ask for consent in advance to making a potentially-breaking Martin> change. Doesn't that potentially extend the release time for an enhanced distutils across multiple Python releases? With both distutils and setuptools available simultaneously, setuptools can be designed and implemented to suit the needs of its constituency while distutils remains avilable and compatible for those people using it. Skip From jcarlson at uci.edu Thu Apr 20 19:13:12 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Thu, 20 Apr 2006 10:13:12 -0700 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> Message-ID: <20060420100719.A841.JCARLSON@uci.edu> "tomer filiba" <tomerfiliba at gmail.com> wrote: > the evaluated string will be evaluated based on the current scope (locals > and globals), just like > normal expressions. the difference is, the results of the expressions will > be str()ed into the > evaluated string directly. these expressions will be embedded into the > string surrounded by -1 You are basically suggesting that e"..." be a replacement for "..."%locals() . That doesn't seem to me to be sufficiently compelling (yes, I do use string interpolation). - Josiah From fredrik at pythonware.com Thu Apr 20 19:31:09 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 19:31:09 +0200 Subject: [Python-Dev] setuptools in 2.5. References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> Message-ID: <e28ggv$o5n$1@sea.gmane.org> skip at pobox.com wrote: > Maybe they know something we don't. oh, please. it's not like people like myself and MAL don't know anything about package distribution... (why is it that people who *don't* distribute stuff are a lot more im- pressed by a magic tool than people who've spent the last decade distributing stuff ? has it ever occurred to you that we know some- thing that you don't ?) </F> From skip at pobox.com Thu Apr 20 19:44:59 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 20 Apr 2006 12:44:59 -0500 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <e28bcm$50h$1@sea.gmane.org> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <bbaeab100604161349i46928d26rb536d19a1c84e5dc@mail.gmail.com> <4442B2D8.8090003@v.loewis.de> <bbaeab100604161500l11e46864n6e08af667e9ff470@mail.gmail.com> <e28bcm$50h$1@sea.gmane.org> Message-ID: <17479.51355.339040.185889@montanaro.dyndns.org> Fredrik> applying emergency patches (stuff that blocks the build, and Fredrik> security fixes) is of course okay, but I would prefer if Fredrik> everything else is handled via the ET master repository. 
Could that reference be placed in a comment near the top of xmlcore/etree/ElementTree.py or in a README file in the etree package? Thx, Skip From tomerfiliba at gmail.com Thu Apr 20 19:50:46 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Thu, 20 Apr 2006 19:50:46 +0200 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <20060420100719.A841.JCARLSON@uci.edu> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <20060420100719.A841.JCARLSON@uci.edu> Message-ID: <1d85506f0604201050s2efde9d5lf0e1a82e02ddb057@mail.gmail.com> just like r"" does the escaping for you. but estr types must be implemented so the evaluate with the current scope (locals and globals), not the score they were defined it, so unless you want to do nasty tricks with sys._getframe, which doesn't work on all implementations of python, you'll need it as a builtin -tomer On 4/20/06, Josiah Carlson <jcarlson at uci.edu> wrote: > > > "tomer filiba" <tomerfiliba at gmail.com> wrote: > > the evaluated string will be evaluated based on the current scope > (locals > > and globals), just like > > normal expressions. the difference is, the results of the expressions > will > > be str()ed into the > > evaluated string directly. these expressions will be embedded into the > > string surrounded by > > -1 You are basically suggesting that e"..." be a replacement for > "..."%locals() . That doesn't seem to me to be sufficiently compelling > (yes, I do use string interpolation). > > > - Josiah > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060420/cc525a0c/attachment.html From rhettinger at ewtllc.com Thu Apr 20 19:58:28 2006 From: rhettinger at ewtllc.com (Raymond Hettinger) Date: Thu, 20 Apr 2006 10:58:28 -0700 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> Message-ID: <4447CBC4.3080503@ewtllc.com> tomer filiba wrote: > many times, templating a string is a tidious task. using the % > operator, either with tuples or dicts, > is difficult to maintain, when the number of templated arguments is > large. and string.Template, > although more easy to read, is less intutive and cumbersome: > > import string > t = string.Template("hello $name") > print t.substitute({"name" : "john"}) Using the key twice is basic to templating (once of specify where to make the substitution and once to specify its value). This is no different from using variable names in regular code: a=1; ... ; b = a+2 # variable-a is used twice. Also, the example is misleading because real-apps are substitute variables, not constants. IOW, the above code fragment is sematically equivalent to: print "hello john". > i'm suggesting something like boo's string interpolation: > http://boo.codehaus.org/String+Interpolation We already have a slew of templating utilities (see Cheetah for example). > but i chose to call it "evaluated string". > > like raw strings (r""), which are baiscally a syntactic sugar, > evaluated strings will be marked by 'e', > for instance, e"", which may be combined with the 'r' or 'u', that > exist today. > > the evaluated string will be evaluated based on the current scope > (locals and globals), just like > normal expressions. the difference is, the results of the expressions > will be str()ed into the > evaluated string directly. 
these expressions will be embedded into the > string surrounded by > special delimiters, and an unclosed delimited or a syntax error will > be reported at just like "\x??" > raises "ValueError: invalid \x escape". > > i'm not sure which delimiters to use, but i think only { } is > sufficient (no need for ${ } like in boo) > > some examples: > =============== > name = "john" > print e"hello {name}" The string.Template() tool already does this right out of the box: >>> print Template('hello $name').substitute(globals()) hello john So, essentially the proposal is to make the globals() access implicit and to abbreviate the Template constructor with new syntax. The latter is not desirable because 1) it is less readable, 2) string prefixing is already out-of-control (used for raw strings and unicode), 3) it is less flexible than the class constructor which can be subclassed and extended as needed. If you don't like the $name style of template markup and prefer delimiters instead, then check-out the recipe at: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > a = 3 > b = 7 > print e"the function is y = {a}x + {b}" > for x in range(10): > print e"y({x}) = {a*x+b}" > > import time, sys > print e"the time is {time.asctime()} and you are running on { > sys.platform} If you need this, then consider using a third-party templating module. Be sure to stay aware of the security risks if the fill-in values are user specified. Raymond From rhettinger at ewtllc.com Thu Apr 20 20:00:20 2006 From: rhettinger at ewtllc.com (Raymond Hettinger) Date: Thu, 20 Apr 2006 11:00:20 -0700 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <4447CBC4.3080503@ewtllc.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> Message-ID: <4447CC34.8060909@ewtllc.com> > >If you don't like the $name style of template markup and prefer >delimiters instead, then check-out the recipe at: > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > > The link should have been: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305306 From tomerfiliba at gmail.com Thu Apr 20 20:23:04 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Thu, 20 Apr 2006 20:23:04 +0200 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <4447CC34.8060909@ewtllc.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com> Message-ID: <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> > > We already have a slew of templating utilities (see Cheetah for example). > first of all -- i know there's a bunch of templating engines, but i think it should be a built-in feature of the language. like boo does. and estr is stronger than simple $name substitution, like Template does. Be sure to stay aware of the security risks if the fill-in values are user > specified. > that's one major benefit of having it as a builtin type -- you dont have security risks, as the expression itself is embedded in your code, not something you get from the outside: name = raw_input("what's you name?") print e"hello {name}" does not get the *expression* from the user, only the *values*, so unless the user causes a buffer overflow with a huge string, he can't run code. the estr object is part of *your* code, which you trust. If you need this, then consider using a third-party templating module. > that 50-liner estr class i presented does just that. 
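As a rough illustration of that last point (and of the sys._getframe trick mentioned earlier in the thread), a variant of the posted estr class could pick up the caller's namespace from the stack instead of taking locals()/globals() explicitly. This is only a sketch: estr2 is a made-up name, it assumes the estr class from the earlier post is in scope, and the frame lookup is CPython-specific:

    import sys

    class estr2(estr):
        # hypothetical variant of the posted estr class: instead of
        # receiving locals()/globals() explicitly, __str__ evaluates the
        # embedded expressions in the *caller's* namespace
        def __init__(self, raw):
            estr.__init__(self, raw, None, None)

        def __str__(self):
            frame = sys._getframe(1)      # CPython-specific
            result = []
            for dynamic, elem in self.elements:
                if dynamic:
                    result.append(str(eval(elem, frame.f_globals, frame.f_locals)))
                else:
                    result.append(str(elem))
            return "".join(result)

    name = "john"
    print estr2("hello {name}")           # -> hello john

This drops the repeated locals(), globals() arguments, at the cost of relying on frame introspection.
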
Using the key twice is basic to templating (once of specify where to > make the substitution and once to specify its value). This is no > different from using variable names in regular code: a=1; ... ; b = > a+2 # variable-a is used twice. > but when it's defined once as an argument to a function, once in the template, and once in the dict, that's three times, where it can be only two. def f(name): print e"hello {name}" Also, the example is misleading because real-apps are substitute > variables, not constants. IOW, the above code fragment is sematically > equivalent to: print "hello john". what do you mean by that? 3) it is less > flexible than the class constructor which can be subclassed and > extended as needed. > do you often subclass str? it's a built-in type, part of the language, subclassing it doesnt make much sense. after all it's the language compiler that instanciates these types, i.e., when you do "hello", the compiler creates an instance of str() with that value, not you directly, and that's the case here. -tomer On 4/20/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote: > > > > > >If you don't like the $name style of template markup and prefer > >delimiters instead, then check-out the recipe at: > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > > > > > The link should have been: > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305306 > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060420/4b316d68/attachment.htm From guido at python.org Thu Apr 20 20:24:58 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 19:24:58 +0100 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com> <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> Message-ID: <ca471dc20604201124r10d78b04gbddfa8ceff42235a@mail.gmail.com> Tomer, please stop. We've seen your proposal. We've said "-1". Please take it instead of wasting your time trying to argue for it. On 4/20/06, tomer filiba <tomerfiliba at gmail.com> wrote: > > > We already have a slew of templating utilities (see Cheetah for example). > > > first of all -- i know there's a bunch of templating engines, but i think it > should be a > built-in feature of the language. like boo does. and estr is stronger than > simple > $name substitution, like Template does. > > > > Be sure to stay aware of the security risks if the fill-in values are user > specified. > > > that's one major benefit of having it as a builtin type -- you dont have > security risks, > as the expression itself is embedded in your code, not something you get > from the > outside: > > name = raw_input("what's you name?") > print e"hello {name}" > > does not get the *expression* from the user, only the *values*, so unless > the user > causes a buffer overflow with a huge string, he can't run code. the estr > object is part > of *your* code, which you trust. > > > > If you need this, then consider using a third-party templating module. > > > that 50-liner estr class i presented does just that. > > > > Using the key twice is basic to templating (once of specify where to > > make the substitution and once to specify its value). This is no > > different from using variable names in regular code: a=1; ... 
; b = > > a+2 # variable-a is used twice. > > > but when it's defined once as an argument to a function, once in the > template, > and once in the dict, that's three times, where it can be only two. > > def f(name): > print e"hello {name}" > > > > Also, the example is misleading because real-apps are substitute > > variables, not constants. IOW, the above code fragment is sematically > > equivalent to: print "hello john". > > > what do you mean by that? > > > > 3) it is less > > flexible than the class constructor which can be subclassed and > > extended as needed. > > > do you often subclass str? it's a built-in type, part of the language, > subclassing it doesnt > make much sense. after all it's the language compiler that instanciates > these types, i.e., > when you do "hello", the compiler creates an instance of str() with that > value, not you > directly, and that's the case here. > > > -tomer > > > On 4/20/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote: > > > > > > > >If you don't like the $name style of template markup and prefer > > >delimiters instead, then check-out the recipe at: > > > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > > > > > > > > The link should have been: > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305306 > > > > > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Thu Apr 20 20:24:58 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 19:24:58 +0100 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com> <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> Message-ID: <ca471dc20604201124r10d78b04gbddfa8ceff42235a@mail.gmail.com> Tomer, please stop. We've seen your proposal. We've said "-1". Please take it instead of wasting your time trying to argue for it. On 4/20/06, tomer filiba <tomerfiliba at gmail.com> wrote: > > > We already have a slew of templating utilities (see Cheetah for example). > > > first of all -- i know there's a bunch of templating engines, but i think it > should be a > built-in feature of the language. like boo does. and estr is stronger than > simple > $name substitution, like Template does. > > > > Be sure to stay aware of the security risks if the fill-in values are user > specified. > > > that's one major benefit of having it as a builtin type -- you dont have > security risks, > as the expression itself is embedded in your code, not something you get > from the > outside: > > name = raw_input("what's you name?") > print e"hello {name}" > > does not get the *expression* from the user, only the *values*, so unless > the user > causes a buffer overflow with a huge string, he can't run code. the estr > object is part > of *your* code, which you trust. > > > > If you need this, then consider using a third-party templating module. > > > that 50-liner estr class i presented does just that. > > > > Using the key twice is basic to templating (once of specify where to > > make the substitution and once to specify its value). 
This is no > > different from using variable names in regular code: a=1; ... ; b = > > a+2 # variable-a is used twice. > > > but when it's defined once as an argument to a function, once in the > template, > and once in the dict, that's three times, where it can be only two. > > def f(name): > print e"hello {name}" > > > > Also, the example is misleading because real-apps are substitute > > variables, not constants. IOW, the above code fragment is sematically > > equivalent to: print "hello john". > > > what do you mean by that? > > > > 3) it is less > > flexible than the class constructor which can be subclassed and > > extended as needed. > > > do you often subclass str? it's a built-in type, part of the language, > subclassing it doesnt > make much sense. after all it's the language compiler that instanciates > these types, i.e., > when you do "hello", the compiler creates an instance of str() with that > value, not you > directly, and that's the case here. > > > -tomer > > > On 4/20/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote: > > > > > > > >If you don't like the $name style of template markup and prefer > > >delimiters instead, then check-out the recipe at: > > > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > > > > > > > > The link should have been: > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305306 > > > > > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tomerfiliba at gmail.com Thu Apr 20 21:06:17 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Thu, 20 Apr 2006 21:06:17 +0200 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <ca471dc20604201124r10d78b04gbddfa8ceff42235a@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com> <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> <ca471dc20604201124r10d78b04gbddfa8ceff42235a@mail.gmail.com> Message-ID: <1d85506f0604201206t2e99e3fdo95999aea31ebebb0@mail.gmail.com> > > We've said "-1" > i don't mean to be rude, but only josiah strictly said -1. by you saying "we", i'd assume you -1'ed it as well, but you couldn't have expected me to know that before you said it. so i feel a little cornered here... is it my fault i don't support quantom superposition? -tomer On 4/20/06, Guido van Rossum <guido at python.org> wrote: > > Tomer, please stop. We've seen your proposal. We've said "-1". Please > take it instead of wasting your time trying to argue for it. > > On 4/20/06, tomer filiba <tomerfiliba at gmail.com> wrote: > > > > > We already have a slew of templating utilities (see Cheetah for > example). > > > > > first of all -- i know there's a bunch of templating engines, but i > think it > > should be a > > built-in feature of the language. like boo does. and estr is stronger > than > > simple > > $name substitution, like Template does. > > > > > > > Be sure to stay aware of the security risks if the fill-in values are > user > > specified. 
> > > > > that's one major benefit of having it as a builtin type -- you dont have > > security risks, > > as the expression itself is embedded in your code, not something you get > > from the > > outside: > > > > name = raw_input("what's you name?") > > print e"hello {name}" > > > > does not get the *expression* from the user, only the *values*, so > unless > > the user > > causes a buffer overflow with a huge string, he can't run code. the estr > > object is part > > of *your* code, which you trust. > > > > > > > If you need this, then consider using a third-party templating module. > > > > > that 50-liner estr class i presented does just that. > > > > > > > Using the key twice is basic to templating (once of specify where to > > > make the substitution and once to specify its value). This is no > > > different from using variable names in regular code: a=1; ... ; b = > > > a+2 # variable-a is used twice. > > > > > but when it's defined once as an argument to a function, once in the > > template, > > and once in the dict, that's three times, where it can be only two. > > > > def f(name): > > print e"hello {name}" > > > > > > > Also, the example is misleading because real-apps are substitute > > > variables, not constants. IOW, the above code fragment is sematically > > > equivalent to: print "hello john". > > > > > > what do you mean by that? > > > > > > > 3) it is less > > > flexible than the class constructor which can be subclassed and > > > extended as needed. > > > > > do you often subclass str? it's a built-in type, part of the language, > > subclassing it doesnt > > make much sense. after all it's the language compiler that instanciates > > these types, i.e., > > when you do "hello", the compiler creates an instance of str() with that > > value, not you > > directly, and that's the case here. > > > > > > -tomer > > > > > > On 4/20/06, Raymond Hettinger <rhettinger at ewtllc.com> wrote: > > > > > > > > > > >If you don't like the $name style of template markup and prefer > > > >delimiters instead, then check-out the recipe at: > > > > > > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/3053 > > > > > > > > > > > The link should have been: > > > > > > > > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/305306 > > > > > > > > > > > > > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > > > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060420/c101ba18/attachment.html From ianb at colorstudy.com Thu Apr 20 21:41:36 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 20 Apr 2006 14:41:36 -0500 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e28ggv$o5n$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> Message-ID: <4447E3F0.3080708@colorstudy.com> Fredrik Lundh wrote: > skip at pobox.com wrote: >>Maybe they know something we don't. > > oh, please. it's not like people like myself and MAL don't know anything > about package distribution... 
> > (why is it that people who *don't* distribute stuff are a lot more im- > pressed by a magic tool than people who've spent the last decade > distributing stuff ? has it ever occurred to you that we know some- > thing that you don't ?) There's many people packaging and distributing code with setuptools, but very few of them are on this list. Setuptools has gotten quite a lot of use and a lot of pushback from people, but only from the people who have been using it. People who have complained about Setuptools without using it have had little influence, because most of their suggestions are only resolvable by having Setuptools simply cease to exist, which is not very constructive. But even then, I've seen Phillip make many changes to Setuptools based on feedback from people who just wanted Setuptools to get out of their way; but again, you have to at least be using it to give this kind of feedback. And somewhat tangential, but related to much of this discussion: if Phillip had been doing developments for distutils all this time instead of making a package that was separately installable under a different name, Setuptools would never have gotten the use and feedback that is has so far received. We could turn this into a discussion about how to handle updates to the standard library, but given the constraints I think Setuptools was developed in the best way possible. If development had happened here on python-dev and released to the large community only with the next Python release, Setuptools would be far inferior to its current state. And now for a little pushback the other way -- as of this January TurboGears has served up 100,000 egg files (I'm not sure what the window for all those downloads is, but it hasn't been very long). Has it occurred to you that they know something you don't about distribution? ElementTree would be among those egg files, so you should also consider how many people *haven't* asked you about problems related to the installation process. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From martin at v.loewis.de Thu Apr 20 21:53:00 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 21:53:00 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <200604201822.45318.anthony@interlink.com.au> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> Message-ID: <4447E69C.7060903@v.loewis.de> Anthony Baxter wrote: >> 4. .egg files. -1 > > As far as I understand it, an egg file is just a zipimport format zip > file with some additional metadata. You can also install the egg > files in an unpacked way, if you prefer that. I don't understand the > objection here - it's no better or worse than countless packages in > site-packages, and if it gives us multiple versions of the same code, > all the better. It is worse: each .egg file makes an additional sys.path entry (if I understand correctly); every import statement will traverse every every package. I'm not sure precisely how all this works, but I wouldn't be surprised if the zip directory is read over and over again. Compare that to countless packages in site-packages: "import foo" will *just* look for foo.py, foo.so, and the directory foo. I understand there is a second API to importing, some kind of "require()" call. 
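For reference, the second API being described here is pkg_resources.require(). A small sketch of how it is typically called, and of the sys.path growth mentioned above; "SomePackage" is a placeholder name, not a real distribution:

    import sys
    import pkg_resources

    before = list(sys.path)

    # ask for a distribution (and its dependencies) by name and version;
    # every egg activated to satisfy the request is added to sys.path as
    # its own entry
    pkg_resources.require("SomePackage>=1.0")

    for entry in sys.path:
        if entry not in before:
            print "added to sys.path:", entry
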
I consider that even worse: it obsoletes the language's mechanism for modules, and defines its own modularization. However, this isn't really my objection to .egg files. I dislike them because they compete with platform packages: .rpm, .msi, .deb. Package authors will refuse to produce them, putting the burden of package maintenance (what packages are installed, what are their dependencies, when should I remove a package) onto the the end user/system administrator. Regards, Martin From martin at v.loewis.de Thu Apr 20 22:07:06 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 22:07:06 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> Message-ID: <4447E9EA.10106@v.loewis.de> Guido van Rossum wrote: > This is another area where API standardization is > important; as soon as modules are loaded out of a zip file, the > traditional approach of looking relative to os.path.dirname(__file__) > no longer works. Certainly. However, I get two conclusions out of this: 1. don't load packages out of .zip files. It's not that bad if software on the user's disk occupies multiple files, as long as there is a convenient way to get rid of them at once. Many problems go away if you just say no to zipimport. 2. standardize on file names, not API. If I want to deploy read-only data files, I put them into /usr/share. If I need read-write files, I put them into /var. I did not have such a problem yet on other systems, but I would also try to follow the conventions of these systems. With these combined, I can use any API I like to operate on the files. distutils already has support for that. Some libraries (not necessarily in Python) have gone the path of providing a "unified" API for all kinds of file stream access, e.g. in KDE, any tool can open files over many protocols, without the storage being mounted locally. I consider this approach flawed: once I leave the realm of KDE programs, this all stops working. It is the operating system's job to provide unified access to storage, not the programming language's or the job of a library. >> - I still fear that the code of setuptools will turn out to be >> a maintenance nightmare. > > AFAICT Phillip has explicitly promised he will maintain it (or if he > doesn't, I expect that he would gladly do so if you asked). This has > always been sufficient as a "guarantee" that a new module isn't > orphaned. He has, and it is. Still, for whatever reason, I feel particularly uneasy here (and yes, that is my fear, my uncertainty, and my doubt). 
Regards, Martin From martin at v.loewis.de Thu Apr 20 22:22:08 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 22:22:08 +0200 Subject: [Python-Dev] setuptools in the stdlib ([Python-checkins] r45510 - python/trunk/Lib/pkgutil.py python/trunk/Lib/pydoc.py) In-Reply-To: <17479.50188.739862.771982@montanaro.dyndns.org> References: <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060419150224.045c73e0@mail.telecommunity.com> <44473002.4070208@v.loewis.de> <17479.50188.739862.771982@montanaro.dyndns.org> Message-ID: <4447ED70.6040301@v.loewis.de> skip at pobox.com wrote: > Martin> c) ask for consent in advance to making a potentially-breaking > Martin> change. > > Doesn't that potentially extend the release time for an enhanced distutils > across multiple Python releases? Yes, but your alternative doesn't "scale" over time. At some point, modifying setuptools will not be acceptable anymore because of the risk of breaking packages that rely on intimate details of setuptools. What are we going to do then? The "natural" expansion is this: Invent a new library, say, setuplib, which sits on top of setuptools, and then (copying) "setuplib can be designed and implemented to suit the needs of its constituency while setuptools remains available and compatible for those people using it." People will just have to replace from setuptools import setup with from setuplib import setup The new setuplib will be completely transparent: if you don't explicitly use it, nothing will change. Sarcasm aside, this isn't a good approach to software versioning, IMO. Regards, Martin From martin at v.loewis.de Thu Apr 20 22:25:11 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 22:25:11 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <C4600955-7A13-41D2-BC17-9049A42202DC@free.fr> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <4443526C.8040907@v.loewis.de> <4443E597.9000407@v.loewis.de> <C4600955-7A13-41D2-BC17-9049A42202DC@free.fr> Message-ID: <4447EE27.7060601@v.loewis.de> J?r?me Laheurte wrote: > Sorry I'm late, but something like "os.popen('taskkill.exe /F /IM > python_d.exe')" would have worked; we use this at work. Thanks, I didn't know about it. Is it available on Windows 2000, too? (because the system in question is Windows 2000, and I see it on a "What's new in Windows XP" page) Regards, Martin From ronaldoussoren at mac.com Thu Apr 20 22:45:59 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Thu, 20 Apr 2006 22:45:59 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447E69C.7060903@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> Message-ID: <1225473E-39CA-4840-AF61-DEA403D734F8@mac.com> On 20-apr-2006, at 21:53, Martin v. L?wis wrote: > > However, this isn't really my objection to .egg files. I dislike them > because they compete with platform packages: .rpm, .msi, .deb. 
As far as I understand the issues they compete up to a point, but should also make it easier to create platform packages that contain proper the proper dependencies because those are part of machine-readable meta-data instead of being written down in readme files. Oddly enough that was also the objection from one linux distribution maintainer: somehow his opinion was that the author of a package couldn't possibly know the right depedencies for it. As for platform packages: not all platforms have useable packaging systems. MacOSX is one example of those, the system packager is an installer and doesn't include an uninstaller. Eggs make it a lot easier to manage python software in such an environment (and please don't point me to Fink or DarwinPorts on OSX, those have serious problems of their own). > Package > authors will refuse to produce them, putting the burden of package > maintenance (what packages are installed, what are their dependencies, > when should I remove a package) onto the the end user/system > administrator. Philip has added specific support for them: it is possible to install packages in the tradition way but with some additional files that tell setuptools about installed packages. Maybe 'python setup.py install' should default to installing in that mode (as someone else already suggest), with either on option or a seperate command to install as eggs. Ronald From tjreedy at udel.edu Thu Apr 20 22:49:58 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 20 Apr 2006 16:49:58 -0400 Subject: [Python-Dev] proposal: evaluated string References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com><4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com><1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com><ca471dc20604201124r10d78b04gbddfa8ceff42235a@mail.gmail.com> <1d85506f0604201206t2e99e3fdo95999aea31ebebb0@mail.gmail.com> Message-ID: <e28s5l$1tv$1@sea.gmane.org> "tomer filiba" <tomerfiliba at gmail.com >i don't mean to be rude, Then don't be. Your worst was the silly suggestion in an ad hominen barb that a contributor of many years does not belong on this list. But this time-wasting quibble post, in response to a request to quit wasting everyone's time, comes close. > but only josiah strictly said -1. By including 'strictly', you seem to me to be effectively admitting that you are smart enough to have noticed that Raymond 'effectively' said -1 in his explanation of why he rejected your idea. Since Guido was responding to your argumentative response to Raymond, he was obviously including Raymond as someone who has *effectively* said -1. > is it my fault i don't support quantom superposition? Yes and no, or maybe maybe ;-) Terry Jan Reedy From martin at v.loewis.de Thu Apr 20 22:59:21 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 22:59:21 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> Message-ID: <4447F629.8000500@v.loewis.de> Bob Ippolito wrote: > They DO NOT compete any more than source packages do. eggs are packages > plus metadata, nothing more. What eggs do and what rpm/msi/deb does are > orthogonal. 
It's entirely reasonable that in the future rpm/msi/deb > will simply be a delivery mechanism for eggs. That might be your view, but it apparently isn't the view of the inventor(s). From http://peak.telecommunity.com/DevCenter/setuptools Create Python Eggs - a single-file importable distribution format http://peak.telecommunity.com/DevCenter/PythonEggs '"Eggs are to Pythons as Jars are to Java..."' 'There are several binary formats that embody eggs, but the most common is '.egg' zipfile format, because it's a convenient one for distributing projects.' '.egg files are a "zero installation" format for a Python package;' So the .egg inventors do view .egg files (i.e. the .egg zipfile format) as a distribution format, just like rpm/msi/deb *are* distribution formats (none of them "zero installation", though, you always have to perform some deployment activity). Regards, Martin From ianb at colorstudy.com Thu Apr 20 23:00:44 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 20 Apr 2006 16:00:44 -0500 Subject: [Python-Dev] setuptools in 2.5. (summary) In-Reply-To: <44479103.4040102@egenix.com> References: <200604201456.13148.anthony@interlink.com.au> <44479103.4040102@egenix.com> Message-ID: <4447F67C.6030409@colorstudy.com> M.-A. Lemburg wrote: > Anthony Baxter wrote: > >>In an attempt to help this thread reach some sort of resolution, >>here's a collection of arguments against and in favour of setuptools >>in 2.5. My conclusions are at the end. > > > Thanks for the summary. I'd like to add some important aspects > (for me at least) that are missing: > > - setuptools should not change the standard distutils "install" > command to install everything as eggs > > Eggs are just one distribution format out of many. They do > server their purpose, just like RPMs, DEBs or Windows installers do. I think Eggs can be a bit confusing. They really serve two purposes, but using the same format. They are a distribution mechanism, which is probably one of the less important aspects, and there's the installation format. So you don't have to use them as a distribution format to still use them as an installation format. As an installation format they overlap with OS-level metadata, but that OS-level metadata has always been completely unavailable to Python programs so a little duplication has to be put up with. And anyway, the packaging systems can manage the system integrity well enough to keep that information in sync. Even though eggs overlap, they don't have to compete. > However, when running "python setup.py install" you are in fact > installing from source, so there's no need to wrap things up > again. > > The distutils default of actually installing things in the > standard Python is good, has worked for years and should continue > to do so. > > The extra information needed by the dependency checking can > easily be added to the package directory of the installed package > or stored elsewhere in a repository of installed packages or as > separate egg-info directory if we want to stick with setuptools' > way of using the path name for getting meta-information on a > package. Phillip can clarify this more, but I believe he's planning on Python 2.5 setuptools to install similar to distutils, but with a sibling .egg-info directory. There's already an option to do this, it's just a matter of whether it will be the default. A package with a sibling .egg-info directory is a "real" egg, but that it's a real egg probably highlights that eggness can be a bit confusing. 
> Placing the egg-files into the system as ZIP files should > be an option, e.g. as separate "install_egg" command, > not the default. I would prefer this too; even though Phillip has fixed the traceback problems for 2.5 I personally just prefer files I can view in other tools as well (my text editor doesn't like zip files, for instance). I typically make this change in distutils.cfg for my own systems. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From guido at python.org Thu Apr 20 23:04:49 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 22:04:49 +0100 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447E9EA.10106@v.loewis.de> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> Message-ID: <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> On 4/20/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > 1. don't load packages out of .zip files. It's not that bad if > software on the user's disk occupies multiple files, as long as > there is a convenient way to get rid of them at once. > Many problems go away if you just say no to zipimport. You forget that importing from a zip file is often much faster than importing from the filesystem. That's why zipimport was added in the first place. > 2. standardize on file names, not API. If I want to deploy > read-only data files, I put them into /usr/share. If I need > read-write files, I put them into /var. What about Windows? Putting data files in the same directory as source files solves a lot of problems of making the data follow the source no matter how/where it is installed (or if it's not installed at all, like when running code straight out of svn). Anyway, perhaps it's a matter of choice. It's clear to me that many developers prefer to do it this way. You don't. This is an area that has enough external constraints that I'm uncomfortable telling developers they can't do it that way. A standard API to access resources by name (with perhaps a registry for defining additional ways to find them) makes a lot of sense to me, and I don't think we're exactly inventing something novel here. [...] > Some libraries (not necessarily in Python) have gone the path of > providing a "unified" API for all kinds of file stream access, > e.g. in KDE, any tool can open files over many protocols, without > the storage being mounted locally. I consider this approach > flawed: once I leave the realm of KDE programs, this all stops > working. It is the operating system's job to provide unified > access to storage, not the programming language's or the job > of a library. I don't see it that way. All operating system access is mediated via the Python library anyway; and in many cases the Python library provides additional abstractions over what the operating system natively provides (e.g. many APIs in the os and os.path modules). You can't blame KDE for providing mechanisms that only work in the KDE world. It's fine for Python to provide Python-specific solutions for issues that have no cross-platform native solution. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Thu Apr 20 23:08:34 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 23:08:34 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <1225473E-39CA-4840-AF61-DEA403D734F8@mac.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <1225473E-39CA-4840-AF61-DEA403D734F8@mac.com> Message-ID: <4447F852.8070502@v.loewis.de> Ronald Oussoren wrote: > As far as I understand the issues they compete up to a point, but should > also make it easier to create platform packages that contain proper the > proper dependencies because those are part of machine-readable meta-data > instead of being written down in readme files. Oddly enough that was > also the objection from one linux distribution maintainer: somehow his > opinion was that the author of a package couldn't possibly know the > right depedencies for it. What he can't possibly know is the *name* of the package he depends on. For example, a distutils package might be called 'setuptools', so developers of additional packages might depend on 'setuptools'. However, on Debian, the dependency should be different: The package should depend on either 'python-setuptools' or 'python2.3-setuptools', depending on details which are off-topic here. > As for platform packages: not all platforms have useable packaging systems. > MacOSX is one example of those, the system packager is an installer and > doesn't include an uninstaller. Eggs make it a lot easier to manage python > software in such an environment (and please don't point me to Fink or > DarwinPorts on OSX, those have serious problems of their own). Isn't uninstallation just a matter of deleting a directory? If I know that I want to uninstall the Python package 'foo', I just delete its directory. Now, with setuptools, I might have multiple versions installed, so I have to chose (in Finder) which of them I want to delete. That doesn't require Eggs to be single-file zipfiles; deleting a directory would work just as will (and I believe it will work with ez_install, too). >> Package >> authors will refuse to produce them, putting the burden of package >> maintenance (what packages are installed, what are their dependencies, >> when should I remove a package) onto the the end user/system >> administrator. > > Philip has added specific support for them: it is possible to install > packages in the tradition way but with some additional files that tell > setuptools about installed packages. As a system administrator, I don't *want* to learn how to install Python packages. I know how to install RPMs (or MSIs, or whatever system I manage); this should be good enough. If "this Python junk" comes with its own installer procedure, I will hate it. Regards, Martin From fredrik at pythonware.com Thu Apr 20 23:10:22 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 20 Apr 2006 23:10:22 +0200 Subject: [Python-Dev] setuptools in 2.5. 
References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org><e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> Message-ID: <e28tc1$62n$1@sea.gmane.org> Ian Bicking wrote: > And now for a little pushback the other way -- as of this January > TurboGears has served up 100,000 egg files (I'm not sure what the window > for all those downloads is, but it hasn't been very long). Has it > occurred to you that they know something you don't about distribution? oh, effbot.org is shipping lots of packages too. 100,000 may sound a lot to you, but there are surprisingly many python users out there these days... (if I had a $ for every download, etc, etc). > ElementTree would be among those egg files, so you should also consider > how many people *haven't* asked you about problems related to the > installation process. as you may have noticed, I tend to hang out on the TurboGears list, so yes, I'm well aware of their use of eggs. a couple of observations: - turbogears is hosting all eggs on their own site. - they're currently discussing whether to use stricter version requirements for individual components, to increase the chance that people end up using a combination that someone else has actually tested. - people keep reporting download problems and compatibility problems. it's not obvious to me that they hadn't been better off if they'd just shipped everything in a good old tarball... </F> From martin at v.loewis.de Thu Apr 20 23:13:09 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 23:13:09 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> Message-ID: <4447F965.5070606@v.loewis.de> Guido van Rossum wrote: > Anyway, perhaps it's a matter of choice. It's clear to me that many > developers prefer to do it this way. You don't. This is an area that > has enough external constraints that I'm uncomfortable telling > developers they can't do it that way. Hence my -0. I see the practical need for it, and practicality beats purity, but I still would like to see it done in what I think is the right way. 
Regards, Martin From martin at v.loewis.de Thu Apr 20 23:26:32 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 23:26:32 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <444771B5.4050504@canterbury.ac.nz> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <444771B5.4050504@canterbury.ac.nz> Message-ID: <4447FC88.8040101@v.loewis.de> Greg Ewing wrote: >> The "resources" name is actually quite a common meme; > > I believe it goes back to the original Macintosh, which > was the first and only computer in the world to have files > with something called a "resource fork". The resource fork > contained pieces of data called "resources". I can believe that history. Still, I thought a resource is something you can exhaust; the fork should have been named "data fork" or just "second fork". > Then Microsoft stole the name, and before you knew, > everyone was using it. It's all been downhill from > there. :-) Right. I'm not asking that the name is changed in setuptools - I'm just complaining about the state of the world, and showing my lack of intuition for the English language. Regards, Martin From ianb at colorstudy.com Thu Apr 20 23:41:53 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 20 Apr 2006 16:41:53 -0500 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e28tc1$62n$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org><e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> Message-ID: <44480021.6030009@colorstudy.com> Fredrik Lundh wrote: > - they're currently discussing whether to use stricter version requirements > for individual components, to increase the chance that people end up using > a combination that someone else has actually tested. I don't think setuptools version requirements are enough to ensure the integrity of all pieces of a complex system will work together. Figuring out a self-consistent set of packages to work together is something that is rather independent of any particular package, and Setuptools doesn't have a facility for that. But it does provide the tools to build that kind of facility, and egg-based installations provide the sufficient metadata to report on what has been built. So I think it is a step in the right direction. Integrating packages from a wide variety of sources is hard. 
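A minimal sketch of the kind of facility Ian describes, built on the pkg_resources API that ships with setuptools; the project names and version pins below are invented for illustration:

    import pkg_resources

    # Invented pins -- stand-ins for whatever "known good" set has been tested.
    pinned = ["TurboGears==0.9a1", "CherryPy>=2.1", "SQLObject>=0.7"]

    # Installed releases, as reported by their egg metadata.
    for dist in pkg_resources.working_set:
        print dist.project_name, dist.version

    # Check the pinned set against what is actually installed.
    for req in pinned:
        try:
            pkg_resources.require(req)
        except pkg_resources.DistributionNotFound:
            print "missing:", req
        except pkg_resources.VersionConflict, exc:
            print "conflict:", req, "->", exc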
-- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From martin at v.loewis.de Thu Apr 20 23:46:47 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 23:46:47 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <0EFC5F7D-43D1-4F07-B8BF-A5FB763AFD81@redivi.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <4447F629.8000500@v.loewis.de> <0EFC5F7D-43D1-4F07-B8BF-A5FB763AFD81@redivi.com> Message-ID: <44480147.6080709@v.loewis.de> Bob Ippolito wrote: >> 'There are several binary formats that embody eggs, but the most common >> is '.egg' zipfile format, because it's a convenient one for distributing >> projects.' >> >> '.egg files are a "zero installation" format for a Python package;' > > single modules are also such a "zero installation" format too. So what? > > You're simply reading things between the lines that aren't there. How > about you describe exactly what parts of the documentation that lead you > to believe that eggs are designed to compete with solutions like > rpm/msi/deb so that it can be clarified? It's not just the documentation: I firmly believe that many people consider .egg files to be a distribution and package management format. People have commented that some systems (e.g. OSX) doesn't have a usable native packager, so setuptools fills a need here. This shows that people do believe that .egg files are to OSX what .deb files are to Debian. As .egg files work on Debian, too, it is natural that they compete with .deb. Phillip Eby once said (IIRC) that he doesn't want package authors to learn all the different bdist_* commands (which also require access to the target systems sometimes), and that they their life gets easier as they now only have to ship the "native" Python binary packages, namely .egg files. In this view, rpm/msi/deb have no place anymore, and are obsolete. I can readily believe that package authors indeed see this as a simplification, but I also see an increased burden on system admins in return. So if this attitude (Python Eggs are the preferred binary distribution format) is wrong, it is the attitude that has to change first. Changes to the documentation follow from that. If the attitude is right, I'll have to accept that I have a minority opinion. Regards, Martin From martin at v.loewis.de Thu Apr 20 23:51:05 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 20 Apr 2006 23:51:05 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <44480021.6030009@colorstudy.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org><e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> <44480021.6030009@colorstudy.com> Message-ID: <44480249.6050906@v.loewis.de> Ian Bicking wrote: > I don't think setuptools version requirements are enough to ensure the > integrity of all pieces of a complex system will work together. > Figuring out a self-consistent set of packages to work together is > something that is rather independent of any particular package, and > Setuptools doesn't have a facility for that. 
But it does provide the > tools to build that kind of facility, and egg-based installations > provide the sufficient metadata to report on what has been built. So I > think it is a step in the right direction. Integrating packages from a > wide variety of sources is hard. Indeed. Microsoft also thought that side-by-side installation of .NET assemblies will solve all versioning problems, as you can have as many versions installed simultaneously as you want to. It turns out that they also where wrong: at best, side-by-side installation can *help* to solve versioning problems. It might also contribute to increase them, as people stop worrying about backwards compatibility, trusting that side-by-side installation can replace proper engineering. Regards, Martin From mal at egenix.com Thu Apr 20 23:53:48 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 20 Apr 2006 23:53:48 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <4447E3F0.3080708@colorstudy.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> Message-ID: <444802EC.2000302@egenix.com> Ian Bicking wrote: > Fredrik Lundh wrote: >> skip at pobox.com wrote: >>> Maybe they know something we don't. >> oh, please. it's not like people like myself and MAL don't know anything >> about package distribution... >> >> (why is it that people who *don't* distribute stuff are a lot more im- >> pressed by a magic tool than people who've spent the last decade >> distributing stuff ? has it ever occurred to you that we know some- >> thing that you don't ?) > > There's many people packaging and distributing code with setuptools, but > very few of them are on this list. Setuptools has gotten quite a lot of > use and a lot of pushback from people, but only from the people who have > been using it. People who have complained about Setuptools without > using it have had little influence, because most of their suggestions > are only resolvable by having Setuptools simply cease to exist, which is > not very constructive. Ian, this is simply not true. The folks who have complained had good reasons to do so and most of them have been in the distribution field for a long time. There have been several attempts to get setuptools back on the track with Python standards, but all requests were ignored. Only after endless discussions, Phillip added some weird --long-option-name-that-no-one-can-remember to the setuptools "install" command that restores the standard distutils behavior which should be the default anyway. The suggestions that were made were constructive. Just look at the distutils-sig archive. Only very few of the suggestions were taken into account. Instead, setuptools fans insisted on their right to have everything "just work" in their sense of the word. Most of them probably don't even know what the distutils or Python standards are for installing packages, where the motivation to have zip imports came from and have only ever seen and used the setuptools way. It's natural to then see any suggestion to change their view on things as being counter-productive or hostile. The people who try to get setuptools back on track receive the same kind of hostility that you perceive to be getting from them. This is the reason I stopped discussing these things on distutils-sig. 
The two camps need to come together again, but that will require understanding some of the history and standards that we've had in Python ever since Python packages were introduced. It will also require the setuptools folks to get some feeling of respect for those who have worked in the field for years. You may not know it, but having worked with distutils for a long time, I have great respect for Phillip's work - it's just that I find some of his design decisions wrong, since they don't follow the standards. > But even then, I've seen Phillip make many > changes to Setuptools based on feedback from people who just wanted > Setuptools to get out of their way; but again, you have to at least be > using it to give this kind of feedback. > > And somewhat tangential, but related to much of this discussion: if > Phillip had been doing developments for distutils all this time instead > of making a package that was separately installable under a different > name, Setuptools would never have gotten the use and feedback that is > has so far received. We could turn this into a discussion about how to > handle updates to the standard library, but given the constraints I > think Setuptools was developed in the best way possible. If development > had happened here on python-dev and released to the large community only > with the next Python release, Setuptools would be far inferior to its > current state. I think that the project would have been even better than it is now, had it started out in a different way, namely Python standards compliant. But like Anthony said: we're still in alpha, so there's still some hope that we can get those few warts removed from setuptools and a true integration in place for everybody to enjoy. > And now for a little pushback the other way -- as of this January > TurboGears has served up 100,000 egg files (I'm not sure what the window > for all those downloads is, but it hasn't been very long). Has it > occurred to you that they know something you don't about distribution? > ElementTree would be among those egg files, so you should also consider > how many people *haven't* asked you about problems related to the > installation process. I wonder why people always seem to imply that installing packages has never worked before there was setuptools. There's really nothing wrong with the standard distutils two step process: 1. download and unzip the source file 2. run "python setup.py install" I've been publishing the mx Tools since 1997. Back then we only had the old Makefile.pre.in mechanism and still most people managed to get the tools working (step 2 then being: "make -f Makefile.pre.in boot; make; make install") without having to ask for help. Since I've started using distutils in 2001, the number of support questions related to compiling and installing the tools dropped to near zero. I think this is quite a success story for distutils. Unfortunately, this fact is rarely mentioned in all these setuptools discussions, probably because it's just not the latest greatest and shiniest tool in the world anymore as it was perceived in the early days. BTW: thanks to Greg Ward for creating distutils in the first place :-) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 20 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From pje at telecommunity.com Thu Apr 20 23:57:41 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 17:57:41 -0400 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447F629.8000500@v.loewis.de> References: <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> Message-ID: <5.1.1.6.0.20060420173932.0379fe90@mail.telecommunity.com> At 10:59 PM 4/20/2006 +0200, Martin v. L?wis wrote: >Bob Ippolito wrote: > > They DO NOT compete any more than source packages do. eggs are packages > > plus metadata, nothing more. What eggs do and what rpm/msi/deb does are > > orthogonal. It's entirely reasonable that in the future rpm/msi/deb > > will simply be a delivery mechanism for eggs. > >That might be your view, but it apparently isn't the view of the >inventor(s). For the record, Bob *is* the co-inventor of eggs; he was the only person who responded to my call on the distutils-sig for the creation of a format for distribution of plugins, and it is largely due to his efforts that the egg runtime exists in the first place. In addition, it was he who convinced me that they would be good for installing arbitrary Python libraries, not just plugins for extensible applications. It's true that a lot of our initial discussions and even some of the documentation seem to blur the line a bit as to what eggs are. Clarity often takes time to develop. In fact, it wasn't until the Debian packaging flap on distutils-sig last fall that we were forced to clarify our thoughts enough to realize that distribution packaging was always just a means to an end - a way of reifying the idea of a library or plugin as a self-contained and self-describing unit. Distribution and installation are certainly a part of the overall picture, but the essential nature of eggs is to make *project releases* truly first-class objects in Python. A lot of your arguments (and some of MAL's) against eggs as "unnecessary" have been based on the (incorrect) theory that Python packages (importable unit) constitute the appropriate grain of modularity for system packages (distributable unit). But ever since the distutils existed, it has been clear that distribution units are not a 1:1 mapping with packages in the Python sense of the word. The stdlib itself gives lie to that concept, since there are many packages in it, yet it is distributed as a single project release. Eggs, however represent project releases as tangible objects: the "1.1 release of FooBar" is a thing that can be installed, put on sys.path, searched for, depended on, and so forth. This is a valuable concept to have access to from Python code, independent of how these project releases actually end up on a system, or how they are physically arranged when they get there. It is intimately related to delivery and installation, of course. But to claim that they're competing with system packages is to miss the point. 
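As a short sketch of what "project releases as first-class objects" means from Python code, the pkg_resources API can look a release up, inspect it, and activate it; "FooBar" is just the placeholder project name used above:

    import pkg_resources

    # Look up the installed release of the (hypothetical) FooBar project.
    dist = pkg_resources.get_distribution("FooBar")

    print dist.project_name, dist.version   # e.g. "FooBar 1.1"
    print dist.location                     # where the release lives on disk
    for req in dist.requires():             # what it declares it depends on
        print "requires:", req

    # Releases can also be requested explicitly; this activates FooBar
    # (adding it to sys.path if it isn't already importable).
    pkg_resources.require("FooBar>=1.1")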
If you run "bdist_rpm" on a setuptools-based setup script, for example, you will get an RPM that looks just like any other RPM for a Python package, merely with the addition of an .egg-info directory that contains introspectable data. (And the same thing happens for bdist_wininst and any other bdist_* tool that uses "setup.py install --root" to build its target image.) In other words, if the delivery and installation part is handled by some other tool, then only the true "egg" remains: a project release as a discoverable and introspectable unit. That's not to say that the eggs didn't *originate* as a way to solve all of the problems in one fell swoop, but the point is that eggs do *NOT* compete with existing system packaging tools within their domain of competence. What I'm opposed to in making setuptools install things the distutils way by default is that there is no easy path to clean upgrade or installation in the absence of a system packaging tool like RPM or deb or what-have-you. I am not opposed to doing the "classic" style of installation by default *forever*, but only until setuptools can handle uninstallation and upgrades in that format. Right now, it's a lot easier to uninstall things if they are all in one directory. From pje at telecommunity.com Fri Apr 21 00:02:20 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 18:02:20 -0400 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447FC88.8040101@v.loewis.de> References: <444771B5.4050504@canterbury.ac.nz> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <444771B5.4050504@canterbury.ac.nz> Message-ID: <5.1.1.6.0.20060420175843.042035a8@mail.telecommunity.com> At 11:26 PM 4/20/2006 +0200, Martin v. L?wis wrote: >Greg Ewing wrote: > >> The "resources" name is actually quite a common meme; > > > > I believe it goes back to the original Macintosh, which > > was the first and only computer in the world to have files > > with something called a "resource fork". The resource fork > > contained pieces of data called "resources". > >I can believe that history. Still, I thought a resource >is something you can exhaust; the fork should have been >named "data fork" or just "second fork". I suspect that the distinction is that "data" sounds too much like something belonging to the *user* of the program, whereas "resources" can reasonably be construed to be something that belongs to the program itself. By now, however, the term is popularly used with GUI toolkits of all kinds to mean essentially read-only data files that are required by a program or library to function, but which are not directly part of the code. Interestingly enough, there is another "resource" tool for Python out there, that actually works by converting resource files to strings in .py files, so that you can then just import them. From ianb at colorstudy.com Fri Apr 21 00:05:33 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 20 Apr 2006 17:05:33 -0500 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <444802EC.2000302@egenix.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> Message-ID: <444805AD.3020906@colorstudy.com> >>And now for a little pushback the other way -- as of this January >>TurboGears has served up 100,000 egg files (I'm not sure what the window >>for all those downloads is, but it hasn't been very long). Has it >>occurred to you that they know something you don't about distribution? >>ElementTree would be among those egg files, so you should also consider >>how many people *haven't* asked you about problems related to the >>installation process. Really, I just shouldn't have made this argument; the discussion was going back towards a calmer and more constructive discussion and I pushed it the other way. Sorry. Please ignore. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From guido at python.org Fri Apr 21 00:14:47 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 20 Apr 2006 23:14:47 +0100 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447FC88.8040101@v.loewis.de> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <444771B5.4050504@canterbury.ac.nz> <4447FC88.8040101@v.loewis.de> Message-ID: <ca471dc20604201514j3e6a97a8g9177e27907e21b4a@mail.gmail.com> On 4/20/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > I thought a resource > is something you can exhaust; the fork should have been > named "data fork" or just "second fork". Hmm... I don't think that the English word "resource" necessarily implies that it can be exhausted. In US businesses, people are often also referred to as resources. I've come to think of it as simply "what you need". The definition at http://www.answers.com/resource has this: """ 1. Something that can be used for support or help: The local library is a valuable resource. 2. An available supply that can be drawn on when needed. Often used in the plural. 3. The ability to deal with a difficult or troublesome situation effectively; initiative: a person of resource. 4. Means that can be used to cope with a difficult situation. Often used in the plural: needed all my intellectual resources for the exam. 5. 1. resources The total means available for economic and political development, such as mineral wealth, labor force, and armaments. 2. resources The total means available to a company for increasing production or profit, including plant, labor, and raw material; assets. 3. Such means considered individually. """ #1 and #2 especially explain the usage we're considering. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Fri Apr 21 00:20:31 2006 From: mal at egenix.com (M.-A. 
Lemburg) Date: Fri, 21 Apr 2006 00:20:31 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> Message-ID: <4448092F.1050807@egenix.com> Guido van Rossum wrote: > On 4/20/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: >> 1. don't load packages out of .zip files. It's not that bad if >> software on the user's disk occupies multiple files, as long as >> there is a convenient way to get rid of them at once. >> Many problems go away if you just say no to zipimport. > > You forget that importing from a zip file is often much faster than > importing from the filesystem. That's why zipimport was added in the > first place. This is true if you only have a single ZIP file, e.g. python24.zip on sys.path. It is not if you have dozens of ZIP files on sys.path, since you have to take the initial scan time plus the memory for caching the ZIP file contents into account. Each ZIp file adds to Python startup time, since the ZIP files on sys.path are scanned for every Python startup. Here's an example of the effect of having just 20 eggs on sys.path: tmp/eggs> time python -S -c '0' 0.014u 0.006s 0:00.02 50.0% 0+0k 0+0io 0pf+0w tmp/eggs> unsetenv PYTHONPATH tmp/eggs> time python -S -c '0' 0.006u 0.003s 0:00.01 0.0% 0+0k 0+0io 0pf+0w (Reference: http://mail.python.org/pipermail/distutils-sig/2005-December/005739.html) Startup time effectively doubles, even if you don't use any of the installed eggs in your script. The main reason for adding zip imports was to simplify deploying (complete) applications to users, e.g. have the complete Python stdlib in one file. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 20 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From mal at egenix.com Fri Apr 21 00:32:01 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 21 Apr 2006 00:32:01 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060420173932.0379fe90@mail.telecommunity.com> References: <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <5.1.1.6.0.20060420173932.0379fe90@mail.telecommunity.com> Message-ID: <44480BE1.5090107@egenix.com> Phillip J. 
Eby wrote: > What I'm opposed to in making setuptools install things the distutils way > by default is that there is no easy path to clean upgrade or installation > in the absence of a system packaging tool like RPM or deb or > what-have-you. I am not opposed to doing the "classic" style of > installation by default *forever*, but only until setuptools can handle > uninstallation and upgrades in that format. Right now, it's a lot easier > to uninstall things if they are all in one directory. If that's all it takes, have a look at the uninstall implementation in mxSetup.py (e.g. from egenix-mx-base). This will work with any half decent distutils setup.py file, since it uses the original install process as basis for finding the files to uninstall. (Not my idea: credits go to Thomas Heller.) BTW, if there's interest, I can look into porting the stuff in mxSetup.py into stock distutils. That'll give it uninstall, some autoconf and support for building Unix C libs as part of the process (plus a bunch of other things). -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 21 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From p.f.moore at gmail.com Fri Apr 21 00:49:05 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 20 Apr 2006 23:49:05 +0100 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <444802EC.2000302@egenix.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> Message-ID: <79990c6b0604201549t3e947bcat90b61c3cf39b8ac3@mail.gmail.com> On 4/20/06, M.-A. Lemburg <mal at egenix.com> wrote: > I wonder why people always seem to imply that installing > packages has never worked before there was setuptools. > > There's really nothing wrong with the standard distutils > two step process: > > 1. download and unzip the source file > > 2. run "python setup.py install" [I'm very reluctant to join in this discussion, as (a) I have no packaging experience (being a pure "user" of other people's packages, and (b) I have no personal need for the fancy features in setuptools. Apologies in advance, therefore, if I misrepresent anything.] As a Windows user, I generally have 2 problems with this process. 1. There's no uninstall. 2. There is no list of what is installed. (Unless I manage these 2 manually, but that's always an option). However, if I do "python setup.py bdist_wininst" all is rosy again. The standard "Add/Remove Programs" listing becomes both my uninstaller and my list of installed packages. Initially, eggs were a problem for me, as they didn't conform to this standard. This was, as I understand it, because the package layout installed by distutils didn't include the metadata needed for the setuptools features like version checking. So, packages which used those features *couldn't* be distributed as bdist_wininst format. However, Philip addressed this. The new format with a .egg-info directory allows for egg metadata. This is automatically used in commands like bdist_wininst under setuptools. I'm left (I believe) with a couple of issues, though. 1. 
Only packages which import setuptools in their setup.py will generate egg metadata. So, for example, PIL doesn't have egg metadata by default. 2. Distributors will supply .egg files rather than bdist_wininst installers (this is already happening). (1) is an issue for packages which "require" specific versions of other packages. For example, turbogears requires elementtree - but the standard elementtree installer distributed by Fredrik doesn't include egg metadata, so it "doesn't count". I either need to install the egg as well (which doesn't go in Add/Remove Programs), or build my own elementtree installer (not always easy, as C compilers aren't available on every Windows box, for example). (2) is an issue as it means I'll end up less likely to be able to get "native" installers. It's possible that (1) can be solved by making *all* distutils installs create egg metadata. That may be what's meant by "integrating setuptools functionality into distutils". If so, I'd support this. I'm not sure how to solve (2). I have no interest in, or use for, ez_setup. I prefer to manage my version issues manually, and I don't like tools that automatically download stuff off the internet (no matter how much it's an on-request only process). That's fine - it's my choice. But if I become a minority, the demand for bdist_wininst installers diminishes to the point that no-one produces them. And *that's* a step backwards. A key benefit of distutils for me was that a standard Windows binary installer format became common. I'd hate to lose that. (And I don't feel that eggs are a comparable option - unless a complete package management solution gets added). Maybe a .egg -> .bdist_wininst converter is feasible. I don't know. If so, that may be a way out. > BTW: thanks to Greg Ward for creating distutils in the > first place :-) Agreed. And thanks to Philip for going the next step. But let's keep working to ensure that in solving a new set of problems, setuptools doesn't make things harder for people who never *had* that set of problems... Paul. From pje at telecommunity.com Fri Apr 21 01:05:41 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 19:05:41 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <79990c6b0604201549t3e947bcat90b61c3cf39b8ac3@mail.gmail.co m> References: <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> Message-ID: <5.1.1.6.0.20060420185915.01f979b8@mail.telecommunity.com> At 11:49 PM 4/20/2006 +0100, Paul Moore wrote: >It's possible that (1) can be solved by making *all* distutils >installs create egg metadata. This is already planned to be addressed in Python 2.5, via the install_egg_info added to the distutils. The implementation is already in the trunk, but not documented. (Guido approved the patch, for anybody concerned about "who said" it should go in.) It installs a standard distutils PKG-INFO file as Project-Version.egg-info alongside the installed package(s). >A key benefit of distutils for me was that >a standard Windows binary installer format became common. I'd hate to >lose that. (And I don't feel that eggs are a comparable option - >unless a complete package management solution gets added). Side note: The "nest" command framework is supposed to be that solution for platforms without system package managers, i.e. full list/delete/etc. support. 
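To make the install_egg_info behaviour described above concrete, here is a sketch of reading the metadata that a plain "setup.py install" leaves behind under Python 2.5; the project name and version in the filename are invented:

    from distutils.sysconfig import get_python_lib
    import os

    # A plain distutils install writes one metadata file next to the
    # installed package(s), named <Project>-<version>-py<X.Y>.egg-info.
    egg_info = os.path.join(get_python_lib(), "FooBar-1.1-py2.5.egg-info")
    print open(egg_info).read()
    # ...which holds the standard PKG-INFO fields, e.g.:
    #   Metadata-Version: 1.0
    #   Name: FooBar
    #   Version: 1.1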
Of course, that's a command-line tool, but it should be a lot easier to build a GUI on top of nest than on top of easy_install. >Maybe a .egg -> .bdist_wininst converter is feasible. I don't know. If >so, that may be a way out. easy_install contains the reverse: a bdist_wininst -> .egg converter. Going the other way should actually be easier, if somebody wants to take a whack at it. Might make a good test case for the not-yet-written "formats overview" document on eggs, actually. From thomas at python.org Fri Apr 21 01:06:30 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 21 Apr 2006 01:06:30 +0200 Subject: [Python-Dev] Stream codecs and _multibytecodec Message-ID: <9e804ac0604201606j3db70591o5a89495b05ecd0a2@mail.gmail.com> While merging the trunk changes into the p3yk branch, I discovered what I think is a bug in the stream codecs. It's easily reproduced in the trunk: in Lib/codecs.py, make the 'Codec' class new-style. Then, suddenly, test_codecs will crash with an exception like this: ====================================================================== ERROR: test_basics (test.test_codecs.BasicUnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/thomas/python/python/trunk/Lib/test/test_codecs.py", line 1089, in test_basics writer = codecs.getwriter(encoding)(q) File "/home/thomas/python/python/trunk/Lib/codecs.py", line 296, in __init__ self.stream = stream TypeError: readonly attribute Looking at the class, it's hard to tell how it's suddenly a read-only attribute, until you figure out which codec it breaks with: big5. Which is defined in encodings.big5, as a subclass of codecs.Codec, _multibytecodec.MultibyteStreamWriter and codecs.StreamWriter. And _multibytecodec.MultibyteStreamWriter is a new-style class (as it's defined in C) with a read-only 'stream' attribute. The conflicting attribute is silently masked by Python, since classic classes ignore two-thirds of the descriptor protocol (setting and deleting), but it jumps right out when all classes become new-style. I'm not sure whether this attribute conflict is on purpose or not, but since it will break in the future, I suggest it gets fixed. It *looks* like renaming the _MultibyteStreamWriter attribute is the easiest solution, but I don't know which API has precedence. (One of the two was added fairly recently, at least, since it didn't break in the p3yk branch until I merged in the last few months' worth of changes :) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060421/cf9e1aa1/attachment.htm From dh at triple-media.com Wed Apr 19 05:12:54 2006 From: dh at triple-media.com (Dennis Heuer) Date: Wed, 19 Apr 2006 05:12:54 +0200 Subject: [Python-Dev] python 2.5alpha and naming schemes Message-ID: <20060419051254.9b1318b4.dh@triple-media.com> I read the python 2.5alpha release notes and found that some module and class names should not make it into the official release. For example, the name of the ctypes module is ok because the module is only of special interest, but calls like py_object or create_string_buffer should definetly be changed to something more python-like. Module names like hashlib are not python-like too (too c/lowlevel-like). Please think about it... 
Dennis From williams13 at llnl.gov Wed Apr 19 17:41:33 2006 From: williams13 at llnl.gov (Dean N. Williams) Date: Wed, 19 Apr 2006 08:41:33 -0700 Subject: [Python-Dev] unrecognized command line option "-Wno-long-double" Message-ID: <44465A2D.90900@llnl.gov> Dear Python and Mac Community, I have just successfully built gcc version 4.1.0 for my Mac OS X 10.4.6. gcc -v Using built-in specs. Target: powerpc-apple-darwin8.6.0 Configured with: /usr/local/src/gcc-4.1.0/configure Thread model: posix gcc version 4.1.0 When I try to build Python 2.4.3, I get the following error: gcc -c -fno-strict-aliasing -no-cpp-precomp -mno-fused-madd -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -DPy_BUILD_CORE -o Modules/python.o Modules/python.c gcc: unrecognized option '-no-cpp-precomp' gcc: unrecognized option '-no-cpp-precomp' cc1: error: unrecognized command line option "-Wno-long-double" make: *** [Modules/python.o] Error 1 Python make failed. What is the best solution for this unrecognized command line option? Thanks for your help in advance, Dean From dh at triple-media.com Thu Apr 20 12:39:44 2006 From: dh at triple-media.com (Dennis Heuer) Date: Thu, 20 Apr 2006 12:39:44 +0200 Subject: [Python-Dev] extended bitwise operations Message-ID: <20060420123944.4d3cea56.dh@triple-media.com> I often experiment with Turing machine algorithms and play around with alternative arithmetics. I'd like to do that with Python, but it offers only the standard bitwise operators. They're fine if one wants to do manipulations on the full integer. However, I'd like to have direct (single) bit access to be able to implement my own rules, like how to deal with the carry. There is a very nice and very small implementation of a BitArray that can do this. It is under the LGPL. However, it provides only a handful of functions. Perhaps you would like to implement your own version as a Python module (I'd rather like to see a default syntax that can restrict the implemented bitwise operators to single bits or groups of bits.) Please have a look at bitarray.c: http://michael.dipperstein.com/bitlibs/bitlibs.html There are several other versions on the net. This one is more comfortable: http://www.koders.com/c/fid7F57E63EAE4B6028AAF3DCDB6414AD240CFB6CDD.aspx Thanks, Dennis From fraca7.python at free.fr Thu Apr 20 20:51:28 2006 From: fraca7.python at free.fr (=?ISO-8859-1?Q?J=E9r=F4me_Laheurte?=) Date: Thu, 20 Apr 2006 20:51:28 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <4443E597.9000407@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <4443526C.8040907@v.loewis.de> <4443E597.9000407@v.loewis.de> Message-ID: <C4600955-7A13-41D2-BC17-9049A42202DC@free.fr> On 17 Apr 2006, at 20:59, Martin v. Löwis wrote: >> OTOH, we could just as well check in an executable that >> does all that, e.g. like the one in > > I did something like this: I checked the source of a > kill_python.exe application which looks at all running processes > and tries to kill python_d.exe. After several rounds of > experimentation, this now was able to unstick Trent's build > slave. Sorry I'm late, but something like "os.popen('taskkill.exe /F /IM python_d.exe')" would have worked; we use this at work. From pje at telecommunity.com Fri Apr 21 01:14:11 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 19:14:11 -0400 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <444802EC.2000302@egenix.com> References: <4447E3F0.3080708@colorstudy.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> Message-ID: <5.1.1.6.0.20060420181648.03f2b3d8@mail.telecommunity.com> At 11:53 PM 4/20/2006 +0200, M.-A. Lemburg wrote: >Only after endless discussions, Phillip added some weird >--long-option-name-that-no-one-can-remember to the setuptools >"install" command that restores the standard distutils behavior >which should be the default anyway. And later, I also made it the default behavior for installation with --root, without any further prompting from you or anyone, in fact. >The suggestions that were made were constructive. Just look >at the distutils-sig archive. Only very few of the suggestions >were taken into account. Actually, *all* of your suggestions were *taken into account*, by which I mean I have done all I can to understand them and to understand the concerns behind them -- as I do with all requests and feedback. Some, however, will be a long time coming in implementation, or are things I simply can't agree with. Those suggestions may or may not be "right" in some sense, but they are not right *for setuptools*. But I take all suggestions into account, and sometimes I find ways to implement them. When I say "no", it simply means, "I don't see how I can do that without breaking some other constraint." It has happened repeatedly, however, that a way of doing something occurs to me months after the request, and then it is added with little fanfare. Meanwhile, I'm always going to give a higher priority to the requests that are in strong alignment with the goals of setuptools, and come from: * developers actually using setuptools, * users who are installing packages that use setuptools, or * system packagers who would like to package things that use setuptools It also helps if they are things I can actually come up with a way to implement, without violating setuptools' goals. > Instead, setuptools fans insisted on >their right to have everything "just work" in their sense >of the word. Yeah, but they're not as bad as those Python fans with their crazy insistence on significant whitespace. ;) Seriously, do you really think this is a *bad* thing? >Most of them probably don't even know what the distutils >or Python standards are for installing packages, where the >motivation to have zip imports came from and have only >ever seen and used the setuptools way. Um, yeah, because Jim Fulton is such a newbie to Python, surely *he's* never had to use Makefile.pre.in to build exotic C exten... Oh, wait, never mind. :) >The two camps need to come together again, but that will require >understanding some of the history and standards that we've >had in Python ever since Python packages were introduced. >It will also require the setuptools folks to get some >feeling of respect for those who have worked in the field >for years. I'm sorry, but this is an example of the highly patronizing attitude you've been giving on the distutils-sig since day one. As it happens, I too remember the world before distutils. With due respect, Marc, not every Python developer is you or Fredrik or Jim. There are plenty of people with things to contribute to Python who find the distutils upsetting, confusing, or worse. 
When I started doing work for OSAF, I was very surprised to find out how many developers new to Python found the distutils not merely inconvenient (which was my personal perspective) but simply unusable. Some let forth streams of profanity at the mere *mention* of the distutils. So, please consider this: to experts such as you and I, the distutils may merely be inconvenient or tedious, something to throw together an mxSetup.py or zpkgutil or some such convenient utilities to handle all your personal distribution needs and avoid the repetition and tedium. And in fact, setuptools originally began as just such a few extensions for my personal use. But to Python newbies, the distutils is a confusing, broken, undocumented, arcane, archaic, unpredictable useless piece of ****, an embarrassment to Python, far worse than Java, and many many other things. And yes, those are all things that people have actually said to me. They really, really want things to Just Work, and they say so in so many words. So when I work on setuptools, I imagine those developers, and I try to imagine what they will think and what they will expect. To be frank, you haven't shown in our discussions that you respect or value *those* developers' perspective. You seem to believe that there are other things more important than making things Just Work for this audience. I don't know what those things are, but all I can say is, to me they don't exist. I just can't see it. Because all I can see is the frustration on the faces of developers who were enthusiastic about Python until they encountered the distutils. Distutils *is* great. It's also unusable for developers who have better things to do with their time. The two are not mutually exclusive, as Fredrik has often enough pointed out regarding other problems with Python's "public face". >You may not know it, but having worked with >distutils for a long time, I have great respect for >Phillip's work - it's just that I find some of his design >decisions wrong, since they don't follow the standards. And that's where we disagree - standards exist to serve users, not the other way 'round. Standards change when times change. The choice to follow a standard is an economic decision for me, not a religious one. Where distutils works for setuptools' goals, I embraced it, and where it does not, I extended it. Call me Bill Gates if you must, but it is the *reason* that setuptools is successful, not blogosphere hype (you'll find almost as many anti-setuptools blog articles as pro- ones; the majority are actually just neutral) or screencasts and pundits (there are none). It has "fans" because its design choices are strictly Darwinian: whatever makes setuptools be adopted by more packages is the right choice for setuptools. Any other considerations have been secondary. As Guido pointed out earlier, "acceptance is critical" for a system like this. That's why the rallying cry is that things should "Just Work" -- because that's what people want. >There's really nothing wrong with the standard distutils >two step process: > >1. download and unzip the source file > >2. run "python setup.py install" First, that's three steps. "Download and unzip" doesn't come in one-step form so far as I know. Second, you're leaving out the part where you upgrade a package that renames or moves or removes a module. Your response on the distutils-sig was that the package author should write a distutils extension to do the necessary cleanup, which again highlights a complete lack of respect for package authors' time. 
Not everyone who can contribute something valuable to Python's library base is willing to or even *can* become a distutils expert. Simple use cases like upgrading or even uninstalling a library *should be simple*. You should not have to be a packaging expert to share your package with the world. I'm not saying setuptools is perfect for this either -- hence the 0.6 version number for the stable release. But it's the only thing that's even *trying*. >I've been publishing the mx Tools since 1997. Back then we >only had the old Makefile.pre.in mechanism and still most people >managed to get the tools working (step 2 then being: >"make -f Makefile.pre.in boot; make; make install") without >having to ask for help. Since I've started using distutils >in 2001, the number of support questions related to compiling >and installing the tools dropped to near zero. > >I think this is quite a success story for distutils. > >Unfortunately, this fact is rarely mentioned in all these >setuptools discussions, probably because it's just not >the latest greatest and shiniest tool in the world >anymore as it was perceived in the early days. I wish people would stop interpreting setuptools as some kind of a dig at the distutils. Does the existence of Python 2.5 mean that Python 2.4 is bad? Does the existence of third-party libraries make the standard library incomplete? From ianb at colorstudy.com Fri Apr 21 01:14:09 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 20 Apr 2006 18:14:09 -0500 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <79990c6b0604201549t3e947bcat90b61c3cf39b8ac3@mail.gmail.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <79990c6b0604201549t3e947bcat90b61c3cf39b8ac3@mail.gmail.com> Message-ID: <444815C1.70601@colorstudy.com> Paul Moore wrote: > 2. Distributors will supply .egg files rather than bdist_wininst > installers (this is already happening). Really people should at least be uploading source packages in addition to eggs; it's certainly not hard to do so. Perhaps a distributor quick intro needs to be written for the standard library. Something short; both distutils and setuptools documentation are pretty long, and for someone who just has some simple Python code to get out it's overwhelming. Fredrik also asked for a document, but I don't think it is this document; it wasn't clear to me what exactly he wanted documented. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From baptiste13 at altern.org Fri Apr 21 01:18:31 2006 From: baptiste13 at altern.org (Baptiste Carvello) Date: Fri, 21 Apr 2006 01:18:31 +0200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <200604201456.13148.anthony@interlink.com.au> References: <200604201456.13148.anthony@interlink.com.au> Message-ID: <e29560$sk5$1@sea.gmane.org> Hello, just one more datapoint to help Phillip change his mind on the default install behavior and the monkeypatching: this is actually hurting adoption of setuptools. example: the matplotlib folks wanted to add *optional* setuptool support, but this is impossible because simply importing setuptools has too much side-effects. So instead, they ask the setuptool users to use python -c "import setuptools; execfile('setup.py')" install That's just too bad, and much more confusing than setup.py install_egg ! Oh, and don't get me wrong, I *love* eggs. 
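For reference, the usual pattern for the kind of *optional* setuptools support mentioned above is a guarded import in setup.py, so plain distutils users are unaffected; the package metadata here is invented:

    try:
        from setuptools import setup
    except ImportError:
        from distutils.core import setup

    setup(
        name="example-package",
        version="0.1",
        packages=["example"],
    )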
They are great for many use cases, even for a Debian user! From the "user" point of view, they definitely belong in the stdlib, the sooner, the better. They just need a little bit more polishing. Phillip, thank you a lot for making this happen :-) I feel that the hostility is undeserved. Baptiste From pje at telecommunity.com Fri Apr 21 01:31:36 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 19:31:36 -0400 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44480BE1.5090107@egenix.com> References: <5.1.1.6.0.20060420173932.0379fe90@mail.telecommunity.com> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <5.1.1.6.0.20060420173932.0379fe90@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060420191443.04240998@mail.telecommunity.com> At 12:32 AM 4/21/2006 +0200, M.-A. Lemburg wrote: >Phillip J. Eby wrote: > > What I'm opposed to in making setuptools install things the distutils way > > by default is that there is no easy path to clean upgrade or installation > > in the absence of a system packaging tool like RPM or deb or > > what-have-you. I am not opposed to doing the "classic" style of > > installation by default *forever*, but only until setuptools can handle > > uninstallation and upgrades in that format. Right now, it's a lot easier > > to uninstall things if they are all in one directory. > >If that's all it takes, have a look at the uninstall implementation >in mxSetup.py (e.g. from egenix-mx-base). I have. As far as I can tell, it requires you to save the original setup.py, and for any dynamic considerations of that setup.py to still be in effect. What's actually needed is to save the list of outputs at *installation* time, so it's not practical to use your implementation for setuptools' purposes. I thought I'd explained this when you suggested this on the distutils-sig previously, but perhaps I forgot to. As well, there's another consideration for easy_install, which "virtualizes" the installation process for packages that don't use setuptools. Such packages may have customized their installation process by extending the distutils, *without* overriding get_outputs(). Since few people actually use the --record option for anything important, nobody notices when it breaks. It's often hard to implement correctly in the first place, so this is probably why most tools that wrap distutils prefer to use --root instead of --record anyway... which means that there's even less incentive to write correct get_outputs() methods. >BTW, if there's interest, I can look into porting the stuff in >mxSetup.py into stock distutils. That'll give it uninstall, >some autoconf and support for building Unix C libs as part of >the process (plus a bunch of other things). I imagine quite a lot of those things would be useful, except that I have personally noticed some issues through these extensions not following distutils standards. (Oh, the irony! :)) Specifically, I have noticed that your autoconf command does not use the 'build' command as the basis for obtaining its default '--compiler', which caused me quite a bit of trouble when trying to easy_install some of your packages. At least your mx_build_ext command handles this correctly. 
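A sketch of the distutils idiom at issue here: a custom command picking up its --compiler default from the top-level "build" command instead of inventing its own (this is a generic stand-in, not the actual mxSetup code):

    from distutils.cmd import Command

    class autoconf_like(Command):
        description = "example command that honours the build --compiler setting"
        user_options = [("compiler=", "c", "specify the compiler type")]

        def initialize_options(self):
            self.compiler = None

        def finalize_options(self):
            # Inherit whatever --compiler was given to "build" (on the command
            # line or in setup.cfg) rather than defaulting independently.
            self.set_undefined_options("build", ("compiler", "compiler"))

        def run(self):
            pass

    # Typically registered via setup(..., cmdclass={"autoconf_like": autoconf_like})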
In general, distutils commands take their defaults from either the "build" or "install" master commands, as this allows users to have a single place to configure default options. But I digress. I do think there are a lot of useful things in your work and it was on my list to steal as many of your ideas as possible as soon as I got to working on setuptools things that needed them. :) I certainly wouldn't object to them being in the distutils, though I'm likely to nitpick certain little things like the aforementioned compiler defaults. From barry at python.org Fri Apr 21 01:33:18 2006 From: barry at python.org (Barry Warsaw) Date: Thu, 20 Apr 2006 19:33:18 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <444802EC.2000302@egenix.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> Message-ID: <1145575998.13311.38.camel@resist.wooz.org> On Thu, 2006-04-20 at 23:53 +0200, M.-A. Lemburg wrote: > There's really nothing wrong with the standard distutils > two step process: > > 1. download and unzip the source file > > 2. run "python setup.py install" The only thing I'll add (other than put me in the "just make it work and tell me what to do" camp ;) is that I really want a simple way to say "do not install into the system Python, put everything over here and I will fiddle with my environment to make it work". I may be way out of date with the state of the art these days, but in the past, I've had a difficult time making this work for Mailman. For example, at various times we've had to distribute our own email package and Asian codecs packages. The only way I've gotten things to work is by specifying --install-lib --install-data and --install-purelib switches, which was pretty difficult (IIRC) to figure out. Again, maybe there's an easy way to do this with modern distutils, but I just want to make sure this is a use case that's on the radar. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060420/559b0d24/attachment.pgp From jcarlson at uci.edu Fri Apr 21 01:39:47 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Thu, 20 Apr 2006 16:39:47 -0700 Subject: [Python-Dev] extended bitwise operations In-Reply-To: <20060420123944.4d3cea56.dh@triple-media.com> References: <20060420123944.4d3cea56.dh@triple-media.com> Message-ID: <20060420163014.F1D7.JCARLSON@uci.edu> Dennis Heuer <dh at triple-media.com> wrote: > I often experiment with touring machine algorithms and play around with > alternative arithmetics. I'd like to do that with python but it offers > only the standard bitwise operators. They're fine if one wants to do > manipulations on the full integer. However, I'd like to have direct > (single) bit access to be able to implement own rules, like how to deal > with the carry. There is a very nice and very very small implementation > of a BitArray that can do this. It is under the LGPL. However, it > provides only a handful of functions. Possibly you like to implement an > own version as a python module (I'd rather like to see a default syntax > that can restrict the implemented bitwise operators to single bits or > groups of bits.) 
Please have a look at bitarray.c: Bitwise access to integers has been discussed in python-dev and python-list for quite a while, each time being refused for one reason or another (though it has been a while since someone has brought it up here). Since bit arrays are arguably trivial to implement in Python (I once did it in something like 20 lines), and because there hasn't been a significant push from the Python user community, I can't help but be -1. - Josiah From pje at telecommunity.com Fri Apr 21 01:43:25 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 19:43:25 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <1145575998.13311.38.camel@resist.wooz.org> References: <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> Message-ID: <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> At 07:33 PM 4/20/2006 -0400, Barry Warsaw wrote: >The only thing I'll add (other than put me in the "just make it work and >tell me what to do" camp ;) is that I really want a simple way to say >"do not install into the system Python, put everything over here and I >will fiddle with my environment to make it work". > >I may be way out of date with the state of the art these days, but in >the past, I've had a difficult time making this work for Mailman. For >example, at various times we've had to distribute our own email package >and Asian codecs packages. The only way I've gotten things to work is >by specifying --install-lib --install-data and --install-purelib >switches, which was pretty difficult (IIRC) to figure out. I'm surprised you needed --install-purelib; it seems like install-lib should have been sufficient. As for --install-data, just put your data in the packages and use Python 2.4's ability to install package data, or one of the pre-existing distutils extensions that beat install_data over the head to make it install the data with the packages. >Again, maybe there's an easy way to do this with modern distutils, but I >just want to make sure this is a use case that's on the radar. In the easy_install case, "easy_install -d wherever" puts everything (and I do mean *everything*, including scripts) under the 'wherever' directory. If you want scripts to go someplace different, there's a separate option. But if you don't specify these things, easy_install gets its defaults from those defined for distutils, specifically the --install-lib and --install-scripts options of the "install" command. However, easy_install will gripe if the "-d wherever" isn't in PYTHONPATH and isn't site-packages, and you aren't saying you don't need it to be importable. If you promise to make sure the installed packages will be on sys.path when you want to import them, you use -m or --multi-version and easy_install will happily put the packages anywhere you like and leave it to you to get them on sys.path. From aahz at pythoncraft.com Fri Apr 21 01:46:05 2006 From: aahz at pythoncraft.com (Aahz) Date: Thu, 20 Apr 2006 16:46:05 -0700 Subject: [Python-Dev] unrecognized command line option "-Wno-long-double" In-Reply-To: <44465A2D.90900@llnl.gov> References: <44465A2D.90900@llnl.gov> Message-ID: <20060420234605.GA17128@panix.com> On Wed, Apr 19, 2006, Dean N. Williams wrote: > > When I try to build Python 2.4.3, I get the following error below: Please do not spam multiple lists. 
Please read this URL before you ask for further help: http://www.catb.org/~esr/faqs/smart-questions.html -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From greg at electricrain.com Fri Apr 21 03:00:49 2006 From: greg at electricrain.com (Gregory P. Smith) Date: Thu, 20 Apr 2006 18:00:49 -0700 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <20060419051254.9b1318b4.dh@triple-media.com> References: <20060419051254.9b1318b4.dh@triple-media.com> Message-ID: <20060421010049.GZ10641@zot.electricrain.com> > Module names like hashlib are not python-like too (too c/lowlevel-like). what is python-like? hashlib was chosen because it is a library of hash functions and hash() is already taken as a builtin function (otherwise i'd leave off the lib). -g From hyeshik at gmail.com Fri Apr 21 03:29:00 2006 From: hyeshik at gmail.com (Hye-Shik Chang) Date: Fri, 21 Apr 2006 10:29:00 +0900 Subject: [Python-Dev] Stream codecs and _multibytecodec In-Reply-To: <9e804ac0604201606j3db70591o5a89495b05ecd0a2@mail.gmail.com> References: <9e804ac0604201606j3db70591o5a89495b05ecd0a2@mail.gmail.com> Message-ID: <4f0b69dc0604201829i2f9442d0lbec7939a185c84ad@mail.gmail.com> On 4/21/06, Thomas Wouters <thomas at python.org> wrote: > While merging the trunk changes into the p3yk branch, I discovered what I > think is a bug in the stream codecs. It's easily reproduced in the trunk: in > Lib/codecs.py, make the 'Codec' class new-style. Then, suddenly, test_codecs > will crash with an exception like this: [snip] > I'm not sure whether this attribute conflict is on purpose or not, but since > it will break in the future, I suggest it gets fixed. It *looks* like > renaming the _MultibyteStreamWriter attribute is the easiest solution, but I > don't know which API has precedence. The readonly attribute "stream" of _MultibyteStreamWriter is exactly equivalent to codecs.StreamWriter's and it can't be removed to work correctly. I intended to override all methods of StreamWriters. The only reason why it inherits from codecs.StreamWriter is not to break isinstance(x, StreamWriter) checks and nothing from StreamWriter is used at all. I'll verify and fix it in p3yk branch soon. Thank you for the inspection! Hye-Shik From barry at python.org Fri Apr 21 05:08:52 2006 From: barry at python.org (Barry Warsaw) Date: Thu, 20 Apr 2006 23:08:52 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> References: <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> Message-ID: <1145588932.26108.7.camel@geddy.wooz.org> On Thu, 2006-04-20 at 19:43 -0400, Phillip J. Eby wrote: > >I may be way out of date with the state of the art these days, but in > >the past, I've had a difficult time making this work for Mailman. For > >example, at various times we've had to distribute our own email package > >and Asian codecs packages. The only way I've gotten things to work is > >by specifying --install-lib --install-data and --install-purelib > >switches, which was pretty difficult (IIRC) to figure out. > > I'm surprised you needed --install-purelib; it seems like install-lib > should have been sufficient. 
> > As for --install-data, just put your data in the packages and use Python > 2.4's ability to install package data, or one of the pre-existing distutils > extensions that beat install_data over the head to make it install the data > with the packages. I should have been clearer -- these weren't /my/ packages that I was installing, they were packages that others wrote that Mailman needed, but I didn't want to require them to be in site-packages. IIRC, I need all of those switches to get some of those packages to not try to install stuff in the default (i.e. system) locations. Now that could all be because of buggy 3rd party packages, old versions of Python or distutils, or my own stupidity, so I should probably re-examine why ... or if those switches are still needed. > >Again, maybe there's an easy way to do this with modern distutils, but I > >just want to make sure this is a use case that's on the radar. > > In the easy_install case, "easy_install -d wherever" puts everything (and I > do mean *everything*, including scripts) under the 'wherever' > directory. See, now that's cool. I want that! :) > If you want scripts to go someplace different, there's a > separate option. But if you don't specify these things, easy_install gets > its defaults from those defined for distutils, specifically the > --install-lib and --install-scripts options of the "install" command. > > However, easy_install will gripe if the "-d wherever" isn't in PYTHONPATH > and isn't site-packages, and you aren't saying you don't need it to be > importable. If you promise to make sure the installed packages will be on > sys.path when you want to import them, you use -m or --multi-version and > easy_install will happily put the packages anywhere you like and leave it > to you to get them on sys.path. That's exactly what I want. I do add some additional directories to sys.path at run time, but I'm also not opposed to setting $PYTHONPATH in a makefile rule at install time. What I really want is for my Mailman installation to be completely self-contained, including the 3rd-party packages (specific versions of which) Mailman depends on. Question out of total ignorance: say I get a 3rd party package that has a standard distutils setup.py but knows nothing of setuptools. Say I install my own copy of setuptools (or have Python 2.5). Can that 3rd party package still be installed "the setuptools way" without modification? My guess is that the original packager has to do /something/ to utilize setuptools (which is fine really -- I'm mostly just curious). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060420/28b6f634/attachment.pgp From pje at telecommunity.com Fri Apr 21 05:33:12 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 23:33:12 -0400 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <1145588932.26108.7.camel@geddy.wooz.org> References: <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060420232014.01e458a8@mail.telecommunity.com> At 11:08 PM 4/20/2006 -0400, Barry Warsaw wrote: >Question out of total ignorance: say I get a 3rd party package that has >a standard distutils setup.py but knows nothing of setuptools. Say I >install my own copy of setuptools (or have Python 2.5). Can that 3rd >party package still be installed "the setuptools way" without >modification? Yes - that's what easy_install does, and that's a big part of what setuptools' monkeypatching is for. The 3rd party package thinks it's running with the distutils, but in fact it's running with setuptools. EasyInstall then just asks the package's setup.py to "bdist_egg", and once that's done, easy_install just dumps the egg into the installation location. Voila et fini. > My guess is that the original packager has to >do /something/ to utilize setuptools (which is fine really -- I'm mostly >just curious). No. All they have to do is refrain from customizing the distutils beyond setuptools' ability to function. A package that thinks it's subclassing the distutils is actually subclassing setuptools, so if they change anything too fundamental, the whole house of cards comes down. For example, Twisted assigns distribution.extensions -- an attribute that's supposed to be a list of Extension objects or tuples -- to *the number 1*. This of course causes setuptools' brain to explode. :) Their choice makes sense in distutils terms, because the distutils only checks for the truth of this attribute before it gets around to invoking the build_ext command -- which they replace. But setuptools does more complex things relating to extensions, because it needs to know how to build wrappers for them when they get put into eggs. So, this is a nice example of how complex it can be to extend the distutils in ways that don't break random popular packages. Unfortunately, I can't yet fix this issue with Twisted, because setuptools doesn't yet support optional building of extensions. That's one piece of Marc's mxSetup framework that I've been looking to copy or steal once I get to really overhauling extension and library building. My OSAF work priorities put that work at a later time than now, however. Anyway, that's a complete digression from the question you asked. As long as Mailman doesn't depend on building something like Numeric or Twisted, you can probably wrap it in easy_install. If it's not something that uses C code, you can just prebuild cross-platform eggs using easy_install. If it uses C code, you'd be able to build it on the target system. I don't know what Python versions you support, though, and setuptools only goes back to Python 2.3. (Gotta give Google *some* incentive to upgrade.. ;) Although seriously it's because zipimport came in as of 2.3.) From pje at telecommunity.com Fri Apr 21 05:36:16 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 20 Apr 2006 23:36:16 -0400 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <e29560$sk5$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <200604201456.13148.anthony@interlink.com.au> Message-ID: <5.1.1.6.0.20060420193222.03f69e80@mail.telecommunity.com> At 01:18 AM 4/21/2006 +0200, Baptiste Carvello wrote: >So instead, they ask the setuptool users to use >python -c "import setuptools; execfile('setup.py')" install >That's just too bad, and much more confusing than setup.py install_egg ! Actually, the setuptools users just run "easy_install matplotlib", following which they get an error saying to install numpy first. Then they try to easy_install numpy, only there are no download links on PyPI, so they have to track down the URL for that and paste it on the command line, and wait forever for it to download from Sourceforge. However, once they've overcome that hurdle (which would be solved by adding a download URL to the PyPI entry for numpy), they rerun "easy_install matplotlib", and away they go. I don't know why some setuptools users demand that package authors use setuptools. It simply isn't necessary that they do so, for people to be able to use easy_install; they just need a not-too-outlandish setup.py and a PyPI entry that either links to downloads, or to pages that have direct download links. None of that requires them to use setuptools. My hope in the long term is that Grig's Cheesecake project will address this, at least if PyPI gains the ability to automatically display (and periodically update) packages' ratings for installability, etc. If EasyInstall has a hard time finding where to download a package, that's a pretty good indication that it's more difficult for human users as well. From anthony at interlink.com.au Fri Apr 21 05:38:29 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Fri, 21 Apr 2006 13:38:29 +1000 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e28ggv$o5n$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> Message-ID: <200604211338.31105.anthony@interlink.com.au> On Friday 21 April 2006 03:31, Fredrik Lundh wrote: > skip at pobox.com wrote: > > Maybe they know something we don't. > > oh, please. it's not like people like myself and MAL don't know > anything about package distribution... > > (why is it that people who *don't* distribute stuff are a lot more > im- pressed by a magic tool than people who've spent the last > decade distributing stuff ? has it ever occurred to you that we > know some- thing that you don't ?) Er - what? I've been distributing, packaging and installing any number of Python packages for 10 years or more now. Sure, none of them are on the popularity scale of PIL or the mx.* tools, but there's been a lot of them. I also understand distutils pretty well. And I _am_ impressed by some of the features added by setuptools. As far as "knowing something you don't" - well, yes. You and MAL are undoubtedly extremely knowledgable about distutils. But it should not be a requirement for package authors that they become an expert in the packaging software just to make a release! I said it before, and I'll say it again: The audience for Python should not only consist of the people on python-dev. Anthony From kbk at shore.net Fri Apr 21 06:09:17 2006 From: kbk at shore.net (Kurt B. 
Kaiser) Date: Fri, 21 Apr 2006 00:09:17 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200604210409.k3L49H9n031160@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 371 open (-12) / 3195 closed (+39) / 3566 total (+27) Bugs : 908 open (+22) / 5767 closed ( +8) / 6675 total (+30) RFE : 211 open ( +1) / 212 closed ( +0) / 423 total ( +1) New / Reopened Patches ______________________ Fix test_augassign in p3yk (2006-04-14) CLOSED http://python.org/sf/1470424 opened by Thomas Wouters fix test_exceptions in p3yk (2006-04-14) CLOSED http://python.org/sf/1470459 opened by Thomas Wouters fix test_bisect (2006-04-14) CLOSED http://python.org/sf/1470460 opened by Thomas Wouters partially fix test_class in p3yk (2006-04-14) CLOSED http://python.org/sf/1470504 opened by Thomas Wouters fix test_descrtut in p3yk (2006-04-14) CLOSED http://python.org/sf/1470515 opened by Thomas Wouters fix test_enumerate in p3yk (2006-04-14) CLOSED http://python.org/sf/1470522 opened by Thomas Wouters Fix test_descr in p3yk (2006-04-14) CLOSED http://python.org/sf/1470536 opened by Thomas Wouters fix test_getargs2 in p3yk (2006-04-14) CLOSED http://python.org/sf/1470543 opened by Thomas Wouters Bugfix for #1470540 (XMLGenerator cannot output UTF-16) (2006-04-15) http://python.org/sf/1470548 opened by Nikolai Grigoriev fix test_inspect in p3yk (2006-04-14) CLOSED http://python.org/sf/1470560 opened by Thomas Wouters fix test_itertools in p3yk (2006-04-14) CLOSED http://python.org/sf/1470566 opened by Thomas Wouters urllib2 ProxyBasicAuthHandler broken (2006-04-15) http://python.org/sf/1470846 opened by John J Lee Building Python with MS Free Compiler (2006-04-15) CLOSED http://python.org/sf/1470875 opened by Paul Moore Fix for urllib/urllib2 ftp bugs 1357260 and 1281692 (2006-04-15) http://python.org/sf/1470976 opened by John J Lee Forbid iteration over strings (2006-04-16) http://python.org/sf/1471291 opened by Guido van Rossum start testing strings > 2GB (2006-04-16) http://python.org/sf/1471578 opened by Neal Norwitz test for broken poll at runtime (2006-04-17) http://python.org/sf/1471761 opened by Ronald Oussoren --enable-universalsdk on Mac OS X (2006-04-17) http://python.org/sf/1471883 opened by Ronald Oussoren Weak linking support for OSX (2006-04-17) http://python.org/sf/1471925 opened by Ronald Oussoren pdb 'clear' command doesn't clear selected bp's (2006-04-18) http://python.org/sf/1472184 opened by Kuba Ko??czyk fix for #1472251 (2006-04-18) http://python.org/sf/1472257 opened by Kuba Ko??czyk fix for #1472251 (2006-04-18) http://python.org/sf/1472263 opened by Kuba Ko??czyk Updates to PEP 359 and 3002 (2006-04-18) CLOSED http://python.org/sf/1472459 opened by Steven Bethard make range be xrange (2006-04-19) http://python.org/sf/1472639 opened by Thomas Wouters rlcompleter to be usable without readline (2006-04-19) http://python.org/sf/1472854 opened by kxroberto Improve docs for tp_clear and tp_traverse (2006-04-19) http://python.org/sf/1473132 opened by Collin Winter Add a gi_code attr to generators (2006-04-19) http://python.org/sf/1473257 opened by Collin Winter Patches Closed ______________ fix for 764437 AF_UNIX socket special linux socket names (2004-11-07) http://python.org/sf/1062014 closed by arigo Make -tt the default (2006-04-12) http://python.org/sf/1469190 closed by twouters Alternative to rev 45325 (2006-04-12) http://python.org/sf/1469594 closed by montanaro Compiling and linking main() with C++ compiler (2005-10-12) 
http://python.org/sf/1324762 closed by loewis Python-ast.h & Python-ast.c generated twice (#1355883) (2005-11-13) http://python.org/sf/1355971 closed by loewis don't add -OPT:Olimit=0 for icc (2005-03-12) http://python.org/sf/1162023 closed by loewis improved configure.in output (2004-10-12) http://python.org/sf/1045620 closed by loewis environment parameter for popen2 (2003-02-28) http://python.org/sf/695275 closed by loewis Fix test_augassign in p3yk (2006-04-14) http://python.org/sf/1470424 closed by twouters Kill off docs for unsafe macros (2003-03-13) http://python.org/sf/702933 closed by loewis fix test_exceptions in p3yk (2006-04-14) http://python.org/sf/1470459 closed by twouters fix test_bisect in p3yk (2006-04-14) http://python.org/sf/1470460 closed by twouters IDNA codec simplification (2006-03-18) http://python.org/sf/1453235 closed by doerwalter partially fix test_class in p3yk (2006-04-14) http://python.org/sf/1470504 closed by twouters fix test_descrtut in p3yk (2006-04-14) http://python.org/sf/1470515 closed by twouters fix test_enumerate in p3yk (2006-04-14) http://python.org/sf/1470522 closed by twouters Fix test_descr in p3yk (2006-04-14) http://python.org/sf/1470536 closed by twouters fix test_getargs2 in p3yk (2006-04-14) http://python.org/sf/1470543 closed by twouters fix test_inspect in p3yk (2006-04-14) http://python.org/sf/1470560 closed by twouters fix test_itertools in p3yk (2006-04-14) http://python.org/sf/1470566 closed by twouters urllib2 dloads failing through HTTP proxy w/ auth (2005-04-18) http://python.org/sf/1185444 closed by loewis Using size_t where appropriate (2005-03-18) http://python.org/sf/1166195 closed by loewis python-config (2005-03-12) http://python.org/sf/1161914 closed by loewis socketmodule.c's recvfrom on OSF/1 4.0 (2005-04-27) http://python.org/sf/1191065 closed by loewis wrong offsets in bpprint() (2005-04-28) http://python.org/sf/1191700 closed by loewis Use GCC4 ELF symbol visibility (2005-04-30) http://python.org/sf/1192789 closed by loewis _ssl.mak Makefile patch (Win32) (2005-05-07) http://python.org/sf/1197150 closed by loewis Bugfix for signal-handler on x64 Platform (2005-05-20) http://python.org/sf/1205436 closed by loewis option to allow reload to affect existing instances (2005-06-01) http://python.org/sf/1212921 closed by loewis More robust MSVC6 finder (2003-07-31) http://python.org/sf/780821 closed by loewis Incremental codecs (2006-02-21) http://python.org/sf/1436130 closed by doerwalter declspec for ssize_t (2006-03-12) http://python.org/sf/1448484 closed by loewis Building Python with MS Free Compiler (2006-04-15) http://python.org/sf/1470875 closed by loewis Tkinter clipboard_get method (new in Tk 8.4) (2004-11-10) http://python.org/sf/1063914 closed by loewis byteorder issue in PyMac_GetOSType (2005-06-10) http://python.org/sf/1218421 closed by ronaldoussoren Direct framework linking for MACOSX_DEPLOYMENT_TARGET < 10.3 (2005-01-07) http://python.org/sf/1097739 closed by jackjansen breakpoint command lists in pdb (2003-08-18) http://python.org/sf/790710 closed by loewis Updates to PEP 359 and 3002 (2006-04-18) http://python.org/sf/1472459 closed by goodger Log gc times when DEBUG_STATS set (2005-01-11) http://python.org/sf/1100294 closed by montanaro New / Reopened Bugs ___________________ mailbox.PortableUnixMailbox fails to parse 'From ' in body (2006-04-14) http://python.org/sf/1470212 opened by Lars Kellogg-Stedman QNX4.25 port (2006-04-14) CLOSED http://python.org/sf/1470300 opened by kbob_ru test_ctypes fails on 
FreeBSD 4.x (2006-04-14) CLOSED http://python.org/sf/1470353 opened by Andrew I MacIntyre Logging doesn't report the correct filename or line numbers (2006-04-14) http://python.org/sf/1470422 opened by Michael Tsai Mention coercion removal in Misc/NEWS (2006-04-14) CLOSED http://python.org/sf/1470502 opened by Brett Cannon Error in PyGen_NeedsFinalizing() (2006-04-14) CLOSED http://python.org/sf/1470508 opened by Tim Peters XMLGenerator creates a mess with UTF-16 (2006-04-15) http://python.org/sf/1470540 opened by Nikolai Grigoriev Visual Studio 2005 CRT error handler (2006-04-16) http://python.org/sf/1471243 opened by ShireII setup.py --help-commands exception (2006-04-16) http://python.org/sf/1471407 opened by James William Pye reload(ctypes) leaks references (2006-04-16) CLOSED http://python.org/sf/1471423 opened by Thomas Heller tarfile.py chokes on long names (2006-04-16) http://python.org/sf/1471427 opened by Alexander Schremmer IDLE does not start 2.4.3 (2006-04-17) http://python.org/sf/1471806 opened by Erin Python libcrypt build problem on Solaris 8 (2006-04-17) http://python.org/sf/1471934 opened by Paul Eggert curses mvwgetnstr build problem on Solaris 8 (2006-04-17) http://python.org/sf/1471938 opened by Paul Eggert mimetools module getencoding error (2006-04-17) http://python.org/sf/1471985 opened by James G. sack Recursive class instance "error" (2002-03-20) http://python.org/sf/532646 reopened by bcannon sgmllib do_tag description error (2006-04-18) http://python.org/sf/1472084 opened by James G. sack interactive: no cursors ctrl-a/e... in 2.5a1/linux/debian (2006-04-18) http://python.org/sf/1472173 opened by kxroberto pdb 'clear' command doesn't clear selected bp's (2006-04-18) http://python.org/sf/1472191 opened by Kuba Ko??czyk pdb 'run' crashes when the it's first argument is non-string (2006-04-18) http://python.org/sf/1472251 opened by Kuba Ko??czyk import module with .dll extension (2006-04-18) http://python.org/sf/1472566 opened by svenn 32/64bit pickled Random incompatiblity (2006-04-19) http://python.org/sf/1472695 opened by Peter Maxwell Bug in threadmodule.c:local_traverse (2006-04-18) CLOSED http://python.org/sf/1472710 opened by Collin Winter xml.sax.saxutils.XMLGenerator mangles \r\n\t in attributes (2006-04-19) http://python.org/sf/1472827 opened by Mikhail Gusarov Tix: Subwidget names (2006-04-19) http://python.org/sf/1472877 opened by Matthias Kievernagel shutil.copytree debug message problem (2006-04-19) http://python.org/sf/1472949 opened by Christophe DUMEZ valgrind detects problems in PyObject_Free (2006-04-19) CLOSED http://python.org/sf/1473000 opened by Jaime Torres Amate SimpleXMLRPCServer responds to any path (2006-04-19) http://python.org/sf/1473048 opened by A.M. 
Kuchling urllib2.Request constructor to urllib.quote the url given (2006-04-20) http://python.org/sf/1473560 opened by Nikos Kouremenos cPickle produces locale-dependent dumps (2006-04-20) http://python.org/sf/1473625 opened by Ivan Vilata i Balaguer TempFile can hang on Windows (2006-04-20) http://python.org/sf/1473760 opened by Tim Peters test test_capi crashed -- thread.error: can't allocate lock (2006-04-21) http://python.org/sf/1473979 opened by shashi Bugs Closed ___________ QNX4.25 port (2006-04-14) http://python.org/sf/1470300 closed by loewis test_ctypes fails on FreeBSD 4.x (2006-04-14) http://python.org/sf/1470353 closed by theller make depend/clean issues w/ ast (2005-11-13) http://python.org/sf/1355883 closed by loewis configure incorrectly adds -OPT:Olimit=0 for icc (2005-03-12) http://python.org/sf/1162001 closed by hoffmanm Mention coercion removal in Misc/NEWS (2006-04-14) http://python.org/sf/1470502 closed by nnorwitz Error in PyGen_NeedsFinalizing() (2006-04-14) http://python.org/sf/1470508 closed by tim_one bdist_wininst preinstall script support is broken in 2.5a1 (2006-04-06) http://python.org/sf/1465834 closed by theller reload(ctypes) leaks references (2006-04-16) http://python.org/sf/1471423 closed by theller Bug in threadmodule.c:local_traverse (2006-04-18) http://python.org/sf/1472710 deleted by collinwinter valgrind detects problems in PyObject_Free (2006-04-19) http://python.org/sf/1473000 closed by nnorwitz New / Reopened RFE __________________ "idna" encoding (drawing unicodedata) necessary in httplib? (2006-04-18) http://python.org/sf/1472176 opened by kxroberto From barry at python.org Fri Apr 21 06:20:32 2006 From: barry at python.org (Barry Warsaw) Date: Fri, 21 Apr 2006 00:20:32 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <5.1.1.6.0.20060420232014.01e458a8@mail.telecommunity.com> References: <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> <5.1.1.6.0.20060420232014.01e458a8@mail.telecommunity.com> Message-ID: <1145593232.26109.44.camel@geddy.wooz.org> Thanks for all the great information Phillip. On Thu, 2006-04-20 at 23:33 -0400, Phillip J. Eby wrote: > Anyway, that's a complete digression from the question you asked. As long > as Mailman doesn't depend on building something like Numeric or Twisted, > you can probably wrap it in easy_install. If it's not something that uses > C code, you can just prebuild cross-platform eggs using easy_install. If > it uses C code, you'd be able to build it on the target system. > > I don't know what Python versions you support, though, and setuptools only > goes back to Python 2.3. (Gotta give Google *some* incentive to > upgrade.. ;) Although seriously it's because zipimport came in as of 2.3.) For now, we won't change anything for Mailman 2.1, which supports back to Python 2.1, but that's getting more painful by the day. ;) For Mailman 2.2, we'll require a minimum of Python 2.3 (I wish I could set that at 2.4 but too many distros/OSes do not yet provide 2.4 that that is infeasible). And at the moment the only requirement is the email package, and I know who to bug about that one <wink>. 
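Picking up the -m/--multi-version point quoted above: the run-time half of that promise is usually just a couple of lines at application startup. A minimal sketch (the path and requirement strings are hypothetical, not Mailman's actual layout):

    import site
    # Make the private install directory (e.g. populated with
    # "easy_install -m -d <dir>") visible, .pth files included.
    site.addsitedir("/usr/local/mailman/pythonlib")   # hypothetical path

    # Import after the path is extended so the egg metadata there is found,
    # then activate the exact versions the application was tested with.
    import pkg_resources
    pkg_resources.require("email>=3.0")               # hypothetical requirement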
It's very possible we'll require one of the popular templating packages, and maybe other external packages, although I don't think Twisted itself will be needed (perhaps for the mythical Mailman 3). Only Tim would want Numeric in Mailman though as I've heard that he calculates the optimum South Park watching time based on some complex algorithm involving the email he receives. He's explained it to me several times, but I'm too dumb to get it. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060421/9012fb76/attachment.pgp From martin at v.loewis.de Fri Apr 21 07:07:17 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 07:07:17 +0200 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <20060419051254.9b1318b4.dh@triple-media.com> References: <20060419051254.9b1318b4.dh@triple-media.com> Message-ID: <44486885.3000404@v.loewis.de> Dennis Heuer wrote: > Module names > like hashlib are not python-like too (too c/lowlevel-like). I agree with Greg: hashlib is a Pythonic name for a module, just like httplib, mhlib, xmlrpclib, cookielib, contextlib, difflib, ... OTOH, it might be indeed that the ctypes name need to be aligned with naming conventions. Regards, Martin From ronaldoussoren at mac.com Fri Apr 21 08:00:04 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 21 Apr 2006 08:00:04 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447F852.8070502@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <1225473E-39CA-4840-AF61-DEA403D734F8@mac.com> <4447F852.8070502@v.loewis.de> Message-ID: <9EC47CB3-A186-4E13-BB5E-F0527A4A3D2C@mac.com> On 20-apr-2006, at 23:08, Martin v. Löwis wrote: > Ronald Oussoren wrote: >> As far as I understand the issues they compete up to a point, but >> should >> also make it easier to create platform packages that contain >> the >> proper dependencies because those are part of machine-readable >> meta-data >> instead of being written down in readme files. Oddly enough that was >> also the objection from one linux distribution maintainer: somehow >> his >> opinion was that the author of a package couldn't possibly know the >> right dependencies for it. > > What he can't possibly know is the *name* of the package he depends > on. > For example, a distutils package might be called 'setuptools', so > developers of additional packages might depend on 'setuptools'. > However, > on Debian, the dependency should be different: The package should > depend > on either 'python-setuptools' or 'python2.3-setuptools', depending > on details which are off-topic here. But when the dependency is written down, the platform maintainer, such as the Debian developers, could create a mapping from distutils package names to platform package names. And this could be done using rules instead of a mapping table (e.g. the platform package is 'python%(pythonver)s-%(distutilsname)s'). The important bit is that dependencies are present in a machine-readable form, and thanks to easy_install you can be pretty sure that these are actually useable.
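As an illustration of that rule-based mapping idea (the rule string and version are examples, not actual Debian policy), the whole thing can be as small as:

    def platform_package_name(distutils_name, pyver="2.3",
                              rule="python%(pyver)s-%(name)s"):
        # Map a distutils project name to a platform package name.
        return rule % {"pyver": pyver, "name": distutils_name.lower()}

    print(platform_package_name("setuptools"))          # python2.3-setuptools
    print(platform_package_name("setuptools", "2.4"))   # python2.4-setuptools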
> >> As for platform packages: not all platforms have useable packaging >> systems. >> MacOSX is one example of those, the system packager is an >> installer and >> doesn't include an uninstaller. Eggs make it a lot easier to >> manage python >> software in such an environment (and please don't point me to Fink or >> DarwinPorts on OSX, those have serious problems of their own). > > Isn't uninstallation just a matter of deleting a directory? If I know > that I want to uninstall the Python package 'foo', I just delete > its directory. Now, with setuptools, I might have multiple versions > installed, so I have to chose (in Finder) which of them I want to > delete. Not always. Several distutils packages install multiple toplevel modules or packages and the names of those aren't always obvious. If they'd use extra_path I could still remove a directory and .pth file, but that option isn't used a lot. > > That doesn't require Eggs to be single-file zipfiles; deleting > a directory would work just as will (and I believe it will work > with ez_install, too). Egg directories (which are basically just the same as packages using .pth files) also work for this and that's what I usually install. Setuptools can work with python extension inside .zip files, but I'm not entirely comfortable with that. > >>> Package >>> authors will refuse to produce them, putting the burden of package >>> maintenance (what packages are installed, what are their >>> dependencies, >>> when should I remove a package) onto the the end user/system >>> administrator. >> >> Philip has added specific support for them: it is possible to install >> packages in the tradition way but with some additional files that >> tell >> setuptools about installed packages. > > As a system administrator, I don't *want* to learn how to install > Python packages. I know how to install RPMs (or MSIs, or whatever > system I manage); this should be good enough. If "this Python junk" > comes with its own installer procedure, I will hate it. But setuptools doesn't make it impossible to use system packages. As long as those system packages also install the EGG-INFO files packages that make full use of setuptools will be happy as well. I agree about system packages when I'm in my system administrator role, but when I'm in my developer role it would be very convenient to have cross-platform tools that enable me to quickly install software and tell me what I've got installed. Ronald From simonwittber at gmail.com Fri Apr 21 09:12:20 2006 From: simonwittber at gmail.com (Simon Wittber) Date: Fri, 21 Apr 2006 15:12:20 +0800 Subject: [Python-Dev] [pypy-dev] Python Software Foundation seeks mentors and students for Google Summer of Code In-Reply-To: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> Message-ID: <4e4a11f80604210012t2edc6813q975b5c2db794c904@mail.gmail.com> On 4/20/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > This spring and summer, Google will again provide stipends > for students (18+, undergraduate thru PhD programs) to write > new open-source code. <snip> > People interested in mentoring a student though PSF are encouraged to > contact me, Neal Norwitz at nnorwitz at gmail.com. People unknown to Guido > or myself should find a couple of people known within the Python community > who are willing to act as references. 
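For what it's worth, the "tell me what I've got installed" part already falls out of the egg metadata. A minimal sketch using the pkg_resources module that ships with setuptools (the output format is made up):

    import pkg_resources

    for dist in sorted(pkg_resources.working_set,
                       key=lambda d: d.project_name.lower()):
        print("%s %s (%s)" % (dist.project_name, dist.version, dist.location))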
I (and my employer, Digital Ventures) are prepared to allocate a small amount of time to mentor students who are working on any game or multimedia related projects. As a company, this is a new area of business development which we are investigating, and we are therefore very keen to see Python develop further along these lines. -Simon Wittber From theller at python.net Fri Apr 21 09:21:20 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 09:21:20 +0200 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <44486885.3000404@v.loewis.de> References: <20060419051254.9b1318b4.dh@triple-media.com> <44486885.3000404@v.loewis.de> Message-ID: <e2a15h$325$1@sea.gmane.org> Martin v. L?wis wrote: > Dennis Heuer wrote: >> Module names >> like hashlib are not python-like too (too c/lowlevel-like). > > I agree with Greg: hashlib is a Pythonic name for a module, > just like httplib, mhlib, xmlrpclib, cookielib, contextlib, > difflib, ... > > OTOH, it might be indeed that the ctypes name need to be > aligned with naming conventions. > Personally, I *like* the ctypes name. But I'm open for suggestions, and it might have intersting consequences if the Python core package would be renamed to something else. Any suggestions? Thomas From mateusz.rukowicz at vp.pl Fri Apr 21 09:33:29 2006 From: mateusz.rukowicz at vp.pl (Mateusz Rukowicz) Date: Fri, 21 Apr 2006 09:33:29 +0200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. Message-ID: <44488AC9.10304@vp.pl> I wish to participate in Google Summer of Code as a python developer. I have few ideas, what would be improved and added to python. Since these changes and add-ons would be codded in C, and added to python-core and/or as modules,I am not sure, if you are willing to agree with these ideas. First of all, I think, it would be good idea to speed up long int implementation in python. Especially multiplying and converting radix-2^k to radix-10^l. It might be done, using much faster algorithms than already used, and transparently improve efficiency of multiplying and printing/reading big integers. Next thing I would add is multi precision floating point type to the core and fraction type, which in some cases highly improves operations, which would have to be done using floating point instead. Of course, math module will need update to support multi precision floating points, and with that, one could compute asin or any other function provided with math with precision limited by memory and time. It would be also good idea to add function which computes pi and exp with unlimited precision. And last thing - It would be nice to add some number-theory functions to math module (or new one), like prime-tests, factorizations etc. Every of above improvements would be coded in pure C, and without using external libraries, so portability of python won't be cracked, and no new dependencies will be added. I am waiting for your responses about my ideas, which of these you think are good, which poor etc. Main reason I am posting my question here is that, I am not sure, how much are you willing to change the core of the python. At the end, let my apologize for my poor English, I am trying to do my best. Best regards, Mateusz Rukowicz. From mwh at python.net Fri Apr 21 09:38:27 2006 From: mwh at python.net (Michael Hudson) Date: Fri, 21 Apr 2006 08:38:27 +0100 Subject: [Python-Dev] Raising objections In-Reply-To: <20060420133343.GA2111@rogue.amk.ca> (A. M. 
Kuchling's message of "Thu, 20 Apr 2006 09:33:43 -0400") References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> Message-ID: <2mvet3idjg.fsf@starship.python.net> "A.M. Kuchling" <amk at amk.ca> writes: > On Thu, Apr 20, 2006 at 07:53:55AM +0200, "Martin v. Löwis" quoted: >> > It is flatly not possible to "fix" distutils and preserve backwards >> > compatibility. > > Would it be possible to figure what parts are problematic, and > introduce PendingDeprecationWarnings or DeprecationWarnings so > that we can fix things? Broadly speaking, no. Cheers, mwh (I've looked into the distutils source too, and have the claw marks on my soul to prove it) -- I'm sorry, was my bias showing again? :-) -- William Tanksley, 13 May 2000 From tomerfiliba at gmail.com Fri Apr 21 10:34:44 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Fri, 21 Apr 2006 10:34:44 +0200 Subject: [Python-Dev] bin codec + EOF Message-ID: <1d85506f0604210134p6da01bf3pa3ce2dcdf45775bb@mail.gmail.com> yeah, i came to realize nothing helpful will ever come out from this list, so i might as well stop trying. but i have one last thing to try. i'm really missing a binary codec, just like the hex codec, for doing things like >>> "abc".encode("bin") "011000010110001001100011" >>> "011000010110001001100011".decode("bin") "abc" decode should also support padding, either None, "left", or "right", i.e., >>> "1".decode("bin") # error >>> "1".decode("bin", "left") # 00000001 '\x01' >>> '1'.decode("bin", "right") # 10000000 '\x80' i've written a codec by myself for that, it's not that complicated, but the problem is, because it's not in the standard encodings package, i can't distribute code that uses it easily: * i have to distribute it with every release, or i'll enter a dependency nightmare. * i'd have to provide setup instructions for the codec ("put this file in ../encodings") and rely on users' abilities. it's not that i think my users are incapable of doing so, but my distro shouldn't rely on my users. * upgrading is more complicated (need to fix several locations) * i don't want to write intrusive setups. i like to keep my releases as simple as zip files that the user just needs to extract to /site-packages, with no configuration or setup hassle. setups are too dangerous imho, you can't tell if they'd mess up your configuration. so i resorted to encode_bin and decode_bin functions that are included with my code. of course this codec should be written in c to make it faster, as it's quite a bottleneck in my code currently. it's a trivial and useful codec that should be included in the stdlib. saying "you can write it for yourself" is true, but the same can be said about "hex", which can even be written as a one-liner: "".join("%02x" % (ord(ch),) for ch in "abc") so now that it's said, feel free to (-1) it by the millions. don't worry, i will not waste my or your time anymore. -tomer -------------- next part -------------- An HTML attachment was scrubbed...
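For reference, a rough pure-Python sketch of the encode_bin/decode_bin helpers described above (not the poster's actual code, and with exactly the speed problem that motivates asking for a C version in the stdlib):

    def encode_bin(data):
        # 'abc' -> '011000010110001001100011'
        bits = []
        for ch in data:
            n = ord(ch)
            for shift in range(7, -1, -1):
                bits.append(str((n >> shift) & 1))
        return "".join(bits)

    def decode_bin(bits, pad=None):
        # pad=None requires a multiple of 8 bits; "left"/"right" zero-pad.
        if len(bits) % 8:
            width = (len(bits) + 7) // 8 * 8
            if pad == "left":
                bits = bits.zfill(width)
            elif pad == "right":
                bits = bits.ljust(width, "0")
            else:
                raise ValueError("bit string length is not a multiple of 8")
        return "".join(chr(int(bits[i:i + 8], 2))
                       for i in range(0, len(bits), 8))

    assert encode_bin("abc") == "011000010110001001100011"
    assert decode_bin("1", "left") == "\x01"
    assert decode_bin("1", "right") == "\x80"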
URL: http://mail.python.org/pipermail/python-dev/attachments/20060421/5772205f/attachment.htm From greg.ewing at canterbury.ac.nz Fri Apr 21 10:34:58 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 20:34:58 +1200 Subject: [Python-Dev] Distutils thoughts In-Reply-To: <20060420133343.GA2111@rogue.amk.ca> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> Message-ID: <44489932.20605@canterbury.ac.nz> While we're on the subject of distutils revision, here are a few things I've encountered about distutils which seem less than desirable. * There doesn't seem to be a way of supplying options on the command line for anything except the top-level command. Sometimes, e.g. I want to do an "install" but override some options for the "build_ext" that gets invoked by the install. But distutils won't let me, because those options are not recognised by the "install" command itself. If I try to get around this by doing a "build_ext" explicitly, then "install", the build_ext gets done again the second time around, without my options. I know I can do this using a config file, but the details of that don't seem to fit in my brain, and I have to go looking up the docs. Come to that, almost none of distutils seems to fit in my brain. Whenever I go to write a new setup.py I have to go and read the docs to remind myself what it's supposed to look like. * It seems to be next to impossible to override some of the options that get passed to the C compiler. For Pyrex I'd very much like to be able to turn off the -Wall that it seems to insist on using, but I haven't been able to find a way. Last time I tried, I got lost in a maze of twisty method calls, all alike. * The mechanisms for extending it seem clumsy. Since there's an Extension class, the you'd think the obvious way to add support for Pyrex would be to define a PyrexExtension type that embodied all the necessary knowledge to deal with a .pyx file. But that's not the way it seems to work. The current Pyrex distutils extension (which I didn't write) works by hijacking a pre-existing method designed for processing swig files. If that's really the cleanest way it can be done, something is badly wrong with the design. This is what I meant when I said I'd like a more Scons-like approach. Someone complained that Scons was too magical. I don't necessarily want it to behave exactly like Scons, but just to have the functionality divided into independent parts that can be composed in the manner of Scons, so that I could add a component for processing .pyx -> .c that builds on the existing components for dealing with .c files, and do it in an obvious and straight- forward way (and hopefully without having to be excessively Dutch in order to see the obviousness of it:-). 
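To make the "obvious way" mentioned a couple of paragraphs back concrete, here is roughly the shape being described (hypothetical classes; this is deliberately *not* how the real Pyrex distutils extension is written, which is the complaint above):

    from distutils.core import Extension
    from distutils.command.build_ext import build_ext as _build_ext

    class PyrexExtension(Extension):
        """An Extension whose sources may include .pyx files."""

    class build_pyrex_ext(_build_ext):
        def build_extension(self, ext):
            if isinstance(ext, PyrexExtension):
                ext.sources = [self._pyx_to_c(s) for s in ext.sources]
            _build_ext.build_extension(self, ext)

        def _pyx_to_c(self, source):
            if not source.endswith(".pyx"):
                return source
            target = source[:-4] + ".c"
            # Placeholder: invoke the Pyrex compiler here; the exact entry
            # point is beside the point being made in the post.
            self.announce("would compile %s -> %s" % (source, target))
            return target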
-- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 10:43:24 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 20:43:24 +1200 Subject: [Python-Dev] proposal: evaluated string In-Reply-To: <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> References: <1d85506f0604200820m66a96ff8g6ba35b30363acdc1@mail.gmail.com> <4447CBC4.3080503@ewtllc.com> <4447CC34.8060909@ewtllc.com> <1d85506f0604201123x4b0326a0gb73dae04bb7e9511@mail.gmail.com> Message-ID: <44489B2C.4040602@canterbury.ac.nz> tomer filiba wrote: > first of all -- i know there's a bunch of templating engines, but i > think it should be a built-in feature of the language. One fairly serious drawback to this idea is that it inhibits i18n. For security reasons it has to be implemented at compile time and only work on string literals. But then you can't use an i18n package to rewrite the string at run time. And trying to i18n the component parts of the string doesn't work because of grammatical differences between languages. -- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 10:57:59 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 20:57:59 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447E9EA.10106@v.loewis.de> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> Message-ID: <44489E97.6060909@canterbury.ac.nz> Martin v. L?wis wrote: > Some libraries (not necessarily in Python) have gone the path of > providing a "unified" API for all kinds of file stream access, > e.g. in KDE, any tool can open files over many protocols, without > the storage being mounted locally. Maybe a compromise would be to provide an extended version of open() that knew about zip files. This could be a stand-alone facility to complement zip importing, and needn't be tied to distutils or setuptools or anything like that. Code which finds things using module __file__ attributes would work exactly the way it does now as long as it used the special open(). -- Greg From guido at python.org Fri Apr 21 11:06:42 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 10:06:42 +0100 Subject: [Python-Dev] [pypy-dev] Python Software Foundation seeks mentors and students for Google Summer of Code In-Reply-To: <4e4a11f80604210012t2edc6813q975b5c2db794c904@mail.gmail.com> References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com> <4e4a11f80604210012t2edc6813q975b5c2db794c904@mail.gmail.com> Message-ID: <ca471dc20604210206p3195ca9v8c76c9f7a7111fad@mail.gmail.com> To all who are volunteering like this: great -- but please add your idea to the wiki and sign up through the Google SoC website! (code.google.com/soc/) --Guido On 4/21/06, Simon Wittber <simonwittber at gmail.com> wrote: > On 4/20/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > > This spring and summer, Google will again provide stipends > > for students (18+, undergraduate thru PhD programs) to write > > new open-source code. 
> > <snip> > > > People interested in mentoring a student though PSF are encouraged to > > contact me, Neal Norwitz at nnorwitz at gmail.com. People unknown to Guido > > or myself should find a couple of people known within the Python community > > who are willing to act as references. > > I (and my employer, Digital Ventures) are prepared to allocate a small > amount of time to mentor students who are working on any game or > multimedia related projects. As a company, this is a new area of > business development which we are investigating, and we are therefore > very keen to see Python develop further along these lines. > > > > -Simon Wittber > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri Apr 21 11:25:00 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 10:25:00 +0100 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <e2a15h$325$1@sea.gmane.org> References: <20060419051254.9b1318b4.dh@triple-media.com> <44486885.3000404@v.loewis.de> <e2a15h$325$1@sea.gmane.org> Message-ID: <ca471dc20604210225h3abe0c11w2b4fad0b6ef95bfa@mail.gmail.com> I suggest to ignore this thread. The OP seems clueless and the names are fine. Don't waste your time. On 4/21/06, Thomas Heller <theller at python.net> wrote: > Martin v. L?wis wrote: > > Dennis Heuer wrote: > >> Module names > >> like hashlib are not python-like too (too c/lowlevel-like). > > > > I agree with Greg: hashlib is a Pythonic name for a module, > > just like httplib, mhlib, xmlrpclib, cookielib, contextlib, > > difflib, ... > > > > OTOH, it might be indeed that the ctypes name need to be > > aligned with naming conventions. > > > > Personally, I *like* the ctypes name. But I'm open for suggestions, > and it might have intersting consequences if the Python core package > would be renamed to something else. > > Any suggestions? > > Thomas > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri Apr 21 11:30:02 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 10:30:02 +0100 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <44488AC9.10304@vp.pl> References: <44488AC9.10304@vp.pl> Message-ID: <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> On 4/21/06, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote: > I wish to participate in Google Summer of Code as a python developer. I > have few ideas, what would be improved and added to python. Since these > changes and add-ons would be codded in C, and added to python-core > and/or as modules,I am not sure, if you are willing to agree with these > ideas. > > First of all, I think, it would be good idea to speed up long int > implementation in python. Especially multiplying and converting > radix-2^k to radix-10^l. It might be done, using much faster algorithms > than already used, and transparently improve efficiency of multiplying > and printing/reading big integers. 
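For context, the classic sub-quadratic multiplication algorithm looks like the sketch below (pure Python, purely illustrative; a real speedup would live in C in Objects/longobject.c, and CPython's long type already uses Karatsuba for large operands, so the proposal presumably means going further, e.g. Toom-Cook or FFT-based methods, plus faster radix conversion):

    def _bit_length(n):
        # int.bit_length() only exists in later Pythons; spelled out here.
        length = 0
        while n:
            n >>= 1
            length += 1
        return length

    def karatsuba(x, y, cutoff=2 ** 70):
        # Multiply non-negative integers with the Karatsuba recurrence.
        if x < cutoff or y < cutoff:
            return x * y               # small operands: builtin '*' is fine
        m = max(_bit_length(x), _bit_length(y)) // 2
        xh, xl = x >> m, x & ((1 << m) - 1)
        yh, yl = y >> m, y & ((1 << m) - 1)
        a = karatsuba(xh, yh)
        b = karatsuba(xl, yl)
        c = karatsuba(xh + xl, yh + yl) - a - b
        return (a << (2 * m)) + (c << m) + b

    a, b = 3 ** 200, 7 ** 180
    assert karatsuba(a, b) == a * b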
That would be a good project, if someone is available to mentor it. (Tim Peters?) > Next thing I would add is multi precision floating point type to the > core and fraction type, which in some cases highly improves operations, > which would have to be done using floating point instead. > Of course, math module will need update to support multi precision > floating points, and with that, one could compute asin or any other > function provided with math with precision limited by memory and time. > It would be also good idea to add function which computes pi and exp > with unlimited precision. I would suggest doing this as a separate module first, rather than as a patch to the Python core. Can you show us some practical applications of these operations? > And last thing - It would be nice to add some number-theory functions to > math module (or new one), like prime-tests, factorizations etc. Probably better a new module. But how many people do you think need these? > Every of above improvements would be coded in pure C, and without using > external libraries, so portability of python won't be cracked, and no > new dependencies will be added. Great -- that's important! > I am waiting for your responses about my ideas, which of these you think > are good, which poor etc. Main reason I am posting my question here is > that, I am not sure, how much are you willing to change the core of the > python. > > At the end, let my apologize for my poor English, I am trying to do my best. No problem. We can understand you fine! -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Fri Apr 21 11:31:35 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 21 Apr 2006 19:31:35 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> Message-ID: <4448A677.5090905@gmail.com> Guido van Rossum wrote: > Sorry Nick, but you've still got it backwards. The name of the > decorator shouldn't indicate the type of the return value (of calling > the decorated generator) -- it should indicate how we think about the > function. Compare @staticmethod and @classmethod -- the return type > doesn't enter into it. I think of the function/generator itself as the > context manager; it returns a context. Let me have another go. . . One of the proposals when Raymond raised the issue of naming the PEP 343 protocol was to call objects with "__enter__"/"__exit__" methods "contexts". This was rejected because there were too many things (like decimal.Context) that already used that name but couldn't easily be made to fit the new definition. So we settled on calling them "context managers" instead. This wasn't changed by the later discussion that introduced the __context__ method. Instead, the new term "manageable context" (or simply "context") was introduced to mean "anything with a __context__ method". This was OK, because existing context objects like decimal.Context could easily add a context method to return an appropriate context manager. 
Notice that in *both* approved versions of PEP 343 (before and after the inclusion of the __context__ method) the name of the decorator matched the name of the kind of object returned by the decorated generator function. *THIS ALL CHANGED AT PYCON* (at least, I assume that's where it happened - it sure didn't happen on this list, and the timing is about right). During implementation, the meanings of "context" and "context manager" were swapped from the meanings in the approved PEP, leading to the current situation where decimal.Context is actually not, in fact, a context. These meanings were then the ones included in the checked in documentation for contextlib, and in PJE's recent update to PEP 343 itself. However, *despite* the meanings of the two terms being swapped, the decorator kept the same name. This means that when using a generator to create a "context manager" like decimal.Context under the revised terminology, you are forced to claim that the __context__ method is itself a context manager: class Context(object): # Actually a context manager, despite the class name @contextlib.contextmanager def __context__(self): # Actually a context, despite the decorator name newctx = self.copy() oldctx = decimal.getcontext() decimal.setcontext(newctx) try: yield newctx finally: decimal.setcontext(oldctx) I also note that the decimal module in 2.5a1 actually uses the originally approved PEP 343 terminology, calling the object returned from __context__ a ContextManager. And all of this is why I believe we need to either fix the documentation to use the terminology used in the PEP at the time of approval, or else finish the job of swapping the two terms and change the name of the decorator. Having remembered why we picked "context manager" over "context" in the first place, my preference is strongly for reverting to the original terminology. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From guido at python.org Fri Apr 21 11:32:04 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 10:32:04 +0100 Subject: [Python-Dev] bin codec + EOF In-Reply-To: <1d85506f0604210134p6da01bf3pa3ce2dcdf45775bb@mail.gmail.com> References: <1d85506f0604210134p6da01bf3pa3ce2dcdf45775bb@mail.gmail.com> Message-ID: <ca471dc20604210232g1d2645f1xe2c0a09f60b7408a@mail.gmail.com> On 4/21/06, tomer filiba <tomerfiliba at gmail.com> wrote: > yeah, i came to realize nothing helpful will ever come out from this list, > so i might as well stop trying. Thanks for your encouraging words. Now go away. (Just returning your polite banter.) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri Apr 21 11:37:00 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 10:37:00 +0100 Subject: [Python-Dev] Visual studio 2005 express now free Message-ID: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> Microsoft just announced that Visual Studio 2005 express will be free forever, including the IDE and the optimizing C++ compiler. (Not included in the "forever" clause are VS 2007 or later versions.) Does this make a difference for Python development for Windows? 
http://msdn.microsoft.com/vstudio/express/ -- --Guido van Rossum (home page: http://www.python.org/~guido/) From greg.ewing at canterbury.ac.nz Fri Apr 21 11:47:41 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 21:47:41 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> Message-ID: <4448AA3D.3050107@canterbury.ac.nz> Guido van Rossum wrote: > You > can't blame KDE for providing mechanisms that only work in the KDE > world. It's fine for Python to provide Python-specific solutions for > issues that have no cross-platform native solution. Also keep in mind that we're talking about resources used internally by the application. They don't *need* to be accessible outside the application. So I don't think the KDE argument applies here. -- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 12:10:22 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 22:10:22 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4447FC88.8040101@v.loewis.de> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <444771B5.4050504@canterbury.ac.nz> <4447FC88.8040101@v.loewis.de> Message-ID: <4448AF8E.303@canterbury.ac.nz> Martin v. L?wis wrote: > Greg Ewing wrote: > >>>The "resources" name is actually quite a common meme; >> >>I believe it goes back to the original Macintosh, > > I can believe that history. Still, I thought a resource > is something you can exhaust; Haven't you heard the term "renewable resource" ?-) In the real world, yes, most resources will eventually become exhausted, but I don't think it's a necessary part of the meaning. It's just something that you exploit, or make use of. BTW, in the game programming industry the in-vogue term at the moment seems to be "asset", which has even more inappropriate connotations. Or perhaps not, if you're a commercial entity that attaches a dollar value to all your intellectual property... > the fork should have been named "data fork" Except that's what they call the *other* fork (equivalent to the only "fork" on systems that don't have forks). > or just "second fork". But then the relevant Toolbox module would have to have been called the Second Fork Manager, which sounds like an API for use by dining philosophers. :-) FWIW, Apple seem to be deprecating the use of resource forks these days, and the Resource Manager, which is a bit sad. It was *fun* programming the Mac back when it was quirky as hell and like nothing else on the planet. 
Frustrating at times, but fun! -- Greg From mateusz.rukowicz at vp.pl Fri Apr 21 12:12:28 2006 From: mateusz.rukowicz at vp.pl (Mateusz Rukowicz) Date: Fri, 21 Apr 2006 12:12:28 +0200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> Message-ID: <4448B00C.2040901@vp.pl> Guido van Rossum wrote: >On 4/21/06, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote: > > >>Next thing I would add is multi precision floating point type to the >>core and fraction type, which in some cases highly improves operations, >>which would have to be done using floating point instead. >>Of course, math module will need update to support multi precision >>floating points, and with that, one could compute asin or any other >>function provided with math with precision limited by memory and time. >>It would be also good idea to add function which computes pi and exp >>with unlimited precision. >> >> > >I would suggest doing this as a separate module first, rather than as >a patch to the Python core. > >Can you show us some practical applications of these operations? > > > Multi precision float is mostly used by physicians and mathematicians, interpreted languages are particularly good for physics simulations, in which small error would grow so much, that results are useless. Rational numbers would be used in codes where precision is of critical-importance, and one cannot accept rounding of results. So fraction (and also multi precision floating) would mostly be used in linear-programming algorithms, like solving set of equations or some optimizing methods. >>And last thing - It would be nice to add some number-theory functions to >>math module (or new one), like prime-tests, factorizations etc. >> >> > >Probably better a new module. But how many people do you think need these? > > > Mostly cryptography would exploit fast number-theory algorithms. Adding these as a module would boost and make easier to code everything which is related to cryptography, such as ssl, rsa etc. Also hobbyist looking for huge primes etc. would appreciate that, but it is not main purpose of these ;). I understand that most of these improvements have quite limited audience, but I still think python should be friendly to them ;) Best regards, Mateusz Rukowicz. From greg.ewing at canterbury.ac.nz Fri Apr 21 12:21:55 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 22:21:55 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44480147.6080709@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <4447F629.8000500@v.loewis.de> <0EFC5F7D-43D1-4F07-B8BF-A5FB763AFD81@redivi.com> <44480147.6080709@v.loewis.de> Message-ID: <4448B243.6060904@canterbury.ac.nz> Martin v. L?wis wrote: > I can readily believe that package authors indeed see this as > a simplification, but I also see an increased burden on system > admins in return. There are two conflicting desires here. Package authors don't want to have to make M different kinds of package for M different systems. 
System administrators don't want to have to deal with N different kinds of package originating from N different software development environments. What can be done about this? Maybe some kind of babel server in the middle, which knows how to convert between all the different packaging formats. Authors submit packages in the form which is easiest for them to produce, and users request packages in the form which is easiest for them to install. The server does whatever is necessary to convert between them. Perhaps something along these lines could be incorporated into the cheeseshop? So you could ask e.g. for your favourite package in .rpm format, and if it only knew how to get it in .egg format it would feed it to a handy bdist-bot for conversion. -- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 12:39:39 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 22:39:39 +1200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060420175843.042035a8@mail.telecommunity.com> References: <444771B5.4050504@canterbury.ac.nz> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <444771B5.4050504@canterbury.ac.nz> <5.1.1.6.0.20060420175843.042035a8@mail.telecommunity.com> Message-ID: <4448B66B.7020402@canterbury.ac.nz> Phillip J. Eby wrote: > By now, however, the term is popularly used with GUI toolkits of all > kinds to mean essentially read-only data files that are required by a > program or library to function, but which are not directly part of the > code. It's just occurred to me that there's another thread to the history of this term: the X window system's use of the term "resource" to mean essentially a user-configurable setting. This is probably where the usage you're referring to comes from, since Gtk and the ilk are fairly deeply rooted in X. This is really quite a different idea, although there's some overlap because X applications are typically distributed with a "resources" file containing default values for the settings, and won't work without there being some version of this file available. However, the sense we're talking about is more like the Mac one, where resources are an integral part of the program just as much as the code is, and have every right to be kept with the code. Indeed, in a classic 68K Mac application, the code literally WAS a resource. The 68K machine code was kept in the resource fork in resources of type "CODE". The data fork of an application was usually empty. -- Greg From p.f.moore at gmail.com Fri Apr 21 12:43:33 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 21 Apr 2006 11:43:33 +0100 Subject: [Python-Dev] setuptools in 2.5. 
In-Reply-To: <444815C1.70601@colorstudy.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <79990c6b0604201549t3e947bcat90b61c3cf39b8ac3@mail.gmail.com> <444815C1.70601@colorstudy.com> Message-ID: <79990c6b0604210343p6e8fbc85v5a530a448c0ba395@mail.gmail.com> On 4/21/06, Ian Bicking <ianb at colorstudy.com> wrote: > Paul Moore wrote: > > 2. Distributors will supply .egg files rather than bdist_wininst > > installers (this is already happening). > > Really people should at least be uploading source packages in addition > to eggs; it's certainly not hard to do so. Windows user perspective: So what? If there's a C extension involved, I can't build it on at least one machine I use (so I build elsewhere and move the installer over). If external libraries are involved, I quite likely don't have .lib files, headers, etc available. I know this isn't such an issue on Unix. But the rise of bdist_wininst installers on Windows was a HUGE benefit. I remember the nightmares I used to have finding binaries for things I couldn't build myself (and I'm atypical - I have lots of tools that most people don't). Nowadays, there are more and more packages I can't find bdist_wininst installers for, again. (Turbogears and PyProtocols are the two I recall offhand). I can find eggs, but to use them I lose the package management I *like*. Please don't tell me that Windows' Add/Remove Programs is limited or incomplete - it does exactly what I want, and I have no need to change. And I don't want a mix - I want *everything* under my Python install directory to be managed by an Add/Remove Programs entry. I'm quite happy to convert eggs to bdist_wininst installers (or MSI installers) by hand. That's a mechanical task, in theory, and as I'm catering for my preferences, I see no reason to expect someone else to do that for me. But I can't build from scratch, so don't force me to. > Perhaps a distributor quick intro needs to be written for the standard > library. Something short; both distutils and setuptools documentation > are pretty long, and for someone who just has some simple Python code to > get out it's overwhelming. Agreed. Maybe I'll knock up a simple set of setup.py templates (for a pure-python module, a simple package, and a C extension, maybe) and put them on the Wiki. I'll add it to my TODO list. Paul. From dh at triple-media.com Fri Apr 21 06:07:50 2006 From: dh at triple-media.com (Dennis Heuer) Date: Fri, 21 Apr 2006 06:07:50 +0200 Subject: [Python-Dev] Module names in Python: was Re: python 2.5alpha and naming schemes In-Reply-To: <20060421010049.GZ10641@zot.electricrain.com> References: <20060419051254.9b1318b4.dh@triple-media.com> <20060421010049.GZ10641@zot.electricrain.com> Message-ID: <20060421060750.9397fff3.dh@triple-media.com> There's also a difflib though the module doesn't look like a wrapper for diff. The math module is not called mathlib though. Python is quite inconsistent here. On the one hand it tries to use human-understandable terms and on the other hand it takes the easy approach, which means it falls back to common ways of naming in the C world. However, the module hashlib is not really a 'hashing library'. The term 'hash' is quite abstract and not really appropriate. The term 'lib' is quite displaced in the python world. 'hashmod' would be more appropriate but also very bad. The hashing algorithms can be used for very different things. 
They are mainly just converters. Hashing is only a buzzword-like synonym for 'encoding without the chance for decoding'. Possibly there should be an extra section (main module) called 'converters', which assorts all the en/decoders like punycode, zip, or md5 in submodules. The 'hashlib' submodule could be called 'cryptographic', for example: converters.cryptographic.sha This is possibly too long but one should keep this in the back of one's mind because python could profit from some renaming here and there to get closer to its goal to be more human-understandable--and well-sorted. 'math.cryptography' or at least 'math.hash' would work as well. Dennis On Thu, 20 Apr 2006 18:00:49 -0700 "Gregory P. Smith" <greg at electricrain.com> wrote: > > Module names like hashlib are not python-like too (too c/lowlevel-like). > > what is python-like? > > hashlib was chosen because it is a library of hash functions and > hash() is already taken as a builtin function (otherwise i'd leave off > the lib). > > -g > > From fraca7.python at free.fr Fri Apr 21 09:41:30 2006 From: fraca7.python at free.fr (=?ISO-8859-1?Q?J=E9r=F4me_Laheurte?=) Date: Fri, 21 Apr 2006 09:41:30 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <4447EE27.7060601@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <4443526C.8040907@v.loewis.de> <4443E597.9000407@v.loewis.de> <C4600955-7A13-41D2-BC17-9049A42202DC@free.fr> <4447EE27.7060601@v.loewis.de> Message-ID: <64486FDF-E9E7-4EAB-9FB5-D5343C018886@free.fr> Le 20 avr. 06 ? 22:25, Martin v. L?wis a ?crit : > J?r?me Laheurte wrote: >> Sorry I'm late, but something like "os.popen('taskkill.exe /F /IM >> python_d.exe')" would have worked; we use this at work. > > Thanks, I didn't know about it. Is it available on Windows 2000, > too? (because the system in question is Windows 2000, and I see > it on a "What's new in Windows XP" page) Ah, no, it's only available in XP. There are some alternatives though: http://www.robvanderwoude.com/index.html From p.f.moore at gmail.com Fri Apr 21 12:54:37 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 21 Apr 2006 11:54:37 +0100 Subject: [Python-Dev] Distutils thoughts In-Reply-To: <44489932.20605@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> <44489932.20605@canterbury.ac.nz> Message-ID: <79990c6b0604210354g1e832e7hfcd5418e2a4f4f1@mail.gmail.com> On 4/21/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > While we're on the subject of distutils revision, here > are a few things I've encountered about distutils which > seem less than desirable. > > * There doesn't seem to be a way of supplying options > on the command line for anything except the top-level > command. Sometimes, e.g. I want to do an "install" but > override some options for the "build_ext" that gets > invoked by the install. But distutils won't let me, > because those options are not recognised by the > "install" command itself. I do things like python setup.py build --compiler=mingw32 bdist_wininst which seem to work for me. Is that any help? Paul. From guido at python.org Fri Apr 21 12:59:26 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 11:59:26 +0100 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. 
In-Reply-To: <4448B00C.2040901@vp.pl> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <4448B00C.2040901@vp.pl> Message-ID: <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> On 4/21/06, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote: > Guido van Rossum wrote: > >On 4/21/06, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote: > >>Next thing I would add is multi precision floating point type to the > >>core and fraction type, which in some cases highly improves operations, > >>which would have to be done using floating point instead. > >>Of course, math module will need update to support multi precision > >>floating points, and with that, one could compute asin or any other > >>function provided with math with precision limited by memory and time. > >>It would be also good idea to add function which computes pi and exp > >>with unlimited precision. > > > >I would suggest doing this as a separate module first, rather than as > >a patch to the Python core. > > > >Can you show us some practical applications of these operations? > > > Multi precision float is mostly used by physicians and mathematicians, (Aside: you probably mean physicist, someone who practices physics. A physician is a doctor; don't ask me why. :-) > interpreted languages are particularly good for physics simulations, in > which small error would grow so much, that results are useless. We already have decimal floating point which can be configured to use however many digits of precision you want. Would this be sufficient? If you want more performance, perhaps you could tackle the very useful project of translating decimal.py into C? > Rational numbers would be used in codes where precision is of > critical-importance, and one cannot accept rounding of results. So > fraction (and also multi precision floating) would mostly be used in > linear-programming algorithms, like solving set of equations or some > optimizing methods. I'm not sure I see the value of rational numbers implemeted in C; they're easy to write in Python and all the time goes into division of two ints which is already implemented in C. > >>And last thing - It would be nice to add some number-theory functions to > >>math module (or new one), like prime-tests, factorizations etc. > > > >Probably better a new module. But how many people do you think need these? > > > Mostly cryptography would exploit fast number-theory algorithms. Adding > these as a module would boost and make easier to code everything which > is related to cryptography, such as ssl, rsa etc. Also hobbyist looking > for huge primes etc. would appreciate that, but it is not main purpose > of these ;). Have you looked at the existing wrappers around OpenSSL, such as pyopenssl and m2crypto? ISTM that these provide most of the needed algorithms, already coded in open source C. > I understand that most of these improvements have quite limited > audience, but I still think python should be friendly to them ;) Sure. The question is, if there are more student applications than Google wants to fund, which projects will be selected? I would personally vote for the projects that potentially help out the largest number of Python users. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Fri Apr 21 13:09:00 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 12:09:00 +0100 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4448A677.5090905@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> Message-ID: <ca471dc20604210409j4d16e1ady81fa64400ab5b23b@mail.gmail.com> OK, now I'm confused. I hope that Phillip understands this and will know what to do. On 4/21/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > Guido van Rossum wrote: > > Sorry Nick, but you've still got it backwards. The name of the > > decorator shouldn't indicate the type of the return value (of calling > > the decorated generator) -- it should indicate how we think about the > > function. Compare @staticmethod and @classmethod -- the return type > > doesn't enter into it. I think of the function/generator itself as the > > context manager; it returns a context. > > Let me have another go. . . > > One of the proposals when Raymond raised the issue of naming the PEP 343 > protocol was to call objects with "__enter__"/"__exit__" methods "contexts". > This was rejected because there were too many things (like decimal.Context) > that already used that name but couldn't easily be made to fit the new > definition. So we settled on calling them "context managers" instead. > > This wasn't changed by the later discussion that introduced the __context__ > method. Instead, the new term "manageable context" (or simply "context") was > introduced to mean "anything with a __context__ method". This was OK, because > existing context objects like decimal.Context could easily add a context > method to return an appropriate context manager. > > Notice that in *both* approved versions of PEP 343 (before and after the > inclusion of the __context__ method) the name of the decorator matched the > name of the kind of object returned by the decorated generator function. > > *THIS ALL CHANGED AT PYCON* (at least, I assume that's where it happened - it > sure didn't happen on this list, and the timing is about right). > > During implementation, the meanings of "context" and "context manager" were > swapped from the meanings in the approved PEP, leading to the current > situation where decimal.Context is actually not, in fact, a context. These > meanings were then the ones included in the checked in documentation for > contextlib, and in PJE's recent update to PEP 343 itself. > > However, *despite* the meanings of the two terms being swapped, the decorator > kept the same name. This means that when using a generator to create a > "context manager" like decimal.Context under the revised terminology, you are > forced to claim that the __context__ method is itself a context manager: > > class Context(object): > # Actually a context manager, despite the class name > > @contextlib.contextmanager > def __context__(self): > # Actually a context, despite the decorator name > newctx = self.copy() > oldctx = decimal.getcontext() > decimal.setcontext(newctx) > try: > yield newctx > finally: > decimal.setcontext(oldctx) > > I also note that the decimal module in 2.5a1 actually uses the originally > approved PEP 343 terminology, calling the object returned from __context__ a > ContextManager. 
> > And all of this is why I believe we need to either fix the documentation to > use the terminology used in the PEP at the time of approval, or else finish > the job of swapping the two terms and change the name of the decorator. Having > remembered why we picked "context manager" over "context" in the first place, > my preference is strongly for reverting to the original terminology. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > --------------------------------------------------------------- > http://www.boredomandlaziness.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at iinet.net.au Fri Apr 21 13:10:44 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Fri, 21 Apr 2006 21:10:44 +1000 Subject: [Python-Dev] Removing Python 2.4 -m switch helpers from import.c Message-ID: <4448BDB4.9010708@iinet.net.au> With the -m switch switching to using the runpy module in 2.5, the two private helper functions exposed by import.c (_PyImport_FindModule & _PyImport_IsScript) aren't needed anymore. Should I remove them, since they're essentially dead code now? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From greg.ewing at canterbury.ac.nz Fri Apr 21 13:45:50 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 23:45:50 +1200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <5.1.1.6.0.20060420181648.03f2b3d8@mail.telecommunity.com> References: <4447E3F0.3080708@colorstudy.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <5.1.1.6.0.20060420181648.03f2b3d8@mail.telecommunity.com> Message-ID: <4448C5EE.9050302@canterbury.ac.nz> Phillip J. Eby wrote: > You seem to believe that there are other > things more important than making things Just Work for this audience. While it's clearly a good thing when something "just works", I don't think that this should be the only goal. Just as important to my mind -- probably even more important -- is what the experience is like when things *don't* work. Because in such a varied world, you're never going to make everything "just work" for everyone all the time. When I type "make install" and something goes wrong, I find that there are two different kinds of experience I typically get: (1) I look at the Makefile, and find that it's written in a straightforward style. I can see what it's trying to do, find the problem, fix it, "make install" again and everything is all right. (2) I look at the Makefile and find that it's full of macros which get expanded and scripts that generate more files that get macro expanded again and wrapped up in duct tape and eventually somehow build something. I haven't a clue how it's supposed to work and don't have the time or inclination to figure it out. I give up. It sounds like (2) is the sort of experience that some people have been having with distutils. If that's true, then in the long run you are not going to improve matters by wrapping distutils up in yet another layer of magic, indirection and duct tape. You might succeed in making a certain number of things work that didn't work before. But when something breaks, it won't be any easier to fix than the original distutils, because it contains distutils as a major component. 
Rather than just trying to make a few more things "just work", we should be trying hard to improve the "just doesn't work" case. To do that, we need *LESS* magic, not more. We need to do things in as straightforward, obvious and transparent a way as possible -- so that when it goes wrong, you can see why it is going wrong and how to make it go right. -- Greg From p.f.moore at gmail.com Fri Apr 21 13:50:38 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 21 Apr 2006 12:50:38 +0100 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e28tc1$62n$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> Message-ID: <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> On 4/20/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > - they're currently discussing whether to use stricter version requirements > for individual components, to increase the chance that people end up using > a combination that someone else has actually tested. That makes me quite nervous. While I understand that there are support issues, I don't like the idea of a package that refuses to run, simply because I have version 2.1 of something it uses, and the package insists on 2.0 - without any concrete reason beyond "we haven't tested that version yet". We have to deal with this sort of thing a lot at work (in a non-Python context), and it's a support nightmare of its own. And no, I don't want to install the 2 versions side-by-side. Ian Bicking complained recently about the "uncertainty" of multiple directories on sys.path meaning you can't be sure which version of a module you get. Well, having 2 versions of a module installed and knowing that which one is in use depends on require calls which get issued at runtime worries me far more. [Please note, I have no problem with people who are happy to work like this. It's just that I don't, and I want to make sure that the new ways of working promoted by setuptools don't ignore my needs. It's even fine to consider my needs and decide to not support them, but let's note the fact that this was done... :-)] Paul. From greg.ewing at canterbury.ac.nz Fri Apr 21 13:55:46 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 21 Apr 2006 23:55:46 +1200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <5.1.1.6.0.20060420232014.01e458a8@mail.telecommunity.com> References: <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> <444802EC.2000302@egenix.com> <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <444802EC.2000302@egenix.com> <5.1.1.6.0.20060420193805.03f816d0@mail.telecommunity.com> <5.1.1.6.0.20060420232014.01e458a8@mail.telecommunity.com> Message-ID: <4448C842.2010702@canterbury.ac.nz> Phillip J. Eby wrote: > So, this is a nice > example of how complex it can be to extend the distutils in ways that don't > break random popular packages. It's probably also an example of what happens when something doesn't have a well-documented extension interface -- nobody can tell what's an implementation detail and what isn't. 
-- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 14:28:34 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 00:28:34 +1200 Subject: [Python-Dev] Distutils thoughts In-Reply-To: <79990c6b0604210354g1e832e7hfcd5418e2a4f4f1@mail.gmail.com> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> <44489932.20605@canterbury.ac.nz> <79990c6b0604210354g1e832e7hfcd5418e2a4f4f1@mail.gmail.com> Message-ID: <4448CFF2.4040800@canterbury.ac.nz> Paul Moore wrote: > I do things like > > python setup.py build --compiler=mingw32 bdist_wininst > > which seem to work for me. Is that any help? Possibly. I'll have to try it next time I have the problem and see. BTW, does that do anything different from python setup.py build --compiler=mingw32 python setup.py bdist_wininst ? If so, that's rather unintuitive and could do with documenting more clearly. -- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 14:31:48 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 00:31:48 +1200 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4448A677.5090905@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> Message-ID: <4448D0B4.9010206@canterbury.ac.nz> Nick Coghlan wrote: > During implementation, the meanings of "context" and "context manager" were > swapped from the meanings in the approved PEP, leading to the current > situation where decimal.Context is actually not, in fact, a context. That's rather disappointing. I *liked* the way that decimal.Context was a context. Was there a conscious choice to swap the terms, or did it happen by accident? -- Greg From greg.ewing at canterbury.ac.nz Fri Apr 21 14:40:19 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 00:40:19 +1200 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> Message-ID: <4448D2B3.7030109@canterbury.ac.nz> Paul Moore wrote: > Well, having 2 versions of a module installed and > knowing that which one is in use depends on require calls which get > issued at runtime worries me far more. There's also the problem of having module A which requires version 2.0 or earlier of Z, and B which requires 2.1 or later of Z, and you want to use C which requires both A and B... 
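(To make that failure mode concrete, a hypothetical sketch using pkg_resources as shipped with setuptools -- the distribution names A, B and Z and their version requirements are invented:)

    import pkg_resources

    pkg_resources.require("A")   # suppose A was built with install_requires=["Z<=2.0"]
    pkg_resources.require("B")   # suppose B was built with install_requires=["Z>=2.1"]
    # The second call raises pkg_resources.VersionConflict: only one version
    # of Z can be activated on sys.path in a given process, so a package C
    # that depends on both A and B cannot be satisfied at all.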
-- Greg From theller at python.net Fri Apr 21 14:40:05 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 14:40:05 +0200 Subject: [Python-Dev] Distutils thoughts In-Reply-To: <4448CFF2.4040800@canterbury.ac.nz> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> <44489932.20605@canterbury.ac.nz> <79990c6b0604210354g1e832e7hfcd5418e2a4f4f1@mail.gmail.com> <4448CFF2.4040800@canterbury.ac.nz> Message-ID: <e2ajr5$uvt$1@sea.gmane.org> Greg Ewing wrote: > Paul Moore wrote: > >> I do things like >> >> python setup.py build --compiler=mingw32 bdist_wininst >> >> which seem to work for me. Is that any help? > > Possibly. I'll have to try it next time I have the > problem and see. > > BTW, does that do anything different from > > python setup.py build --compiler=mingw32 > python setup.py bdist_wininst > > ? If so, that's rather unintuitive and could do > with documenting more clearly. In principle, it does the same. In practice, it doesn't because 'bdist_wininst' will instantiate a compiler (by default MSVC) to check whether the extensions need to be build (built?) or are up to date. Which will fail unless MSVC is installed. The best solution is to configure the mingw32 compiler in the distutils configuration file. Thomas From ncoghlan at gmail.com Fri Apr 21 14:59:10 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 21 Apr 2006 22:59:10 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4448D0B4.9010206@canterbury.ac.nz> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <4448D0B4.9010206@canterbury.ac.nz> Message-ID: <4448D71E.3050601@gmail.com> Greg Ewing wrote: > Nick Coghlan wrote: > >> During implementation, the meanings of "context" and "context manager" were >> swapped from the meanings in the approved PEP, leading to the current >> situation where decimal.Context is actually not, in fact, a context. > > That's rather disappointing. I *liked* the way that > decimal.Context was a context. Was there a conscious > choice to swap the terms, or did it happen by accident? That's what I'm currently trying to find out - whether or not this was a deliberate decision made at PyCon. Conference sprints are great for getting things done, but they do occasionally lead to decisions getting made without being properly recorded :) The terminology in the current docs is more natural in some ways than what the PEP settled on (mainly due to the __context__ method), so I'm wondering if the downside that lead us to pick the slightly more awkward terminology may have been forgotten at the time of implementation. Unfortunately this kind of discussion can take days via email*, even though it would probably only take ten minutes or so in person - not getting any immediate feedback when your point of view isn't being understood really slows things down. 
I'm just glad AMK noticed the discrepancy - I completely missed it when I read the contextlib docs (I suspect my brain was being 'helpful' and automatically filled in what I expected to see rather than what was actually there). Cheers, Nick. * s/can take/already has taken/ ;) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From andymac at bullseye.apana.org.au Fri Apr 21 13:28:02 2006 From: andymac at bullseye.apana.org.au (Andrew MacIntyre) Date: Fri, 21 Apr 2006 22:28:02 +1100 Subject: [Python-Dev] patch #1454481 - runtime tunable thread stack size Message-ID: <4448C1C2.9010407@bullseye.apana.org.au> http://www.python.org/sf/1454481 I would like to see this make it in to 2.5. To that end I was hoping to elicit any review interest beyond Martin and Hye-Shik, both of whom I thank for their feedback. As I can't readily test on Windows (in particular) or Linux I would appreciate some kind soul(s) actually testing to make sure that the patch doesn't break builds on those platforms. Thanks, Andrew. ------------------------------------------------------------------------- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac at pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From greg.ewing at canterbury.ac.nz Fri Apr 21 15:16:52 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 01:16:52 +1200 Subject: [Python-Dev] Distutils thoughts In-Reply-To: <e2ajr5$uvt$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <4445D80E.8050008@canterbury.ac.nz> <4446776E.5090702@v.loewis.de> <200604201039.29138.anthony@interlink.com.au> <444721F3.6070704@v.loewis.de> <20060420133343.GA2111@rogue.amk.ca> <44489932.20605@canterbury.ac.nz> <79990c6b0604210354g1e832e7hfcd5418e2a4f4f1@mail.gmail.com> <4448CFF2.4040800@canterbury.ac.nz> <e2ajr5$uvt$1@sea.gmane.org> Message-ID: <4448DB44.3050400@canterbury.ac.nz> Thomas Heller wrote: > The best solution is to configure the mingw32 compiler in the distutils > configuration file. That's the same conclusion I came to. But it's unintuitive that you can't also do the same thing using command line options, or if you can, it's not obvious how to do it. -- Greg From aahz at pythoncraft.com Fri Apr 21 16:46:35 2006 From: aahz at pythoncraft.com (Aahz) Date: Fri, 21 Apr 2006 07:46:35 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <44488AC9.10304@vp.pl> References: <44488AC9.10304@vp.pl> Message-ID: <20060421144635.GB16802@panix.com> On Fri, Apr 21, 2006, Mateusz Rukowicz wrote: > > Next thing I would add is multi precision floating point type to > the core and fraction type, which in some cases highly improves > operations, which would have to be done using floating point instead. > Of course, math module will need update to support multi precision > floating points, and with that, one could compute asin or any other > function provided with math with precision limited by memory and > time. It would be also good idea to add function which computes pi > and exp with unlimited precision. And last thing - It would be nice > to add some number-theory functions to math module (or new one), like > prime-tests, factorizations etc. 
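(Purely as an illustration of the kind of number-theory helper being proposed -- not code from any existing module -- a compact Miller-Rabin probable-prime test:)

    import random

    def is_probable_prime(n, rounds=20):
        # Miller-Rabin probabilistic primality test.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # definitely composite
        return True                 # prime with overwhelming probability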
To echo and amplify what Guido said: an excellent project would be to rewrite the decimal module in C. Another option would be to pick up and enhance either of the GMP wrappers: http://gmpy.sourceforge.net/ http://www.egenix.com/files/python/mxNumber.html That would also deal with your suggestion of rational numbers. Working with long ints would also be excellent, but I'd hesitate to tackle that unless Tim Peters volunteers to help (because there may be special implementation constraints). -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From amk at amk.ca Fri Apr 21 16:51:21 2006 From: amk at amk.ca (A.M. Kuchling) Date: Fri, 21 Apr 2006 10:51:21 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4448A677.5090905@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> Message-ID: <20060421145121.GA10419@localhost.localdomain> On Fri, Apr 21, 2006 at 07:31:35PM +1000, Nick Coghlan wrote: > fit the new definition. So we settled on calling them "context managers" > instead. ... > method. Instead, the new term "manageable context" (or simply "context") > was introduced to mean "anything with a __context__ method". This was OK, Meaning that 'manageable context' objects create and destroy 'context managers'... My view is still that 'context manager' is a terrible name when used alongside objects called 'contexts': the object doesn't manage anything, and it certainly doesn't manage contexts -- in fact it's created by 'context' objects. Perhaps we need to do some usability tests. Go to a local user group, explain the 'with' statement and the necessary objects using __foo__ instead of __context__, provide three or four pairs of names, and then ask the audience which set of names seems most sensible. For the What's New, I'm now beginning to think the text should say 'objects that have a __context__() method', and then refer to either contexts or context managers (whichever way the decision goes) for the objects with enter/exit, to avoid this confusion. --amk From aleaxit at gmail.com Fri Apr 21 16:57:51 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Fri, 21 Apr 2006 07:57:51 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <20060421144635.GB16802@panix.com> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> Message-ID: <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> On Apr 21, 2006, at 7:46 AM, Aahz wrote: > On Fri, Apr 21, 2006, Mateusz Rukowicz wrote: >> >> Next thing I would add is multi precision floating point type to >> the core and fraction type, which in some cases highly improves >> operations, which would have to be done using floating point instead. >> Of course, math module will need update to support multi precision >> floating points, and with that, one could compute asin or any other >> function provided with math with precision limited by memory and >> time. 
It would be also good idea to add function which computes pi >> and exp with unlimited precision. And last thing - It would be nice >> to add some number-theory functions to math module (or new one), like >> prime-tests, factorizations etc. > > To echo and amplify what Guido said: an excellent project would be to > rewrite the decimal module in C. Another option would be to pick > up and > enhance either of the GMP wrappers: > > http://gmpy.sourceforge.net/ > http://www.egenix.com/files/python/mxNumber.html > > That would also deal with your suggestion of rational numbers. GMP is covered by LGPL, so must any such derivative work (presumably ruling it out for embedding in Python's core itself). That being said, as gmpy's author, I'd be enthusiastically happy to mentor anybody who wants to work on gmpy or other multiprecision arithmetic extension for Python. Alex From martin at v.loewis.de Fri Apr 21 18:18:58 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 18:18:58 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <9EC47CB3-A186-4E13-BB5E-F0527A4A3D2C@mac.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <1225473E-39CA-4840-AF61-DEA403D734F8@mac.com> <4447F852.8070502@v.loewis.de> <9EC47CB3-A186-4E13-BB5E-F0527A4A3D2C@mac.com> Message-ID: <444905F2.5040209@v.loewis.de> Ronald Oussoren wrote: >> That doesn't require Eggs to be single-file zipfiles; deleting a >> directory would work just as will (and I believe it will work with >> ez_install, too). > > Egg directories (which are basically just the same as packages using > .pth files) also work for this and that's what I usually install. > Setuptools can work with python extension inside .zip files, but I'm > not entirely comfortable with that. It's primarily the .egg *files* that I dislike. I'm can accept the .egg directories. Regards, Martin From martin at v.loewis.de Fri Apr 21 18:20:02 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 18:20:02 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <64486FDF-E9E7-4EAB-9FB5-D5343C018886@free.fr> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <4443526C.8040907@v.loewis.de> <4443E597.9000407@v.loewis.de> <C4600955-7A13-41D2-BC17-9049A42202DC@free.fr> <4447EE27.7060601@v.loewis.de> <64486FDF-E9E7-4EAB-9FB5-D5343C018886@free.fr> Message-ID: <44490632.70009@v.loewis.de> J?r?me Laheurte wrote: > Ah, no, it's only available in XP. There are some alternatives though: > > http://www.robvanderwoude.com/index.html Sure. Writing my own one wasn't that difficult, in the end, either (except that I overlooked that the API I used first exists in XP and later only). 
regards, Martin From thomas at python.org Fri Apr 21 18:26:04 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 21 Apr 2006 18:26:04 +0200 Subject: [Python-Dev] [Python-3000-checkins] r45617 - in python/branches/p3yk/Lib/plat-mac/lib-scriptpackages: CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py Finder/Finder_Basics.py Finder Message-ID: <9e804ac0604210926i343308a5r67cfc277625ebe6e@mail.gmail.com> On 4/21/06, guido.van.rossum <python-3000-checkins at python.org> wrote: > The hardest part was fixing two mutual recursive imports; > somehow changing "import foo" into "from . import foo" where > foo and bar import each other AND both are imported from __init__.py > caused things to break. Bah. Hm, this is possibly a flaw in the explicit relative import mechanism. Normal circular imports work because a module object is stuffed into sys.modules before any code for the module is executed, so the next 'import' of that module just finds the half-loaded module object. I guess 'from . import name' really looks at the package contents, though, and there, the module isn't stored until it's done loading. I'm not sure why it raises a 'cannot import name' exception instead of recursing into a spiral of death, but I guess that's a good thing. :) Should this be fixed, or is it an esoteric enough case that I shouldn't bother? -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060421/3e3f1a1a/attachment.htm From mateusz.rukowicz at vp.pl Fri Apr 21 18:35:26 2006 From: mateusz.rukowicz at vp.pl (Mateusz Rukowicz) Date: Fri, 21 Apr 2006 18:35:26 +0200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> Message-ID: <444909CE.4030901@vp.pl> Alex Martelli wrote: > > On Apr 21, 2006, at 7:46 AM, Aahz wrote: > >> On Fri, Apr 21, 2006, Mateusz Rukowicz wrote: >> >>> >>> Next thing I would add is multi precision floating point type to >>> the core and fraction type, which in some cases highly improves >>> operations, which would have to be done using floating point instead. >>> Of course, math module will need update to support multi precision >>> floating points, and with that, one could compute asin or any other >>> function provided with math with precision limited by memory and >>> time. It would be also good idea to add function which computes pi >>> and exp with unlimited precision. And last thing - It would be nice >>> to add some number-theory functions to math module (or new one), like >>> prime-tests, factorizations etc. >> >> >> To echo and amplify what Guido said: an excellent project would be to >> rewrite the decimal module in C. Another option would be to pick up >> and >> enhance either of the GMP wrappers: >> >> http://gmpy.sourceforge.net/ >> http://www.egenix.com/files/python/mxNumber.html >> >> That would also deal with your suggestion of rational numbers. > > > GMP is covered by LGPL, so must any such derivative work (presumably > ruling it out for embedding in Python's core itself). 
That being > said, as gmpy's author, I'd be enthusiastically happy to mentor > anybody who wants to work on gmpy or other multiprecision arithmetic > extension for Python. > Rewriting decimal module in C is not far from my initial idea, and I am quite happy about that, you think it's good idea. If I was doing it, I would write all needed things myself - I am quite experienced in coding multi precision computation algorithms (and all kind of algorithms at all). >Working with long ints would also be excellent, but I'd hesitate to >tackle that unless Tim Peters volunteers to help (because there may be >special implementation constraints). I am quite confident, that I would be able to make those changes transparently, that is - representation of long int wouldn't change. And as far as I saw in the code it's quite straightforward (actually, it would be strange otherwise). But of course it's up to you. Anyway, I hope it will be possible to add those changes. I guess that's quite enough for SoC project, but I hope, it will be possible to add some of my initial ideas anyway. Best regards, Mateusz Rukowicz. From guido at python.org Fri Apr 21 18:39:37 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 17:39:37 +0100 Subject: [Python-Dev] [Python-3000-checkins] r45617 - in python/branches/p3yk/Lib/plat-mac/lib-scriptpackages: CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py Finder/Finder_Bas Message-ID: <ca471dc20604210939p466b0acesba7c6f306db94f@mail.gmail.com> On 4/21/06, Thomas Wouters <thomas at python.org> wrote: > > On 4/21/06, guido.van.rossum <python-3000-checkins at python.org> wrote: > > The hardest part was fixing two mutual recursive imports; > > somehow changing "import foo" into "from . import foo" where > > foo and bar import each other AND both are imported from __init__.py > caused things to break. Bah. > > Hm, this is possibly a flaw in the explicit relative import mechanism. > Normal circular imports work because a module object is stuffed into > sys.modules before any code for the module is executed, so the next 'import' > of that module just finds the half-loaded module object. I guess 'from . > import name' really looks at the package contents, though, and there, the > module isn't stored until it's done loading. I'm not sure why it raises a > 'cannot import name' exception instead of recursing into a spiral of death, > but I guess that's a good thing. :) That's what I thought. But I believe I reproduced it in 2.4 as well. I'm not 100% sure but I've got a feeling that the situation is actually due to these imports happening *while __init__ is loading*. When P is a package and M is a module, "from P import M" seems to first check that sys.modules["P.M"] exists, but then it somehow asks for P.M instead for sys.modules["P.M"]. At least that's how I cam up with the fix (which sets P.M from inside M) and the fix made the problem go away. > Should this be fixed, or is it an esoteric enough case that I shouldn't > bother? I do think it ought to be fixed, and in 2.5 as well. 
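(A hypothetical minimal layout -- package and module names invented -- showing the failure being discussed:)

    # P/__init__.py
    from . import bar

    # P/bar.py
    from . import baz

    # P/baz.py
    from . import bar
    # While the package is still initialising, P.bar already sits in
    # sys.modules but has not yet been bound as an attribute of the package
    # object P, so this last statement fails with
    #     ImportError: cannot import name bar
    # Binding the attribute from inside the submodule (the fix described
    # above) makes the name visible early and the error goes away.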
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Fri Apr 21 18:45:19 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 12:45:19 -0400 Subject: [Python-Dev] [Python-3000-checkins] r45617 - in python/branches/p3yk/Lib/plat-mac/lib-scriptpackages: CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py Finder/Finder_Basics.py Finder In-Reply-To: <9e804ac0604210926i343308a5r67cfc277625ebe6e@mail.gmail.com > Message-ID: <5.1.1.6.0.20060421124214.036e5008@mail.telecommunity.com> At 06:26 PM 4/21/2006 +0200, Thomas Wouters wrote: >On 4/21/06, guido.van.rossum ><python-3000-checkins at python.org> >wrote: >>The hardest part was fixing two mutual recursive imports; >>somehow changing "import foo" into "from . import foo" where >>foo and bar import each other AND both are imported from __init__.py >>caused things to break. Bah. > >Hm, this is possibly a flaw in the explicit relative import mechanism. Actually, this sounds rather like a problem that happens in Python 2.4 as well. If you have a package 'foo' containing modules 'bar' and 'baz', and foo/__init__.py imports both bar and baz, then 'import foo.bar' will fail inside of baz. You have to use 'from foo import bar' or it doesn't work.
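For concreteness, here is a minimal sketch of the layout being described (package and module names are invented, and the exact exception depends on which spelling and Python version you use):

    P/__init__.py:
        from P import A        # A and B import each other, and both are
        from P import B        # imported while __init__.py is still running

    P/A.py:
        from P import B        # starts loading B before A has finished

    P/B.py:
        from P import A        # sys.modules["P.A"] already exists (half-loaded),
                               # but the attribute P.A is not yet bound on the
                               # package, so this raises "cannot import name A"

    # The fix described above ("sets P.M from inside M") amounts to each
    # submodule binding itself onto its package as soon as it starts executing,
    # e.g. as the first lines of P/A.py:
    import sys
    sys.modules['P'].A = sys.modules[__name__]    # "from P import A" now finds it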
We run into this a lot in Chandler, which tries to expose package APIs from the package __init__ modules. From martin at v.loewis.de Fri Apr 21 18:43:51 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 18:43:51 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <4448AA3D.3050107@canterbury.ac.nz> References: <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> <4448AA3D.3050107@canterbury.ac.nz> Message-ID: <44490BC7.2090908@v.loewis.de> Greg Ewing wrote: > Guido van Rossum wrote: >> You >> can't blame KDE for providing mechanisms that only work in the KDE >> world. It's fine for Python to provide Python-specific solutions for >> issues that have no cross-platform native solution. > > Also keep in mind that we're talking about resources > used internally by the application. They don't *need* > to be accessible outside the application. So I don't > think the KDE argument applies here. They might need to be available outside "Python". Two use cases: 1. The application embeds an sqlite database. Even though it knows how to get at the data, it can't use it, because the sqlite3 library won't accept .../foo-1.0.egg/resources/the_data (or some such) as a database name, if foo-1.0.egg is a ZIP file. If the installed application was a set of files, that would work. 2. The application embeds an SGML DTD (say, HTML). In order to perform validation, it invokes nsgmls on the host system. It cannot pass the SGML catalog to nsgmls (using the -C option) since you can't refer to DTD files inside ZIP files inside an SGML catalog. If this was a bunch of installed files, it would work. 3. The application includes an SSL certificate. It can't pass it to socket.ssl, since OpenSSL expects a host system file name, not a fragment of a zip file. If this was installed as files, it would work. This is precisely what happens in KDE: you have konqueror happily browse an SMB directory, double-click on a .xls file, OpenOffice starts and can't access the file it was started with. It doesn't matter to the user that there is "a good reason" for that. Regards, Martin From jimjjewett at gmail.com Fri Apr 21 18:51:30 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 21 Apr 2006 12:51:30 -0400 Subject: [Python-Dev] setuptools in 2.5. Message-ID: <fb6fbf560604210951m18518b5el14b3e74c4f76b9a3@mail.gmail.com> M.-A. Lemburg wrote: >There's really nothing wrong with the standard distutils >two step process: >1. download and unzip the source file On windows, the closest thing to a standard unzip is winzip. I have recently found several zip files aimed at windows users which winzip could not open, though python's tarfile module could. So the unzip isn't always trivial. >2. run "python setup.py install" setup.py isn't a unique enough name to convince me that things are easily reversed or repeated. The distutils are complex enough that my concern doesn't go away. (Whether setuptools will be clear enough or documented enough to actually solve this, I don't yet know. 
I have high hopes.) Phillip J. Eby wrote: > Such packages may have customized their installation process > by extending the distutils, *without* overriding get_outputs(). Since few > people actually use the --record option for anything important, nobody > notices when it breaks. I just searched through the (2.4.2) documentation, and could find no reference to either get_outputs or --record. I also looked through the source, and couldn't find any reference to --record. (I later found one in bdist_rpm, but "build a binary for Red Hat Linux" isn't a natural thing for anyone not on Linux to even try.) When I did a grep (which I wouldn't do unless I already knew I needed to worry about it), I finally found a few references in install.py which boiled down to "nothing happens, but you could get a list of files if you went through enough contortions". This looks more like a debugging aid than something I would have to worry about. > As for --install-data, just put your data in the packages and use Python > 2.4's ability to install package data, or one of the pre-existing distutils > extensions that beat install_data over the head to make it install the data > with the packages. Use of a third-party extension (that isn't mentioned in the docs) is hardly intuitive. If it is really required, that is far more intrusive than the stdlib changes setuptools makes. (Would it only intrude on the packager, at least? But then how could he or she be confident that the package would work on someone else's box?) -jJ From mateusz.rukowicz at vp.pl Fri Apr 21 18:52:13 2006 From: mateusz.rukowicz at vp.pl (Mateusz Rukowicz) Date: Fri, 21 Apr 2006 18:52:13 +0200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <4448B00C.2040901@vp.pl> <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> Message-ID: <44490DBD.3050703@vp.pl> Guido van Rossum wrote: >(Aside: you probably mean physicist, someone who practices physics. A >physician is a doctor; don't ask me why. :-) > > > ;) I'll remember ;) >>interpreted languages are particularly good for physics simulations, in >>which small error would grow so much, that results are useless. >> >> > >We already have decimal floating point which can be configured to use >however many digits of precision you want. Would this be sufficient? >If you want more performance, perhaps you could tackle the very useful >project of translating decimal.py into C? > > > Yes, it seems like better idea - already written software would benefit that transparently. I think I could develop 'margin' ideas later. >I'm not sure I see the value of rational numbers implemeted in C; >they're easy to write in Python and all the time goes into division of >two ints which is already implemented in C. > > > Well, quite true ;P >Have you looked at the existing wrappers around OpenSSL, such as >pyopenssl and m2crypto? ISTM that these provide most of the needed >algorithms, already coded in open source C. > > > Well, you already convinced me to not do that right now, but I still think python would benefit that, and it would be done later on, but this discussion may be moved in time. >>I understand that most of these improvements have quite limited >>audience, but I still think python should be friendly to them ;) >> >> > >Sure. 
The question is, if there are more student applications than >Google wants to fund, which projects will be selected? I would >personally vote for the projects that potentially help out the largest >number of Python users. > > > So I think the most valuable of my ideas would be improving long int + coding decimal in C. Anyway, I think it would be possible to add other ideas later. From pje at telecommunity.com Fri Apr 21 19:01:09 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 13:01:09 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060421145121.GA10419@localhost.localdomain> References: <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> Message-ID: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> At 10:51 AM 4/21/2006 -0400, A.M. Kuchling wrote: >On Fri, Apr 21, 2006 at 07:31:35PM +1000, Nick Coghlan wrote: > > fit the new definition. So we settled on calling them "context managers" > > instead. > ... > > method. Instead, the new term "manageable context" (or simply "context") > > was introduced to mean "anything with a __context__ method". This was OK, > >Meaning that 'manageable context' objects create and destroy 'context >managers'... My view is still that 'context manager' is a terrible >name when used alongside objects called 'contexts': the object doesn't >manage anything, and it certainly doesn't manage contexts -- in fact >it's created by 'context' objects. And that's more or less why I wrote the documentation the way I did. Nick, as I understand your argument, it's that we were previously using the term "context manager" to mean "thing with __enter__ and __exit__". But that was *never* my interpretation. My understanding of "context manager" was always, "thing that you give to a with statement". So to me, when we added a __context__ method, we were creating a *new object* that hadn't existed before, and we moved some methods on to it. Thus, "context manager" still meant "thing you give to the with statement" -- and that never changed, from my POV. And that's why I see the argument that we've "reversed" the terminology as bogus: to me it's been consistent all along. We just added another object *besides* the context manager. Note too that the user of the "with" statement doesn't know that this other object exists, and in fact sometimes it doesn't actually exist, it's the same object. None of this is relevant for the with-statement user, only the context manager. So there's no reason (IMO) to monkey with the definition of "context manager" as "thing you use in a with statement". Now, I get your point about @contextmanager on a __context__ method, and I agree that that seems backwards at first. What I don't see is how to change the terminology to handle that subtlety in a way that doesn't muck up the basically simple definitions that are in place now. 
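To pin the two roles down, here is a rough sketch of the object arrangement under the draft PEP 343 protocol being discussed (class names are invented, and the with-statement machinery is spelled out by hand, which is all that is needed to show who owns which method):

    class _ManagedBlock(object):
        # the invisible behind-the-scenes thing: owns __enter__/__exit__
        # for one use of the context
        def __init__(self, owner):
            self.owner = owner
        def __enter__(self):
            return self.owner
        def __exit__(self, exc_type, exc_val, exc_tb):
            return False            # don't swallow exceptions

    class SomeContext(object):
        # the thing you would actually hand to a 'with' statement; under the
        # draft protocol it only needs __context__
        def __context__(self):
            return _ManagedBlock(self)

    # what the draft 'with' statement boils down to, driven by hand:
    ctx = SomeContext().__context__()
    obj = ctx.__enter__()
    try:
        pass                        # the body of the with block
    finally:
        ctx.__exit__(None, None, None)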
If it must be explained, however, I'd rather simply document it in contextlib that @contextmanager-decorated functions return an object that is both a context manager and a context (or whatever name you want for the invisible-behind-the-scenes-thing with enter and exit methods). Since it is possible for an object to be both, that seems to do fine for explaining why you can use @contextmanager to define a __context__ method. I'm definitely open to other terminology for the invisible thing besides "context", but I don't care for "managed context" or "manageable context", as these aren't much better. I'm somewhat tempted by "context instance" or "context invocation". E.g, the __context__ method should return a "context instance": an object representing a single instance of use of the context. There's a wee hint of suggestion that this means "instance of type context", but it's more suggestive of one-time use than "context object". From martin at v.loewis.de Fri Apr 21 18:59:03 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 18:59:03 +0200 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <e2a15h$325$1@sea.gmane.org> References: <20060419051254.9b1318b4.dh@triple-media.com> <44486885.3000404@v.loewis.de> <e2a15h$325$1@sea.gmane.org> Message-ID: <44490F57.1090202@v.loewis.de> Thomas Heller wrote: > Personally, I *like* the ctypes name. But I'm open for suggestions, > and it might have intersting consequences if the Python core package > would be renamed to something else. > > Any suggestions? Well, my only concern was py_object. I wondered whether the py_ prefix is really necessary, or, if it is meant to match PyObject* whether it should be called PyObject or PyObject_p. However, if every ctypes user knows what py_object is, there is probably little point in renaming it. Regards, Martin From theller at python.net Fri Apr 21 19:01:17 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 19:01:17 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <4438C97F.50905@v.loewis.de> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> Message-ID: <e2b34t$qkt$1@sea.gmane.org> Martin v. L?wis wrote: > Thomas Heller wrote: >>>> return Py_BuildValue("HHHHs", >>>> ver.dwMajorVersion, >>>> ver.dwMinorVersion, >>>> ver.dwBuildNumber, >>>> ver.dwPlatformId, >>>> ver.szCSDVersion); >>>> >>>> The crash disappears if I change the first parameter in the >>>> Py_BuildValue call to "LLLLs". No idea why. >>>> With this change, I can start the exe without a crash, but >>>> sys.versioninfo starts with (IIRC) (2, 0, 5,...). >>> Very strange. What is your compiler version (first line of cl /?)? > > I have looked into this. In the latest SDK (2003 SP1), Microsoft has > changed the include structure; there are no separate amd64 > subdirectories anymore. Then, cl.exe was picking up the wrong > stdarg.h (the one of VS 2003), which would not work for AMD64. > > I have corrected that in vsextcomp, but I will need to check a few > more things before releasing it. I've upgraded to vsextcomp0.8. On XP (32-bit), I can compile python25.dll and python.exe for AMD64 now, after adding bufferoverflowU.lib to the linker options. 
And the exe even works on XP-64 ;-) When trying to build on XP64, I get errors like these (if it helps, I can upload the complete buildlogs somewhere): ------ Rebuild All started: Project: make_buildinfo, Configuration: Release Win32 ------ Deleting intermediate files and output files for project 'make_buildinfo', configuration 'Release|Win32'. Compiling... ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ Error : Could not create new temporary options response file------ CL.EXE Wrapper for VSExtCompiler Plugin ------ ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ (lots of these lines removed) ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ Linking... ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ Error : Could not create new temporary options response file------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ (lots of these lines removed) ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ Build log was saved at "file://c:\devel\trunk\PCbuild\x86-temp-release\make_buildinfo\BuildLog.htm" make_buildinfo - 0 error(s), 0 warning(s) ------ Rebuild All started: Project: make_versioninfo, Configuration: Release Win32 ------ Deleting intermediate files and output files for project 'make_versioninfo', configuration 'Release|Win32'. Compiling... ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ Error : Could not create new temporary options response file------ CL.EXE Wrapper for VSExtCompiler Plugin ------ ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ (lots of these lines removed) ------ CL.EXE Wrapper for VSExtCompiler Plugin ------ Linking... ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ Error : Could not create new temporary options response file------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ (lots of these lines removed) ------ LINK.EXE Wrapper for VSExtCompiler Plugin ------ Performing Custom Build Step '.\make_versioninfo.exe' is not recognized as an internal or external command, operable program or batch file. Project : error PRJ0019: A tool returned an error code from "Performing Custom Build Step" Build log was saved at "file://c:\devel\trunk\PCbuild\x86-temp-release\make_versioninfo\BuildLog.htm" make_versioninfo - 1 error(s), 0 warning(s) From martin at v.loewis.de Fri Apr 21 19:12:57 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 19:12:57 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> Message-ID: <44491299.9040607@v.loewis.de> Guido van Rossum wrote: > Microsoft just announced that Visual Studio 2005 express will be free > forever, including the IDE and the optimizing C++ compiler. (Not > included in the "forever" clause are VS 2007 or later versions.) > > Does this make a difference for Python development for Windows? For future versions, perhaps. For 2.5, I think we now have settled on VS 2003, for several reasons: - I personally consider VS 2005 still verdant (crude? immature? unfledged?). They can't really mean the whole breakage they have done to the C library. 
Also, I expect another release of VS after Vista, to cover all the new .NET API, and I hope that we can skip VS 2005 (although Vista gets delays, and so gets VS 2007) - Fredrik Lundh points out that it would be nice if people producing extensions for multiple Python releases wouldn't need a separate compiler for each release. - Paul Moore has contributed a Python build procedure for the free version of the 2003 compiler. This one is without IDE, but still, it should allow people without a VS 2003 license to work on Python itself; it should also be possible to develop extensions with that compiler (although I haven't verified that distutils would pick that up correctly). Regards, Martin From pje at telecommunity.com Fri Apr 21 19:19:33 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 13:19:33 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <fb6fbf560604210951m18518b5el14b3e74c4f76b9a3@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060421131220.04437110@mail.telecommunity.com> At 12:51 PM 4/21/2006 -0400, Jim Jewett wrote: >Phillip J. Eby wrote: > > > Such packages may have customized their installation process > > by extending the distutils, *without* overriding get_outputs(). Since few > > people actually use the --record option for anything important, nobody > > notices when it breaks. > >I just searched through the (2.4.2) documentation, and could find no >reference to either get_outputs or --record. I also looked through >the source, and couldn't find any reference to --record. (I later >found one in bdist_rpm, but "build a binary for Red Hat Linux" isn't a >natural thing for anyone not on Linux to even try.) > >When I did a grep (which I wouldn't do unless I already knew I needed >to worry about it), I finally found a few references in install.py >which boiled down to "nothing happens, but you could get a list of >files if you went through enough contortions". This looks more like a >debugging aid than something I would have to worry about. As I said, this is probably why most actual system packaging tools use --root to record the results instead. Of course, it's still possible for people to do extensions that break --root, so what it basically boils down to is that customizing the distutils is generally a bad idea -- except that for lots of things there seems to be no other choice. > > As for --install-data, just put your data in the packages and use Python > > 2.4's ability to install package data, or one of the pre-existing distutils > > extensions that beat install_data over the head to make it install the data > > with the packages. > >Use of a third-party extension (that isn't mentioned in the docs) is >hardly intuitive. Python 2.4 has the package_data option, and it is documented. This feature was first implemented in setuptools, and then Fred Drake backported it to Python and wrote documentation for it. > If it is really required, that is far more >intrusive than the stdlib changes setuptools makes. (Would it only >intrude on the packager, at least? But then how could he or she be >confident that the package would work on someone else's box?) I'm not sure I follow you. If you want to install data inside your package directories, and you don't have at least Python 2.4, you'll have to extend the distutils. You can do it yourself (possibly borrowing code from another package that looks like it does what you want) or you can use setuptools; those are pretty much your only options. 
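For anyone who hasn't used it, the Python 2.4 package_data route mentioned above looks roughly like this (project and file names are invented):

    from distutils.core import setup

    setup(
        name="examplepkg",
        version="0.1",
        packages=["examplepkg"],
        # ship the data files inside the package itself, next to the .py
        # files, instead of fighting with install_data
        package_data={"examplepkg": ["templates/*.html", "default.cfg"]},
    )

At runtime the files then sit next to the modules, so something like os.path.join(os.path.dirname(__file__), "default.cfg") will find them (or a pkg_resources call, if the package may end up zipped inside an egg).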
From rasky at develer.com Fri Apr 21 19:23:23 2006 From: rasky at develer.com (Giovanni Bajo) Date: Fri, 21 Apr 2006 19:23:23 +0200 Subject: [Python-Dev] Visual studio 2005 express now free References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> Message-ID: <018501c66568$4afb38a0$bf03030a@trilan> Martin v. L?wis wrote: > - Paul Moore has contributed a Python build procedure for the > free version of the 2003 compiler. This one is without IDE, > but still, it should allow people without a VS 2003 license > to work on Python itself; it should also be possible to develop > extensions with that compiler (although I haven't verified > that distutils would pick that up correctly). It's been possible to compile distutils extensions with the VS 2003 toolkit for far longer than it's possible to compile Python itself: http://www.vrplumber.com/programming/mstoolkit/ In fact, it would be great if the patches provided here were reviewed and integrated into the official Python distutils. -- Giovanni Bajo From martin at v.loewis.de Fri Apr 21 19:25:14 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 19:25:14 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <e2b34t$qkt$1@sea.gmane.org> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> Message-ID: <4449157A.4060308@v.loewis.de> Thomas Heller wrote: > On XP (32-bit), I can compile python25.dll and python.exe for AMD64 now, > after adding bufferoverflowU.lib to the linker options. On what project? There should be /GS- options on all projects that need it, which, in turn, should result in bufferoverflowU.lib not being needed. Or I forgot to check that change in... Will do Monday. > Error : Could not create new temporary options response file I've never seen these. I will have to study the source again, and find out how that could happen. Do you have spaces in the directory leading to the working copy? Regards, Martin From martin at v.loewis.de Fri Apr 21 19:26:41 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 19:26:41 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <018501c66568$4afb38a0$bf03030a@trilan> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <018501c66568$4afb38a0$bf03030a@trilan> Message-ID: <444915D1.40504@v.loewis.de> Giovanni Bajo wrote: > It's been possible to compile distutils extensions with the VS 2003 toolkit > for far longer than it's possible to compile Python itself: > http://www.vrplumber.com/programming/mstoolkit/ Sure: with distutils modifications. > In fact, it would be great if the patches provided here were reviewed and > integrated into the official Python distutils. Something like this should already be in Python 2.5. I just haven't tested it in this configuration. Regards, Martin From tim.peters at gmail.com Fri Apr 21 19:28:15 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 21 Apr 2006 13:28:15 -0400 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. 
In-Reply-To: <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> Message-ID: <1f7befae0604211028k7406cd92q7b7c95ee83ca39d7@mail.gmail.com> [Mateusz Rukowicz] >> I wish to participate in Google Summer of Code as a python developer. I >> have few ideas, what would be improved and added to python. Since these >> changes and add-ons would be codded in C, and added to python-core >> and/or as modules,I am not sure, if you are willing to agree with these >> ideas. >> >> First of all, I think, it would be good idea to speed up long int >> implementation in python. Especially multiplying and converting >> radix-2^k to radix-10^l. It might be done, using much faster algorithms >> than already used, and transparently improve efficiency of multiplying >> and printing/reading big integers. [Guido van Rossum] > That would be a good project, if someone is available to mentor it. > (Tim Peters?) Generally speaking, I think longobject.c is already at the limit for what's _reasonable_ for the core to support in C. In fact, I regret that we added Karatsuba multiplication. That grossly complicated CPython's bigint multiplication code, yet anyone slinging bigints seriously would still be far better off using a Python wrapper for a library (like GMP) seriously devoted to gonzo bigint algorithms. CPython is missing mountains of possible bigint optimizations and algorithms, and that's fine by me -- the core can't afford to keep up with (or even approach) the state of the art for such stuff, but there are already Python-callable libraries that do keep up. Speeding the decimal module is a more attractive project. Note that's an explicit possibility for next month's "Need for Speed" sprint: http://mail.python.org/pipermail/python-dev/2006-April/063428.html Mateusz, if that interests you, it would be good to contact the sprint organizers. >> Next thing I would add is multi precision floating point type to the >> core and fraction type, which in some cases highly improves operations, >> which would have to be done using floating point instead. You can have this today in Python by installing the GMP library and its Python wrapper. >> Of course, math module will need update to support multi precision >> floating points, and with that, one could compute asin or any other >> function provided with math with precision limited by memory and time. >> It would be also good idea to add function which computes pi and exp >> with unlimited precision. I don't think the CPython core can afford to support such sophisticated and limited-audience algorithms. OTOH, the decimal type has potentially unbounded precision already, and the math-module functions have no idea what to do about that. Perhaps some non-gonzo (straightforward even if far from optimal, and coded in Python) arbitrary-precision algorithms would make a decent addition. For example, the Emacs `calc` package uses non-heroic algorithms that are fine for casual use. > I would suggest doing this as a separate module first, rather than as > a patch to the Python core. > > Can you show us some practical applications of these operations? He could, yes, but they wouldn't meet any web developer's definition of "practical". The audience is specialized (he says as if web developers' peculiar desires were universal ;-)). 
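As a concrete example of the sort of "straightforward even if far from optimal, coded in Python" arbitrary-precision function being talked about here, much like the Taylor-series exp() recipe in the decimal documentation:

    from decimal import Decimal, getcontext

    def dexp(x):
        """exp(x) for Decimal, to whatever precision the current context uses."""
        getcontext().prec += 2          # work with a couple of guard digits
        i, lasts, s, fact, num = 0, 0, 1, 1, 1
        while s != lasts:               # stop once extra terms stop mattering
            lasts = s
            i += 1
            fact *= i
            num *= x
            s += num / fact
        getcontext().prec -= 2
        return +s                       # unary plus rounds back to the context

    getcontext().prec = 50
    print dexp(Decimal(1))              # e to 50 significant digits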
>> And last thing - It would be nice to add some number-theory functions to >> math module (or new one), like prime-tests, GMP again, although the developers note: http://www.swox.com/gmp/projects.html ... GMP is not really a number theory library and probably shouldn't have large amounts of code dedicated to sophisticated prime testing algorithms, ... If they think doing much more here is out of bounds for them, trying to sneak it into longobject.c is crazy :-) Note that tests like Miller-Rabin are very easy to code in Python. > factorizations etc. Very large topic all by itself. On Windows, I use factor.exe: http://indigo.ie/~mscott/ and call it from Python via os.popen(). That doesn't implement the Number Field Sieve, but does use most earlier high-powered methods (including elliptic curve and MPQS). > Probably better a new module. But how many people do you think need these? The people who install GMP <0.5 wink>. >> Every of above improvements would be coded in pure C, and without using >> external libraries, so portability of python won't be cracked, and no >> new dependencies will be added. > Great -- that's important! Since when does coding a module in pure C do anything for Jython or IronPython or PyPy besides make their lives more difficult? It would be better to code in Python, if the functionality is all that's desired; but it isn't, and getting gonzo speed too is the proper concern of libraries and development teams devoted to gonzo-speed math libraries. >> I am waiting for your responses about my ideas, which of these you think >> are good, which poor etc. Main reason I am posting my question here is >> that, I am not sure, how much are you willing to change the core of the >> python. All the things you mentioned are Good Things, solving some real problems for some real people. What I question is the wisdom of trying to implement them in the CPython core, especially when most of them are already available in CPython via downloading a Python-wrapped library. >> At the end, let my apologize for my poor English, I am trying to do my best. > No problem. We can understand you fine! Yup! From aleaxit at gmail.com Fri Apr 21 19:37:01 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Fri, 21 Apr 2006 10:37:01 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <44490DBD.3050703@vp.pl> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <4448B00C.2040901@vp.pl> <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> <44490DBD.3050703@vp.pl> Message-ID: <e8a0972d0604211037y22957fdey8796418e707943f1@mail.gmail.com> On 4/21/06, Mateusz Rukowicz <mateusz.rukowicz at vp.pl> wrote: ... > So I think the most valuable of my ideas would be improving long int + > coding decimal in C. Anyway, I think it would be possible to add other > ideas later. I see "redo Decimal in C" (possibly with the addition of some fast elementary transcendentals) and "enhance operations on longs" (multiplication first and foremost, and base-conversions probably next, as you suggested -- possibly with the addition of some fast number-theoretical functions) as two separate projects, each of just about the right magnitude for an SoC project. I would be glad to mentor either (not both); my preference would be for the former -- it appears to me that attempting to do both together might negatively impact both. 
Remember, it isn't just the coding...: thorough testing, good docs, accurate performance measurements on a variety of platforms, ..., are all important part of a project that, assuming success, it's scheduled to become a core part of Python 2.6, after all. Alex From jimjjewett at gmail.com Fri Apr 21 19:44:15 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 21 Apr 2006 13:44:15 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <5.1.1.6.0.20060421131220.04437110@mail.telecommunity.com> References: <5.1.1.6.0.20060421131220.04437110@mail.telecommunity.com> Message-ID: <fb6fbf560604211044j5544c955hfed1b5ec00218d3f@mail.gmail.com> On 4/21/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 12:51 PM 4/21/2006 -0400, Jim Jewett wrote: > >Phillip J. Eby wrote: > > > As for --install-data > > > ... one of the pre-existing distutils extensions ... > > Use of a third-party extension (that isn't mentioned > > in the docs [2.3]) is hardly intuitive. > > If it is really required, that is far more > >intrusive than the stdlib changes setuptools makes. > [Request for clarification] If I were developing under 2.3, having to install a 3rd-party tool to make distutils work is no better than installing setuptools, even with its changes to the stdlib -- and considerably worse, since setuptools is at least a blessed extension. Even if I'm developing mostly with 2.5, I'm reluctant to require that my users install more than they need to. I don't like that setuptools is needed, and changes their system defaults -- but that is still better than doing the same thing with a random third-party extension. -jJ From guido at python.org Fri Apr 21 19:54:11 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 18:54:11 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> References: <20060418185518.0359E1E407C@bag.python.org> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <20060421145121.GA10419@localhost.localdomain> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> Message-ID: <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.com> Phillip, I do recomment you look at decimal.py. If we're not reversing the PEP changes, that module needs to be changed; it has a class Context (that was always there) with a __context__ method which returns an instance of a class ContextManager (newly created for the with-statement). This looks backwards from the PEP's current POV. --Guido On 4/21/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 10:51 AM 4/21/2006 -0400, A.M. Kuchling wrote: > >On Fri, Apr 21, 2006 at 07:31:35PM +1000, Nick Coghlan wrote: > > > fit the new definition. So we settled on calling them "context managers" > > > instead. > > ... > > > method. Instead, the new term "manageable context" (or simply "context") > > > was introduced to mean "anything with a __context__ method". This was OK, > > > >Meaning that 'manageable context' objects create and destroy 'context > >managers'... My view is still that 'context manager' is a terrible > >name when used alongside objects called 'contexts': the object doesn't > >manage anything, and it certainly doesn't manage contexts -- in fact > >it's created by 'context' objects. 
> > And that's more or less why I wrote the documentation the way I did. > > Nick, as I understand your argument, it's that we were previously using the > term "context manager" to mean "thing with __enter__ and __exit__". But > that was *never* my interpretation. > > My understanding of "context manager" was always, "thing that you give to a > with statement". > > So to me, when we added a __context__ method, we were creating a *new > object* that hadn't existed before, and we moved some methods on to > it. Thus, "context manager" still meant "thing you give to the with > statement" -- and that never changed, from my POV. > > And that's why I see the argument that we've "reversed" the terminology as > bogus: to me it's been consistent all along. We just added another object > *besides* the context manager. > > Note too that the user of the "with" statement doesn't know that this other > object exists, and in fact sometimes it doesn't actually exist, it's the > same object. None of this is relevant for the with-statement user, only > the context manager. So there's no reason (IMO) to monkey with the > definition of "context manager" as "thing you use in a with statement". > > Now, I get your point about @contextmanager on a __context__ method, and I > agree that that seems backwards at first. What I don't see is how to > change the terminology to handle that subtlety in a way that doesn't muck > up the basically simple definitions that are in place now. > > If it must be explained, however, I'd rather simply document it in > contextlib that @contextmanager-decorated functions return an object that > is both a context manager and a context (or whatever name you want for the > invisible-behind-the-scenes-thing with enter and exit methods). Since it > is possible for an object to be both, that seems to do fine for explaining > why you can use @contextmanager to define a __context__ method. > > I'm definitely open to other terminology for the invisible thing besides > "context", but I don't care for "managed context" or "manageable context", > as these aren't much better. I'm somewhat tempted by "context instance" or > "context invocation". E.g, the __context__ method should return a "context > instance": an object representing a single instance of use of the > context. There's a wee hint of suggestion that this means "instance of > type context", but it's more suggestive of one-time use than "context object". > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From theller at python.net Fri Apr 21 19:58:02 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 19:58:02 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <4449157A.4060308@v.loewis.de> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> Message-ID: <e2b6fb$681$1@sea.gmane.org> Martin v. L?wis wrote: > Thomas Heller wrote: >> On XP (32-bit), I can compile python25.dll and python.exe for AMD64 now, >> after adding bufferoverflowU.lib to the linker options. > > On what project? 
There should be /GS- options on all projects that need > it, which, in turn, should result in bufferoverflowU.lib not being needed. > > Or I forgot to check that change in... Will do Monday. Ok, /GS- helps. No need to hurry - I would commit that myself but I only have readonly sandboxes on these installations, no putty, and so on. >> Error : Could not create new temporary options response file > > I've never seen these. I will have to study the source again, and > find out how that could happen. > > Do you have spaces in the directory leading to the working copy? No. But your guess is somewhat correct: If I move the sandbox to a path *with* spaces in it, this problem goes away ;-). On the 64-bit box. Now I have to learn how to debug on win64. Thanks, Thomas From theller at python.net Fri Apr 21 19:59:57 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 19:59:57 +0200 Subject: [Python-Dev] python 2.5alpha and naming schemes In-Reply-To: <44490F57.1090202@v.loewis.de> References: <20060419051254.9b1318b4.dh@triple-media.com> <44486885.3000404@v.loewis.de> <e2a15h$325$1@sea.gmane.org> <44490F57.1090202@v.loewis.de> Message-ID: <e2b6it$5q7$1@sea.gmane.org> Martin v. L?wis wrote: > Thomas Heller wrote: >> Personally, I *like* the ctypes name. But I'm open for suggestions, >> and it might have intersting consequences if the Python core package >> would be renamed to something else. >> >> Any suggestions? > > Well, my only concern was py_object. I wondered whether the py_ > prefix is really necessary, or, if it is meant to match PyObject* > whether it should be called PyObject or PyObject_p. > > However, if every ctypes user knows what py_object is, there is > probably little point in renaming it. I have learned that it is too late to rename things in ctypes. If it were still possible, there would be a lot lot more to clean up, but it has grown over the last three years or so. Thomas From guido at python.org Fri Apr 21 20:07:17 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 19:07:17 +0100 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <1f7befae0604211028k7406cd92q7b7c95ee83ca39d7@mail.gmail.com> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <1f7befae0604211028k7406cd92q7b7c95ee83ca39d7@mail.gmail.com> Message-ID: <ca471dc20604211107g603fc343t53d2db6076c93dca@mail.gmail.com> On 4/21/06, Tim Peters <tim.peters at gmail.com> wrote: > OTOH, the decimal type > has potentially unbounded precision already, and the math-module > functions have no idea what to do about that. Perhaps some non-gonzo > (straightforward even if far from optimal, and coded in Python) > arbitrary-precision algorithms would make a decent addition. For > example, the Emacs `calc` package uses non-heroic algorithms that are > fine for casual use. (Slightly off-topic.) I wonder if the math module would actually be a good proving ground for generic/overloaded functions. It seems a clean fit (and even has a few applications for multiple dispatch, probably). 
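A toy sketch of the idea, using a hand-rolled type-dispatch table rather than any particular overloading library (the registrations below are purely illustrative):

    import math, cmath
    from decimal import Decimal

    _sqrt_impls = {}                    # maps argument type -> implementation

    def register(typ, impl):
        _sqrt_impls[typ] = impl

    def sqrt(x):
        for typ in type(x).__mro__:     # let subclasses inherit a registration
            if typ in _sqrt_impls:
                return _sqrt_impls[typ](x)
        raise TypeError("sqrt: no implementation for %r" % type(x))

    register(float, math.sqrt)
    register(int, math.sqrt)
    register(complex, cmath.sqrt)
    register(Decimal, lambda d: d.sqrt())

    print sqrt(2.0), sqrt(-1+0j), sqrt(Decimal("2"))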
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From p.f.moore at gmail.com Fri Apr 21 20:07:21 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 21 Apr 2006 19:07:21 +0100 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <44491299.9040607@v.loewis.de> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> Message-ID: <79990c6b0604211107i51dd86d0j365cbba8fc2726fe@mail.gmail.com> On 4/21/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > - Paul Moore has contributed a Python build procedure for the > free version of the 2003 compiler. This one is without IDE, > but still, it should allow people without a VS 2003 license > to work on Python itself; it should also be possible to develop > extensions with that compiler (although I haven't verified > that distutils would pick that up correctly). It works fine. You need to set PATH, INCLUDE, and LIB appropriately, and set MSSdk (to anything, but the SDK install location is what it's meant to be) so that distutils assumes the environment is right and doesn't check the registry. I'll see if I can find some time to write a doc patch - it can go alongside the "building using mingw" section. Paul. From theller at python.net Fri Apr 21 20:06:45 2006 From: theller at python.net (Thomas Heller) Date: Fri, 21 Apr 2006 20:06:45 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <e2b6fb$681$1@sea.gmane.org> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> <e2b6fb$681$1@sea.gmane.org> Message-ID: <e2b6vl$86a$1@sea.gmane.org> Thomas Heller wrote: > Martin v. L?wis wrote: >> Thomas Heller wrote: >>> On XP (32-bit), I can compile python25.dll and python.exe for AMD64 now, >>> after adding bufferoverflowU.lib to the linker options. >> On what project? There should be /GS- options on all projects that need >> it, which, in turn, should result in bufferoverflowU.lib not being needed. >> >> Or I forgot to check that change in... Will do Monday. > > Ok, /GS- helps. No need to hurry - I would commit that myself but I only > have readonly sandboxes on these installations, no putty, and so on. > >>> Error : Could not create new temporary options response file >> I've never seen these. I will have to study the source again, and >> find out how that could happen. >> >> Do you have spaces in the directory leading to the working copy? > > No. > > But your guess is somewhat correct: If I move the sandbox to a path *with* spaces > in it, this problem goes away ;-). On the 64-bit box. I forgot to mention that there are a lot of warnings about conversion betweem Py_ssize_t to int - if you want me to fix the obvious ones I'll offer to correct some of them from time to time and commit the changes. I wonder why gcc doesn't warn about those. Thomas From amk at amk.ca Fri Apr 21 21:45:51 2006 From: amk at amk.ca (A.M. Kuchling) Date: Fri, 21 Apr 2006 15:45:51 -0400 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.com> References: <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <20060421145121.GA10419@localhost.localdomain> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.com> Message-ID: <20060421194551.GA3333@rogue.amk.ca> On Fri, Apr 21, 2006 at 06:54:11PM +0100, Guido van Rossum wrote: > Phillip, I do recomment you look at decimal.py. If we're not reversing > the PEP changes, that module needs to be changed; it has a class > Context (that was always there) with a __context__ method which > returns an instance of a class ContextManager (newly created for the > with-statement). This looks backwards from the PEP's current POV. Does this detail matter to users of the Decimal module, though? Those users may well be thinking using the term 'context'. That the underlying 'with' details use the term differently doesn't matter to module users, only to the implementors of decimal.Context. --amk From guido at python.org Fri Apr 21 20:47:05 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 21 Apr 2006 19:47:05 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060421194551.GA3333@rogue.amk.ca> References: <44462655.60603@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <20060421145121.GA10419@localhost.localdomain> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.com> <20060421194551.GA3333@rogue.amk.ca> Message-ID: <ca471dc20604211147g229280abgc7d0f73f068d3bb8@mail.gmail.com> On 4/21/06, A.M. Kuchling <amk at amk.ca> wrote: > On Fri, Apr 21, 2006 at 06:54:11PM +0100, Guido van Rossum wrote: > > Phillip, I do recomment you look at decimal.py. If we're not reversing > > the PEP changes, that module needs to be changed; it has a class > > Context (that was always there) with a __context__ method which > > returns an instance of a class ContextManager (newly created for the > > with-statement). This looks backwards from the PEP's current POV. > > Does this detail matter to users of the Decimal module, though? Those > users may well be thinking using the term 'context'. That the > underlying 'with' details use the term differently doesn't matter to > module users, only to the implementors of decimal.Context. Half and half. I'm not proposing to rename Context (which is already well known). But ContextManager is brand new and uses the confused terminology, so perhaps ought to be renamed to something else (and it should probably get a leading underscore too, lest people start instantiating it). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Fri Apr 21 21:12:27 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 15:12:27 -0400 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.co m> References: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <20060418185518.0359E1E407C@bag.python.org> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <20060421145121.GA10419@localhost.localdomain> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060421150940.01e0add0@mail.telecommunity.com> At 06:54 PM 4/21/2006 +0100, Guido van Rossum wrote: >Phillip, I do recomment you look at decimal.py. If we're not reversing >the PEP changes, that module needs to be changed; it has a class >Context (that was always there) with a __context__ method which >returns an instance of a class ContextManager (newly created for the >with-statement). This looks backwards from the PEP's current POV. I don't mind doing the work to change it, as long as you first pronounce on what the terminology *should* be. :) If it turns out my doc was wrong, I'll fix that. If not, I'll fix decimal.ContextManager. Or if both are wrong, I'll fix that too. Just tell me which it is. :) From jcarlson at uci.edu Fri Apr 21 21:22:50 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Fri, 21 Apr 2006 12:22:50 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <1f7befae0604211028k7406cd92q7b7c95ee83ca39d7@mail.gmail.com> References: <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <1f7befae0604211028k7406cd92q7b7c95ee83ca39d7@mail.gmail.com> Message-ID: <20060421122053.F1E4.JCARLSON@uci.edu> "Tim Peters" <tim.peters at gmail.com> wrote: > [Mateusz Rukowicz] > >> And last thing - It would be nice to add some number-theory functions to > >> math module (or new one), like prime-tests, > > If they think doing much more here is out of bounds for them, trying > to sneak it into longobject.c is crazy :-) Note that tests like > Miller-Rabin are very easy to code in Python. I've seen at least two examples of such in the ASPN Python cookbook. - Josiah From facundobatista at gmail.com Fri Apr 21 21:36:09 2006 From: facundobatista at gmail.com (Facundo Batista) Date: Fri, 21 Apr 2006 16:36:09 -0300 Subject: [Python-Dev] Summer of Code preparation In-Reply-To: <ee2a432c0604170043y6d62d541w5a264b515e0e5411@mail.gmail.com> References: <ee2a432c0604170043y6d62d541w5a264b515e0e5411@mail.gmail.com> Message-ID: <e04bdf310604211236q2c01c979hdee2ca9a65daab46@mail.gmail.com> 2006/4/17, Neal Norwitz <nnorwitz at gmail.com>: > I can help manage the process from inside Google, but I need help > gathering mentors and ideas. I'm not certain of the process, but if > you are interested in being a mentor, send me an email. I will try to I already applied to Google to be a Mentor. Hope I'll help with spanish-talking students! Regards, . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From pje at telecommunity.com Fri Apr 21 21:57:42 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Fri, 21 Apr 2006 15:57:42 -0400 Subject: [Python-Dev] setuptools: past, present, future Message-ID: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> I've noticed that there seems to be a lot of confusion out there about what setuptools is and/or does, at least among Python-Dev folks, so I thought it might be a good idea to give an overview of its structure, so that people have a better idea of what is and isn't "magic". Setuptools began as a fairly routine collection of distutils extensions, to do the same boring things that everybody needs distutils extensions to do. Basic stuff like installing data with your packages, running unit tests, that sort of thing. At some point, I was getting tired of having to deal with dependencies by making people install them manually, or else having to bundle them. I wanted a more automated way to deal with this problem, and in 2004 brought the problem to the distutils-sig and planned to do a PyCon sprint to try to address the problem. Tim Peters encouraged me to move the preliminary work I'd done to the Python sandbox, where others could follow the work and improve upon it, and he sponsored me for CVS privileges so I could do so. As it turned out, I wasn't able to go to PyCon, but I produced some crude stuff to try to implement dependency handling, based on some previous work by Bob Ippolito. Bob's stuff used imports to check version strings, and mine was a bit more sophisticated in that it could scan .py or .pyc files without actually importing them. But there was no reasonable way to track download URLs, though, or deal with the myriad package formats (source, RPM, etc.) platform-specificness, etc. and PyPI didn't really exist yet. To top it all off, within a couple of months I was laid off, so the problem ceased to be of immediate practical interest for me any more. I decided to take a six-month sabbatical and work on RuleDispatch, after which I began contracting for OSAF. OSAF's Chandler application has a plugin platform akin to Eclipse, and I saw that it was going to need a cross-platform plugin format. I put out the call to distutils-sig, and Bob Ippolito took up the challenge. We designed the first egg format, and we agreed that it should support Python libraries, not just plugins, and that it should be possible to treat .egg zipfiles and directories interchangeably, and that it should be possible to put more than one conceptual egg into one physical zipfile. The true "egg" was the project release, not the zipfile itself. (We called a zipfile containing multiple eggs a "basket", which we thought would be useful for things like py2exe. pkg_resources still supports baskets today, but there are no tools for generating them - you have to just zip up a bunch of .egg directories to make one.) Bob wrote the prototype pkg_resources module to support accessing resources in zipfiles and regular directories, while I worked on creating a bdist_egg command, which I added to the then-dormant setuptools package, figuring that the experimental dependency stuff could be later refactored to allow dependencies to be resolved using eggs. We had a general notion that there would be some kind of web pages you could use to list packages on, since at that time PyPI didn't allow uploads yet. Or at any rate, we didn't know about it until PyCon in 2005. After PyCon, I kept hearing about projects to make a CPAN-like tool for Python, such as the Uragas project. 
However, all of these projects sounded like they were going to reinvent everything from scratch, particularly a lot of stuff that Bob and I had just done. It then occurred to me for the first time that the .egg format could be used to solve the problems both of having a local package database, and also the uninstallation and upgrade of packages. In fact, the only piece missing was that there was no way to find and download the packages to be installed, and if I could solve that problem, the CPAN problem would be solved. So, I did some research by taking a random sample of packages from PyPI, to find out what information people were actually registering. I found that, more often than not, at least one of their PyPI URLs would point to a page that had links to packages that could be downloaded directly. And that was basically enough to permit writing a very simple spider that would only follow "download" or "homepage" links from PyPI pages, and would also inspect URLs to see if they were recognizable as distutils-generated filenames, from which it could extract package name and version info. Thus, easy_install was born, completing what some people now call the eggs/setuptools/easy_install trifecta. If you are going to work on or support these tools, it's important that you understand that these three things are related, but distinct. Setuptools is at heart just an ordinary collection of distutils enhancements, that just happens to include a bdist_egg command. EasyInstall is another enhanced command built on setuptools, that leverages setuptools to build eggs for packages that don't have them. But setuptools in its turn depends on EasyInstall, so that packages can have dependencies. So the components are: pkg_resources: standalone module for working with project releases, dependency specification and resolution, and bundled resources setuptools: a package of distutils extensions, including ones to build eggs with easy_install: a distutils extension built using setuptools, that finds, downloads, builds eggs for, and installs packages that use either distutils or setuptools And if you look at that list, it's pretty easy to see which part is the most magical, implicit, heuristic, etc. It's easy_install, no question. If it weren't for the fact that easy_install tries to support non-setuptools packages, there would be little need for monkeypatching or sandboxing. If it weren't for the fact that easy_install tries to interpret web pages, there would be no need for heuristics or guessing. So, in a perfect world where everybody neatly files everything with PyPI, easy_install would not have anything implicit about it. But this isn't a perfect world, and to gain adoption, it had to have backward compatibility. If easy_install could handle *enough* existing packages, then it would encourage package authors to use it so that they could depend on those existing packages. These authors would end up using setuptools, which would then tend to ensure that *their* package would be easy_install-able as well. And, since the user needs setuptools to install these new packages, then the user now has setuptools, and the option to try using it to install other packages. Users then encourage package authors to have correct PyPI information so their packages can be easy_install-ed as well, and the network effect increases from there. So, I bundled all three things (pkg_resources, setuptools, and easy_install) into a single distribution bundle precisely so it would have this "viral" network effect. 
I knew that if everybody had to be made to get their PyPI entries straight *first*, it would never work. But if I could leverage an ever-growing user population to put pressure on authors and system packagers, and an ever-growing author population to increase the number of users, then the natural course of things should be that packages that don't play will die off, be forked, etc., and those who do play will be rewarded with more users. I made an explicit, conscious, and cold-blooded decision to do things that way, knowing full well that it would immediately kill off all the competing "CPAN for Python" projects, and that it would also force lots of people to deal with setuptools who didn't care about it one way or another. The community as a whole would benefit immensely, even if the costs would be borne by people who didn't agree with what I was doing. So, yes, I'm a cold calculating bastard. EasyInstall is #1 in the field because it was designed to make its competition irrelevant and to virally spread itself across the entire Python ecosphere. I'm pointing these things out now because I think it's better not to mince words; easy_install was designed with Total World Domination in mind from day one and that is exactly what it's here to do. Compatibility at any cost is its watchword, because that is what fuels its adoption. End-users are its market, because what the end users want ultimately controls what the developers and the packagers do. Thus, if you look at the history of setuptools, you'll see that the vast majority of work I do on it is increasing the Just-Works-iness of easy_install. The majority of changes to non-easy_install code (and both setuptools.package_index and setuptools.sandbox are there only for easy_install) are architectural or format changes intended to support greater justworksiness for easy_install. (There are also lots of changes included to enhance setuptools' usefulness as a distutils extension, but these are driven mainly by user requests and Chandler needs, and there aren't nearly as many such changes.) So, if you take easy_install and its support modules entirely out of setuptools, you would be left with a modest assortment of distutils extensions, most of which don't have any backward compatibility issues. They could be merged into the distutils with nary a complaint. The only significant change is the "sdist" command, which in setuptools supports a cleaner (and extensible) way of managing the source distribution manifest, that frees developers from messing with the MANIFEST file and remembering to constantly add junk to MANIFEST.in. And there's probably some way we could decide to either keep the old behavior or make the old behavior an option for anybody who's relying on the way it worked before. And that's all well and good, but now you don't have the features that are the real reason end users want the whole thing: easy_install. And it's not just the users. Package authors want it too. TurboGears really couldn't exist without this. It's easy to argue that oh, they could've made distribution packages for six formats and nine platforms, or they could've made tarballs, etc. to bundle all the dependencies in, but those approaches really just don't scale -- especially for the single package author just starting to build something new. None of these options are economically viable for the author of a new package, especially if their core competency isn't packaging and distribution. 
Now that there's a Turbogears community, yes, there are probably people available who can do a lot of those distribution-related tasks. But there wouldn't have *been* a community if Kevin couldn't have shipped the software by himself! This is the *real* problem that I always meant to address, from the very beginning: Python development and distribution *costs too much* for the community to flourish as it should. It's too hard for non-experts, and until now it required bundling, system packaging, or asking users to install their own dependencies. But asking users to install dependencies doesn't scale for large numbers of dependencies. And not being able to reuse packages leads to proliferating wheel-reinvention, because installation cost is a barrier to entry. So, the work that I've done is simply social engineering through economic leverage. The goal is to change the cost equations so that entry barriers for package distribution are low, so that users can try different packages, so they can switch, so market forces can choose winners. Because switching and installation costs are low, interoperability and reuse are more attractive choices, and more likely to be demanded by users. You can already see these forces taking effect in such developments as the joint CherryPy/TurboGears template plugin interface, which uses another setuptools innovation (entry points) to allow plug-and-play. I am doing all this because I got tired of reinventing wheels. When you add in installation costs, writing your own package looks more attractive than reusing the other guy's. But if installation is cheap, then people are more inclined to overlook the minor differences between how the other guy did it and how they would have done it, and are more likely to say to the "other guy", hey, I like this but would you add X? And it's more likely that the "other guy" will say yes, because it will *multiply* his install base to get another published package depending on his project. So, my question to all of you is, is that worth a little implicitness, a little magic? My answer, of course, is yes. It will probably be a multi-year effort to get the state of community practice up to a level where all the heuristics and webscraping can be removed from easy_install, without negatively affecting the cost equation. Or maybe not. Maybe we're just hitting the turn of the hockey stick now, and inclusion in 2.5 is just what the doctor ordered to kick the number of users so high that anybody would be crazy not to have clean PyPI listings, I don't know. To be honest, though, I think the outstanding proposal on Catalog-SIG to merge Grig's "CheeseCake" rating system into PyPI (so that package authors will be shown what they can do to improve their listing quality) will actually have more direct impact on this than 2.5 inclusion will. Guido's choice to bless setuptools is important for system packagers and developers to have confidence that this is the direction Python is taking; it doesn't have to actually go *in* 2.5 to do that. install_egg_info clearly shows the direction we're taking. So, after reading all the other stuff that's gone by in the last few days, this is what I think should happen: First, setuptools should be withdrawn from inclusion in Python 2.5. Not directly because of the opposition, but because of the simple truth that it's just not ready. Some of that is because I've spent way too much time on the discussions this week, to the point of significant sleep deprivation at one point. 
But when Guido first asked about it, I had concerns about getting everything done that really needed to be done, and effectively only agreed because I figured out a way to allow new versions to be distributed after-the-fact. With the latest Python 2.5 release schedule, I'd be hard-pressed to get 0.7 to stability before the 2.5 betas go, certainly if I'm the only one working on it. And a stable version of 0.7 is really the minimum that should go in the standard library, because the package management side of things really needs to have commands to list, uninstall, upgrade, etc., and they need to be easy to understand, not the confusing mishmash that is easy_install's current assortment of options. (Which grew organically, rather than being designed ahead of time.) And Fredrik is right to bring up concerns about both easy_install's confusing array of options, and the general support issues of asking Python-Dev to adopt setuptools. These are things that can be addressed, and *are* being addressed, but they're not going to happen by Tuesday, when the alpha release is scheduled. I hate to say this, because I really don't want to disappoint Guido or anyone on Python-Dev or elsewhere who has been calling for it to go in. I really appreciate all your support, but Fredrik is right, and I can't let my desire to please all of you get in the way of what's right. What *should* happen now instead, is a plan for merging setuptools into the distutils for 2.6. That includes making the decisions about what "install" and "sdist" should do, and whether backward compatibility of internal behaviors should be implicit or explicit. I don't want to start *that* thread right now, and we've already heard plenty of arguments on both sides. Indeed, since Martin and Marc seem to be diametrically opposed on that issue, it is guaranteed that *somebody* will be unhappy with whatever decision is made. :) Between 2.5 and 2.6, setuptools should continue to be developed in the sandbox, and keep the name 'setuptools'. For 2.6, however, we should merge the code bases and have setuptools just be an alias. Or, perhaps what is now called setuptools should be called "distutils2" and distributed as such, with "setuptools" only being a legacy name. But regardless, the plan should be to have only one codebase for 2.6, and to issue backported releases of that codebase for at least Python 2.4 and 2.5. These ideas are new for me, because I hadn't thought that anybody would have cared enough to want to get into the code and share any of the work. That being the case, it seems to make more sense for me to back off a little on the development in order to work on developer documentation, such as of the kind Fredrik has been asking for, and to work on a development roadmap so we can co-ordinate who will work on what, when, to get 0.7 to stability. In the meantime, Python 2.5 *does* have install_egg_info, and it should definitely not be pulled out. install_egg_info ensures that every package installed by the distutils is detectable by setuptools, and thus will not be reinstalled just because it wasn't installed by setuptools. And there is one other thing that should go into 2.5, and that is PKG-INFO files for each bundled package that we are including in the standard library, that is distributed separately for older Python versions and is API-compatible. So for example, if ctypes 0.9.6 is going in Python 2.5, it should have a PKG-INFO in the appropriate directory to say so.
Thus, programs written for Python 2.4 that say they depend on something like "ctypes>=0.9" will work with Python 2.5 without needing to change their setup scripts to remove the dependency when the script is run under Python 2.5. Last, but not least, we need to find an appropriate spot to add documentation for install_egg_info. These are tasks that can be accomplished for 2.5, they are reasonably noncontroversial, and they do not add any new support requirements or stability issues that I can think of. One final item that is a possibility: we could leave pkg_resources in for 2.5, and add its documentation. This would allow people to begin using its API to check for installed packages, accessing resources, etc. I'd be interested in hearing folks' opinions about that, one way or the other. From martin at v.loewis.de Fri Apr 21 23:56:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 21 Apr 2006 23:56:35 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <e2b6vl$86a$1@sea.gmane.org> References: <43FA4A1B.3030209@v.loewis.de> <e0g6qv$j70$1@sea.gmane.org> <442C1323.9000109@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> <e2b6fb$681$1@sea.gmane.org> <e2b6vl$86a$1@sea.gmane.org> Message-ID: <44495513.3080406@v.loewis.de> Thomas Heller wrote: > I forgot to mention that there are a lot of warnings about conversion > betweem Py_ssize_t to int - if you want me to fix the obvious ones > I'll offer to correct some of them from time to time and commit the > changes. Right - they have been there ever since I started (in fact, I started this entire project *because* of these warnings). You can get them on x86, too, if you enable /Wp64. Please feel free to fix any of them that you feel comfortable about. > I wonder why gcc doesn't warn about those. It just doesn't implement truncation warnings between variables of differently-sized integer types. This (typically) isn't undefined behaviour: the C standard mandates very well what should happen for these truncations. Warning about any and all truncations would lead to incredible noise. /Wp64 *only* works because they restrict themselves to int64->int32 warnings (essentially). Regards, Martin From pje at telecommunity.com Sat Apr 22 00:22:15 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 18:22:15 -0400 Subject: [Python-Dev] Reject or update PEP 243? Message-ID: <5.1.1.6.0.20060421181756.01e3bc00@mail.telecommunity.com> I just noticed that the 2.5 What's New references PEP 243 in relation to the distutils' "upload" command, but in fact the upload command implements a different mechanism entirely. About the only thing the PEP and implementation have in common is that they do a POST to python.org/pypi and have a protocol_version field of "1". Nothing else is the same. Perhaps someone who knows how the actual implementation works could rewrite the PEP, or write another one? From pje at telecommunity.com Sat Apr 22 00:32:48 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Fri, 21 Apr 2006 18:32:48 -0400 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44490BC7.2090908@v.loewis.de> References: <4448AA3D.3050107@canterbury.ac.nz> <20060418005956.156301E400A@bag.python.org> <5.1.1.6.0.20060418113808.0217a4a0@mail.telecommunity.com> <5.1.1.6.0.20060418133427.0211a268@mail.telecommunity.com> <444537BC.7030702@egenix.com> <5.1.1.6.0.20060418151010.01e06940@mail.telecommunity.com> <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <ca471dc20604200408x2f7e4a1ckaa40422239f5586b@mail.gmail.com> <4447E9EA.10106@v.loewis.de> <ca471dc20604201404i261db6eaq83cea36c0f3fecc7@mail.gmail.com> <4448AA3D.3050107@canterbury.ac.nz> Message-ID: <5.1.1.6.0.20060421182825.01e26260@mail.telecommunity.com> At 06:43 PM 4/21/2006 +0200, Martin v. L?wis wrote: >They might need to be available outside "Python". Three use cases: > >1. The application embeds an sqlite database. Even though it > knows how to get at the data, it can't use it, because the > sqlite3 library won't accept > .../foo-1.0.egg/resources/the_data > (or some such) as a database name, if foo-1.0.egg is a ZIP file. > > If the installed application was a set of files, that would work. > >2. The application embeds an SGML DTD (say, HTML). In order to > perform validation, it invokes nsgmls on the host system. > It cannot pass the SGML catalog to nsgmls (using the -C option) > since you can't refer to DTD files inside ZIP files inside > an SGML catalog. > > If this was a bunch of installed files, it would work. > >3. The application includes an SSL certificate. It can't pass it > to socket.ssl, since OpenSSL expects a host system file name, > not a fragment of a zip file. > > If this was installed as files, it would work. In all of these cases, the applications could use pkg_resources.resource_filename(), which returns a true OS filename, even for files inside of .zip files. Of course, it does this by extracting to a cache directory in such cases, and is only suitable for read-only access. But it works fine for such cases as these. Passing a resource directory name results in an operating system directory name being returned as well, with all the contents (recursively extracted) therein. If the application is *not* in a zip file, then resource_filename() simply returns the obvious results by __file__ manipulation, so the author need not concern him or herself with this in code. They just use resource_string() or resource_stream() wherever possible, and resort to resource_filename() when working with tools such as the above, that cannot handle anything but files. From bfulg at pacbell.net Sat Apr 22 01:01:54 2006 From: bfulg at pacbell.net (Brent Fulgham) Date: Fri, 21 Apr 2006 16:01:54 -0700 (PDT) Subject: [Python-Dev] IronPython Performance Message-ID: <20060421230154.1586.qmail@web81205.mail.mud.yahoo.com> A while ago (nearly two years) an interesting paper was published by Jim Hugunin (http://www.python.org/pycon/dc2004/papers/9/) crowing about the significant speed advantage observed in this interpreter running on top of Microsoft's .NET VM stack. I remember being surprised by these results, as Python has always seemed fairly fast for an interpreted language. I've been working on the Programming Language Shootout for a few years now, and after growing tired of the repeated requests to include scores for IronPython I finally added the implementation this week.
Comparison of IronPython (1.0 Beta 5) to Python 2.4.3 [1] and IronPython (1.0 Beta 1) to Python 2.4.2 [2] does not match the results reported in the 2004 paper. In fact, IronPython is consistently 3 to 4 times slower (in some cases worse than that), and seemed to suffer from recursion/stack limitations. Since the shootout runs on a Debian Linux box, I was not able to use Microsoft's CLR; instead, I was pleased to find that Mono successfully ran most of the Shootout testing suite. The 1.0 Beta 1 version of IronPython was built using the compiler chain that ships with Mono 1.1.13.6; the 1.0 Beta 5 version is an 'official' build from the IronPython download site running on Mono 1.1.13.4. I am aware of the following issues that might affect the results: 1) Mono vs. Microsoft's CLR. 2) Python 2.1 vs. Python 2.4 3) Changes in IronPython over the last two years. In addition, there is a small startup cost paid by IronPython since the py_compile libraries were not present in the IronPython beta, preventing me from precompiling the *.py files to *.pyo. However, most of our benchmarks are now sufficiently long that initial startup costs are less significant. I thought these findings might be interesting to some of the Python, IronPython, and Mono developers. Let the flames begin! ;-) -Brent =============================================================== Current results for Python versus IronPython are posted here: [1] http://shootout.alioth.debian.org/sandbox/benchmark.php?test=all?=iron&lang2=python [2] http://shootout.alioth.debian.org/gp4sandbox/benchmark.php?test=all?=iron&lang2=python From pje at telecommunity.com Sat Apr 22 02:45:51 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 21 Apr 2006 20:45:51 -0400 Subject: [Python-Dev] Easy, uncontroversial setuptools->distutils tasks Message-ID: <5.1.1.6.0.20060421202721.01e370a0@mail.telecommunity.com> This is for the folks who wanted to get their feet wet with the setuptools codebase... Setuptools has 4 "convenience" commands that can be trivially ported to the distutils, and 3 of those shouldn't require anything more than copying the code, translating the docs from reST to LaTeX, and changing a couple of imports from 'setuptools' to 'distutils'. The four commands are: alias, rotate, saveopts and setopt. Porting 'alias' will also require you to merge a few lines of alias expansion code from setuptools.dist.Distribution._parse_command_opts() into the corresponding class in distutils.dist, but the others should be entirely self-contained. One of the commands also currently runs the "egg_info" command, but that's just to support setuptools' version-tagging features, which I'm not proposing be ported at the moment. These aren't exactly critical features; they're really just conveniences for people that work with the distutils a lot. Not really setuptools' main audience, in other words. ;) But getting these commands moved over in 2.5 would take away a few hundred lines of code from setuptools and thereby make it about 5% less scary by line count, and about 15% less scary by module count. :) The reference documentation for these commands is in reST format in trunk/sandbox/setuptools.txt. And speaking of the sandbox, if anybody wants to update setuptools.command.__init__ so it doesn't include these commands when running under 2.5, that counts for extra credit. That is, it means you'll have worked on setuptools itself as well as the distutils. :) Any takers? Better hurry, 'cause these modules *will* go fast.
:) It'll probably take you longer to edit the docs than to change the code. From ianb at colorstudy.com Sat Apr 22 02:47:57 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 21 Apr 2006 19:47:57 -0500 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> Message-ID: <44497D3D.2090407@colorstudy.com> Paul Moore wrote: > And no, I don't want to install the 2 versions side-by-side. Ian > Bicking complained recently about the "uncertainty" of multiple > directories on sys.path meaning you can't be sure which version of a > module you get. Well, having 2 versions of a module installed and > knowing that which one is in use depends on require calls which get > issued at runtime worries me far more. These are valid concerns. From my own experience, I don't think setuptools makes it any worse than the status quo, but it certainly doesn't magically solve these issues. And though these issues are intrinsically hard, I think Python makes it harder than it should. For instance, if you really want to be confident about how your libraries are layed out, this script is the most reliable way: http://peak.telecommunity.com/dist/virtual-python.py It basically copies all of Python to a new directory. That this is required to get a self-consistent and well-encapsulated Python setup is... well, not good. Maybe this could be fixed for Python 2.5 as well -- to at least make this isolation easier to apply. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From greg.ewing at canterbury.ac.nz Sat Apr 22 02:50:24 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 12:50:24 +1200 Subject: [Python-Dev] [Python-3000-checkins] r45617 - in python/branches/p3yk/Lib/plat-mac/lib-scriptpackages: CodeWarrior/CodeWarrior_suite.py CodeWarrior/__init__.py Explorer/__init__.py Finder/Containers_and_folders.py Finder/Files.py Finder/Finder_Basics.py Finder In-Reply-To: <9e804ac0604210926i343308a5r67cfc277625ebe6e@mail.gmail.com> References: <9e804ac0604210926i343308a5r67cfc277625ebe6e@mail.gmail.com> Message-ID: <44497DD0.4060707@canterbury.ac.nz> Thomas Wouters wrote: > > On 4/21/06, *guido.van.rossum* <python-3000-checkins at python.org > <mailto:python-3000-checkins at python.org>> wrote: > > somehow changing "import foo" into "from . import foo" ... > caused things to break. Bah. > > Hm, this is possibly a flaw in the explicit relative import mechanism. I don't see that this has anything to do with relative imports. It's just the usual kind of problem you get with 'from ... import ...' and mutual imports. Does import .foo work? -- Greg From greg.ewing at canterbury.ac.nz Sat Apr 22 02:50:30 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 12:50:30 +1200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. 
In-Reply-To: <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> Message-ID: <44497DD6.5010606@canterbury.ac.nz> Alex Martelli wrote: > GMP is covered by LGPL, so must any such derivative work But the wrapper is just using GMP as a library, so it shouldn't be infected with LGPLness, should it? -- Greg From aleaxit at gmail.com Sat Apr 22 02:58:44 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Fri, 21 Apr 2006 17:58:44 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <44497DD6.5010606@canterbury.ac.nz> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> <44497DD6.5010606@canterbury.ac.nz> Message-ID: <e8a0972d0604211758u719407el1c7f0ba2fc1fff1@mail.gmail.com> On 4/21/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: ... > > GMP is covered by LGPL, so must any such derivative work > > But the wrapper is just using GMP as a library, so > it shouldn't be infected with LGPLness, should it? If a lawyer for the PSF can confidently assert that gmpy is not a derivative work of GMP, I'll have no problem changing gmpy's licensing. But I won't make such a call myself: for example, gmpy.c #include's gmp.h and uses (==expands) some of the C macros there defined -- doesn't that make gmpy.o a derived work of gmp.h? I'm quite confident that the concept of "derived work" would not apply if gmpy.so only accessed a gmp.so (or other kinds of dynamic libraries), but I fear the connection is stronger than that, so, prudently, I'm assuming the "derived work" status until further notice. Alex From tjreedy at udel.edu Sat Apr 22 03:13:46 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 21 Apr 2006 21:13:46 -0400 Subject: [Python-Dev] [pypy-dev] Python Software Foundation seeksmentors and students for Google Summer of Code References: <ee2a432c0604200032g4d62440dt1b410b4078f11e7c@mail.gmail.com><4e4a11f80604210012t2edc6813q975b5c2db794c904@mail.gmail.com> <ca471dc20604210206p3195ca9v8c76c9f7a7111fad@mail.gmail.com> Message-ID: <e2c008$fl0$1@sea.gmane.org> "Guido van Rossum" <guido at python.org> wrote in message news:ca471dc20604210206p3195ca9v8c76c9f7a7111fad at mail.gmail.com... > sign up through the Google SoC website! > (code.google.com/soc/) Easier said than done. I emailed soc2006support suggesting that they add something or make whatever I missed more obvious. tjr From greg.ewing at canterbury.ac.nz Sat Apr 22 03:46:36 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 13:46:36 +1200 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <20060421194551.GA3333@rogue.amk.ca> References: <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <20060421145121.GA10419@localhost.localdomain> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <ca471dc20604211054n49b6400fn6600367d1c84ebbd@mail.gmail.com> <20060421194551.GA3333@rogue.amk.ca> Message-ID: <44498AFC.1080406@canterbury.ac.nz> A.M. Kuchling wrote: > Does this detail matter to users of the Decimal module, though? Those > users may well be thinking using the term 'context'. 
Seems to me the most straightforward term should be applied to the object that users are most likely to know about and use. The term "context" is familiar and understandable, whereas "context manager" isn't. I've kind of lost track of why we're having two separate objects, anyway. Is it really necessary? As far as I remember, the original 'with' statement proposal was very simple and quite straightforward to explain and understand. Somewhere along the way it seems to have mutated into a monster that I can't keep in my brain any more. This can't be a good thing. -- Greg From tjreedy at udel.edu Sat Apr 22 03:50:21 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 21 Apr 2006 21:50:21 -0400 Subject: [Python-Dev] patch #1454481 - runtime tunable thread stack size References: <4448C1C2.9010407@bullseye.apana.org.au> Message-ID: <e2c24r$jo7$1@sea.gmane.org> "Andrew MacIntyre" <andymac at bullseye.apana.org.au> wrote in message news:4448C1C2.9010407 at bullseye.apana.org.au... > http://www.python.org/sf/1454481 > > I would like to see this make it in to 2.5. To that end I was hoping to > elicit any review interest beyond Martin and Hye-Shik, both of whom I > thank for their feedback. > > As I can't readily test on Windows (in particular) or Linux I would > appreciate some kind soul(s) actually testing to make sure that the patch > doesn't break builds on those platforms. If you checked it in (after tests pass on your ?mac?, and while being ready to revert), wouldn't the next buildbot cycle do the testing you need? Isn't testing on 'other' platforms what buildbot is for? tjr From bob at redivi.com Sat Apr 22 05:06:57 2006 From: bob at redivi.com (Bob Ippolito) Date: Fri, 21 Apr 2006 20:06:57 -0700 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <e8a0972d0604211758u719407el1c7f0ba2fc1fff1@mail.gmail.com> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> <44497DD6.5010606@canterbury.ac.nz> <e8a0972d0604211758u719407el1c7f0ba2fc1fff1@mail.gmail.com> Message-ID: <C053E7DB-A3BA-441C-BC17-945D08CDD291@redivi.com> On Apr 21, 2006, at 5:58 PM, Alex Martelli wrote: > On 4/21/06, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote: > ... >>> GMP is covered by LGPL, so must any such derivative work >> >> But the wrapper is just using GMP as a library, so >> it shouldn't be infected with LGPLness, should it? > > If a lawyer for the PSF can confidently assert that gmpy is not a > derivative work of GMP, I'll have no problem changing gmpy's > licensing. But I won't make such a call myself: for example, gmpy.c > #include's gmp.h and uses (==expands) some of the C macros there > defined -- doesn't that make gmpy.o a derived work of gmp.h? > > I'm quite confident that the concept of "derived work" would not apply > if gmpy.so only accessed a gmp.so (or other kinds of dynamic > libraries), but I fear the connection is stronger than that, so, > prudently, I'm assuming the "derived work" status until further > notice. Well we already wrap readline, would this really be any worse? Readline is GPL. -bob From ncoghlan at gmail.com Sat Apr 22 05:34:29 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 13:34:29 +1000 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> References: <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> Message-ID: <4449A445.5020606@gmail.com> Phillip J. Eby wrote: > At 10:51 AM 4/21/2006 -0400, A.M. Kuchling wrote: >> On Fri, Apr 21, 2006 at 07:31:35PM +1000, Nick Coghlan wrote: >>> fit the new definition. So we settled on calling them "context managers" >>> instead. >> ... >>> method. Instead, the new term "manageable context" (or simply "context") >>> was introduced to mean "anything with a __context__ method". This was OK, >> Meaning that 'manageable context' objects create and destroy 'context >> managers'... My view is still that 'context manager' is a terrible >> name when used alongside objects called 'contexts': the object doesn't >> manage anything, and it certainly doesn't manage contexts -- in fact >> it's created by 'context' objects. > > And that's more or less why I wrote the documentation the way I did. > > Nick, as I understand your argument, it's that we were previously using the > term "context manager" to mean "thing with __enter__ and __exit__". But > that was *never* my interpretation. > > My understanding of "context manager" was always, "thing that you give to a > with statement". Then why didn't you speak up when the discussion was summarised in PEP 343 for Guido's approval? I said it explicitly: This PEP proposes that the protocol used by the with statement be known as the "context management protocol", and that objects that implement that protocol be known as "context managers". The term "context" then encompasses all objects with a __context__() method that returns a context manager object. (This means that all context managers are contexts, but not all contexts are context managers) I guess a slight ambiguity came in from the fact I didn't spell out that the protocol I was referring to was all three methods with __context__ returning self (i.e. the moral equivalent of the 'iterator protocol'). But the rest of the paragraph doesn't make any sense otherwise. Under Resolved Issues, before the recent changes, it said this: 3. After this PEP was originally approved, a subsequent discussion on python-dev [4] settled on the term "context manager" for objects which provide __enter__ and __exit__ methods, and "context management protocol" for the protocol itself. With the addition of the __context__ method to the protocol, the natural adjustment is to call all objects which provide a __context__ method "contexts" (or "manageable contexts" in situations where the general term "context" would be ambiguous). This is now documented in the "Standard Terminology" section. *This* is what Guido approved, not what is currently written up in the PEP on python.org. > So to me, when we added a __context__ method, we were creating a *new > object* that hadn't existed before, and we moved some methods on to > it. Thus, "context manager" still meant "thing you give to the with > statement" -- and that never changed, from my POV. 
That may have been what you personally thought, but it's not what the PEP said. If you disagreed with the summarisation in the PEP, you should have said so before Guido approved it, or brought it back to python-dev as a discussion about changing the standard terminology rather than just "the PEP's confusing, I want to clear it up" (and completely changing the meaning in the process). > And that's why I see the argument that we've "reversed" the terminology as > bogus: to me it's been consistent all along. We just added another object > *besides* the context manager. I agree with the second part, but not the first part. Originally we only had context managers (objects that managed their own state in their __enter__/__exit__ methods). Jason brought up the point that this excluded decimal.Context objects because it was extremely difficult to produce a thread-safe and nesting-safe __enter__/__exit__ pair to manipulate the decimal context. Without a __context__ method, the decimal module would have had to provide a separate object for use as a context manager. So we added contexts to the PEP - objects that could produce a context manager to work in conjunction with the with statement to manage the state of the context object. Context managers created in this fashion, instead of operating on their own state, actually operate on the state of the context that created them. This is what got reversed in the contextlib docs - the PEP said that context managers work on contexts the same way that iterators work on iterables.The contextlib docs (and the latest version of the PEP) say that contexts manipulate context managers. This is just plain bad English. "context" is a passive noun like "iterable" - it doesn't imply any sort of active operation. "context manager" on the other hand describes an actor doing something, just like "iterator" does. > > Note too that the user of the "with" statement doesn't know that this other > object exists, and in fact sometimes it doesn't actually exist, it's the > same object. None of this is relevant for the with-statement user, only > the context manager. So there's no reason (IMO) to monkey with the > definition of "context manager" as "thing you use in a with statement". Paraphrasing: Note too that the user of the "for" statement doesn't know that this other object exists, and in fact sometimes it doesn't actually exist, it's the same object. None of this is relevant for the for-statement user, only the iterator writer. So there's no reason (IMO) to monkey with the definition of "iterator" as "thing you use in a for statement". "context" is to "context manager" as "iterable" is to "iterator". Why is this a difficult concept? The only difference is that we have a builtin to do the obj.__iter__() call, but have to do obj.__context__() explicitly. And while "thing you use in a with statement" may have been the definition of context manager in your mind, it was never the definition in the PEP. > Now, I get your point about @contextmanager on a __context__ method, and I > agree that that seems backwards at first. What I don't see is how to > change the terminology to handle that subtlety in a way that doesn't muck > up the basically simple definitions that are in place now. Easy: we go back to the definitions we used in the originally approved PEP, where "context" precisely paralleled "iterable" and "context manager" precisely paralleled "iterator". Leverage off people's long experience with iterators and iterables instead of doing something deliberately different. 
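To make that parallel concrete, here is a rough sketch (the Resource and ResourceManager names are invented purely for illustration, and the details follow the protocol as the originally approved PEP described it, not any particular implementation):

class Resource(object):                  # a "context", analogous to an "iterable"
    def __context__(self):               # analogous to __iter__: returns the active object
        return ResourceManager(self)

class ResourceManager(object):           # a "context manager", analogous to an "iterator"
    def __init__(self, resource):
        self.resource = resource
    def __enter__(self):
        # save/acquire state on self.resource
        return self.resource
    def __exit__(self, exc_type, exc_val, exc_tb):
        # restore state on self.resource
        return False                     # a false return means "don't suppress exceptions"

with Resource() as r:                    # the with statement calls __context__(),
    pass                                 # then __enter__(), and __exit__() on the way out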
Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From greg.ewing at canterbury.ac.nz Sat Apr 22 06:11:50 2006 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sat, 22 Apr 2006 16:11:50 +1200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <e8a0972d0604211758u719407el1c7f0ba2fc1fff1@mail.gmail.com> References: <44488AC9.10304@vp.pl> <20060421144635.GB16802@panix.com> <88372AF6-B978-4C08-8773-8D181CC7F83F@gmail.com> <44497DD6.5010606@canterbury.ac.nz> <e8a0972d0604211758u719407el1c7f0ba2fc1fff1@mail.gmail.com> Message-ID: <4449AD06.3080109@canterbury.ac.nz> Alex Martelli wrote: > gmpy.c > #include's gmp.h and uses (==expands) some of the C macros there > defined -- doesn't that make gmpy.o a derived work of gmp.h? Assuming that gmp.h is the header which defines the public interface of the gmp library, any code which uses it as a library will be including gmp.h. So that in itself doesn't imply that you're doing more than using it as a library. -- Greg From tjreedy at udel.edu Sat Apr 22 06:22:51 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 22 Apr 2006 00:22:51 -0400 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> Message-ID: <e2cb2p$f1s$1@sea.gmane.org> "Phillip J. Eby" <pje at telecommunity.com> wrote in message news:5.1.1.6.0.20060421134259.0419f318 at mail.telecommunity.com... I have some general comments which I will not try to tie to specific quotes. 1. Based on comments on c.l.py, the biggest legitimate fact-based (versus personal-taste-based) knock against Python versus, in particular, Perl is the lack of a CPAN-like facility. As I remember, there have even been a few people saying something like "I like Python the language better than Perl, but I won't switch because I love CPAN even more." So I think easier code reuse will boost Python usage. 2. Some think that PEPs are not needed once Guido approves something, on the view that the main purpose of a PEP is to host a discussion to help him decide. But I think you would have slept better if you had written a short PEP explaining the meaning of 'possible addition' and the implication of actual addition. Regardless of the past, I think you should soon (when rested ;-) condense this down to a PEP 'Upgrading Distutils with Setuptools'. Make separate sections for 2.5 upgrades and 2.6 plans. Include the 'uncontroversial tasks' from your later post. Reference this and a few other relevant posts. 3. I think most of the hostility expressed at setuptools really ought to be directed at the multiplicity of x86 *nix distribution formats, which I believe significantly inhibits the market for non-Windows systems. History: I started microcomputing with a Z80 CP/M machine that, like others, had one of numerous vendor-specific, slightly incompatible, technically unneeded, user-contemptuous, 5-1/4 floppy formats. So 3rd party software distribution was difficult (crippled) -- and all those vendors eventually died. A few years later my employer got a desktop Motorola 68000 Unix system mostly to run a specific package. It was a wonderful system, in some respects a couple of decades ahead of DOS and later Windows. But the vendors copied the CP/M mistake, making third-party software distribution across multiple systems non-existent -- and as far as I know, they all died.
So I think a write-one-setup, distribute-everywhere system would be great. 4. Why can't you remove the heuristic and screen-scrape info-search code from the easy_install client and run one spider that would check new/revised PyPI entries, search for missing info, insert it into PyPI when found (and mark the entry eggified), or email the package author or a human search volunteer if it does not find enough? Terry Jan Reedy From rasky at develer.com Sat Apr 22 06:39:07 2006 From: rasky at develer.com (Giovanni Bajo) Date: Sat, 22 Apr 2006 06:39:07 +0200 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> Message-ID: <012d01c665c6$b1274a70$a8b62997@bagio> Phillip J. Eby <pje at telecommunity.com> wrote: > What *should* happen now instead, is a plan for merging setuptools > into the distutils for 2.6. That includes making the decisions about > what "install" and "sdist" should do, and whether backward > compatibility of internal behaviors should be implicit or explicit. +1. > Between 2.5 and 2.6, setuptools should continue to be developed in the > sandbox, and keep the name 'setuptools'. For 2.6, however, we should > merge > the code bases and have setuptools just be an alias. Or, perhaps > what is > now called setuptools should be called "distutils2" and distributed as > such, with "setuptools" only being a legacy name. But regardless, > the plan > should be to have only one codebase for 2.6, and to issue backported > releases of that codebase for at least Python 2.4 and 2.5. +1. > One final item that is a possibility: we could leave pkg_resources in > for 2.5, and add its documentation. This would allow people to begin > using its API to check for installed packages, accessing resources, etc. > I'd be interested in hearing folks' opinions about that, one way or the > other. This would be good. I believe pkg_resources is useful in 2.5 and in no way it represents a not properly integrated layer of additional functionalities (like setuptools is to distutils now). If you sincerely believe that pkg_resources' API is mature enough, I don't see any reason for keeping it off 2.5. Thanks for your hard work! Giovanni Bajo From nnorwitz at gmail.com Sat Apr 22 07:27:59 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 21 Apr 2006 22:27:59 -0700 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <44495513.3080406@v.loewis.de> References: <43FA4A1B.3030209@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> <e2b6fb$681$1@sea.gmane.org> <e2b6vl$86a$1@sea.gmane.org> <44495513.3080406@v.loewis.de> Message-ID: <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> On 4/21/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Thomas Heller wrote: > > I forgot to mention that there are a lot of warnings about conversion > > betweem Py_ssize_t to int - if you want me to fix the obvious ones > > I'll offer to correct some of them from time to time and commit the > > changes. > > Right - they have been there ever since I started (in fact, I started > this entire project *because* of these warnings). You can get them on > x86, too, if you enable /Wp64. In case it wasn't clear, the /Wp64 flag is available in icc (Intel's C compiler). n From pje at telecommunity.com Sat Apr 22 08:19:03 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Sat, 22 Apr 2006 02:19:03 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4449A445.5020606@gmail.com> References: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> At 01:34 PM 4/22/2006 +1000, Nick Coghlan wrote: >Phillip J. Eby wrote: >>At 10:51 AM 4/21/2006 -0400, A.M. Kuchling wrote: >>>On Fri, Apr 21, 2006 at 07:31:35PM +1000, Nick Coghlan wrote: >>>>fit the new definition. So we settled on calling them "context managers" >>>>instead. >>> ... >>>>method. Instead, the new term "manageable context" (or simply "context") >>>>was introduced to mean "anything with a __context__ method". This was OK, >>>Meaning that 'manageable context' objects create and destroy 'context >>>managers'... My view is still that 'context manager' is a terrible >>>name when used alongside objects called 'contexts': the object doesn't >>>manage anything, and it certainly doesn't manage contexts -- in fact >>>it's created by 'context' objects. >>And that's more or less why I wrote the documentation the way I did. >>Nick, as I understand your argument, it's that we were previously using >>the term "context manager" to mean "thing with __enter__ and >>__exit__". But that was *never* my interpretation. >>My understanding of "context manager" was always, "thing that you give to >>a with statement". > >Then why didn't you speak up when the discussion was summarised in PEP 343 >for Guido's approval? I said it explicitly: > > This PEP proposes that the protocol used by the with statement be > known as the "context management protocol", and that objects that > implement that protocol be known as "context managers". The term > "context" then encompasses all objects with a __context__() method > that returns a context manager object. (This means that all context > managers are contexts, but not all contexts are context managers) > >I guess a slight ambiguity came in from the fact I didn't spell out that >the protocol I was referring to was all three methods with __context__ >returning self (i.e. the moral equivalent of the 'iterator protocol'). But >the rest of the paragraph doesn't make any sense otherwise. Because the last time I looked at the PEP, I was trying to make sure that the code samples in the PEP worked with Guido's last-minute decision to go with the return vs. raise protocol that I originally proposed for __exit__, and didn't have the time to sort through the terminology change. Later, when I wrote up documentation, I mostly did it from memory. The next time I looked at the PEP was when AMK asked for clarification. >That may have been what you personally thought, but it's not what the PEP >said. 
If you disagreed with the summarisation in the PEP, you should have >said so before Guido approved it, or brought it back to python-dev as a >discussion about changing the standard terminology rather than just "the >PEP's confusing, I want to clear it up" (and completely changing the >meaning in the process). I changed the PEP because Guido asked me to, right here on Python-Dev, after AMK asked the question and I seconded his guess as to the interpretation. I wouldn't have otherwise checked in changes to a PEP that doesn't have my name on it: http://mail.python.org/pipermail/python-dev/2006-April/063856.html If you have a problem with what I did to the PEP, kindly take it up with Guido. If you have a problem with the documentation I took the time to write and contribute, by all means change it. At this point, I'm getting pretty tired of people of accusing me of violating procedures around here, and I'm past caring what you do or don't call the bloody objects. At least I've gotten contextlib and test_contextlib to actually work, and arranged for there to be *some* documentation for the "with" statement and the contextlib module. Meanwhile, the iterator-iterable analogy is false. You have to be able to iterate over an iterator, but as AMK pointed out, you don't have to be able to pass a [thing having __enter__/__exit__] to a "with" statement. So I was wrong to apply that analogy myself, as are you now. That having been said, I don't think either you or I or even Guido should be the ones to fix the PEP and the docs at this point, as we've all stared at the bloody thing way too long to see it with fresh eyes. So far, AMK is the one who's finding all our screwups, so maybe he should be the one to explain it all to *us*. :) From martin at v.loewis.de Sat Apr 22 08:27:17 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 22 Apr 2006 08:27:17 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> References: <43FA4A1B.3030209@v.loewis.de> <442C16ED.8080502@python.net> <442C1AA6.9030408@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> <e2b6fb$681$1@sea.gmane.org> <e2b6vl$86a$1@sea.gmane.org> <44495513.3080406@v.loewis.de> <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> Message-ID: <4449CCC5.6060409@v.loewis.de> Neal Norwitz wrote: >> Right - they have been there ever since I started (in fact, I started >> this entire project *because* of these warnings). You can get them on >> x86, too, if you enable /Wp64. > > In case it wasn't clear, the /Wp64 flag is available in icc (Intel's C > compiler). It still isn't clear :-) The flags is also available in msvc: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccore/html/vchowWp64Detect64BitPortabilityProblems.asp There is even a checkbox for it in the project settings. 
Regards, Martin From nnorwitz at gmail.com Sat Apr 22 08:33:34 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 21 Apr 2006 23:33:34 -0700 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <4449CCC5.6060409@v.loewis.de> References: <43FA4A1B.3030209@v.loewis.de> <442C2321.9000105@python.net> <4438C97F.50905@v.loewis.de> <e2b34t$qkt$1@sea.gmane.org> <4449157A.4060308@v.loewis.de> <e2b6fb$681$1@sea.gmane.org> <e2b6vl$86a$1@sea.gmane.org> <44495513.3080406@v.loewis.de> <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> <4449CCC5.6060409@v.loewis.de> Message-ID: <ee2a432c0604212333m2f237d5fta1c8c3cd9e3c0d98@mail.gmail.com> On 4/21/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Neal Norwitz wrote: > >> Right - they have been there ever since I started (in fact, I started > >> this entire project *because* of these warnings). You can get them on > >> x86, too, if you enable /Wp64. > > > > In case it wasn't clear, the /Wp64 flag is available in icc (Intel's C > > compiler). > > It still isn't clear :-) The flags is also available in msvc: Glad to see there's still some humour left on py-dev. I didn't say /Wp64 was *only* available in icc. For anyone who thinks I implied msvc, I've got a bridge for sale, just let me know. :-) Cheers, n From pje at telecommunity.com Sat Apr 22 08:36:57 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 02:36:57 -0400 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4449A445.5020606@gmail.com> References: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422023123.042860b8@mail.telecommunity.com> At 01:34 PM 4/22/2006 +1000, Nick Coghlan wrote: >Then why didn't you speak up when the discussion was summarised in PEP 343 >for >Guido's approval? I said it explicitly: >... >That may have been what you personally thought, but it's not what the PEP >said. By the way, Greg Ewing coined the term "context manager", combining my proposals of "resource manager" and "context listener": http://mail.python.org/pipermail/python-dev/2005-July/054607.html http://mail.python.org/pipermail/python-dev/2005-July/054628.html And from this email, it's clear that other people in the discussion interpreted this term to refer to "thing given to the 'with' statement": http://mail.python.org/pipermail/python-dev/2005-July/054615.html However, you seemed to want to call this a "context", even then: http://mail.python.org/pipermail/python-dev/2005-July/054656.html So, if anything is clear from all this, it's that nothing has ever been particularly clear in all this. :) Or more precisely, I think everybody has been perfectly clear, we just haven't really gotten on the same page about which words mean what. ;) From pje at telecommunity.com Sat Apr 22 09:00:32 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Sat, 22 Apr 2006 03:00:32 -0400 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <e2cb2p$f1s$1@sea.gmane.org> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> At 12:22 AM 4/22/2006 -0400, Terry Reedy wrote: >Why can't you remove the heuristic and screen-scrape info-search code >from the easy_install client and run one spider that would check >new/revised PyPI entries, search for missing info, insert it into PyPI when >found (and mark the entry eggified), or email the package author or a human >search volunteer if it does not find enough? I actually considered that at one point. After all, I certainly have the technology. However, I didn't consider it for more than 10 seconds or so. Package authors have no reason to listen to some random guy with a bot -- but they do have reasons to listen to their users, both actual and potential. The problem isn't fundamentally a technical one, but a social one. You can effect social change through technology, but not by being some random guy with a nagging 'bot. Hm, can I nominate myself for the QOTF? :) Seriously, though, posting Cheesecake scores (which include ratings for findability of code, use of distutils, etc.) would be a fine way to achieve the same effect, and if they're part of PyPI itself, they don't give off the same "random guy with a bot" effect. Instead, they are a visible reflection of community standards or values, and influence action through public shame instead of nagging. And shame scales better as the size of a community increases. :) There are actually additional technical and social reasons why I don't believe the bot approach would work or scale well, even if it was clearly a community effort. For example, doing the work *for* package authors would effectively mean supporting them forever, since they would never have a reason to learn to do it themselves. But these other reasons rather pale compared to the chicken-and-egg problem that I'd have faced in trying to kick off such an effort without easy_install already having been established with a sizable base of fan(atic)s. Anyway, it's certainly an attractive idea, and until you brought it up I'd forgotten I had even considered it once. It would be nice if it could work, but I still think adding Cheesecake scores to PyPI would be a better accelerant -- especially because it measures other "qwalitee" factors besides easy_install-ability. And since Cheesecake actually *depends* on easy_install to be able to rate documentation and various other aspects of a package (because it actually uses easy_install to find and fetch a package's source code), you're not going to be able to score at *all* on some factors if you don't make your package findable. Thus, easy_install-ability is a prerequisite to even being able to see how you compare to others. So... let them eat Cheesecake. :) From ncoghlan at gmail.com Sat Apr 22 09:41:19 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 17:41:19 +1000 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> Message-ID: <4449DE1F.7020201@gmail.com> Phillip J. 
Eby wrote: > At 12:22 AM 4/22/2006 -0400, Terry Reedy wrote: >> Why can't you remove the heuristic and screen-scrape info-search code >>from the easy_install client and run one spider that would check >> new/revised PyPI entries, search for missing info, insert it into PyPI when >> found (and mark the entry eggified), or email the package author or a human >> search volunteer if it does not find enough? > > I actually considered that at one point. After all, I certainly have the > technology. > > However, I didn't consider it for more than 10 seconds or so. Package > authors have no reason to listen to some random guy with a bot -- but they > do have reasons to listen to their users, both actual and potential. I'm not sure that's what Terry meant - I took it to mean *make the spider part of PyPI itself*. So, when you do a PyPI upload, PyPI's spider is triggered, trawls through whatever was uploaded, and adds the results of the search to the PyPI entry for later use by easy_install (e.g under a "Easy Install Info" section - or possibly even a separate page). If there are any problems, PyPI emails the person responsible for the package upload. So any problem reports wouldn't be coming from some random guy with a bot, they'd be coming from PyPI itself. Then all the heuristics and screen-scraping would be server-side - all easy_install would have to do is look at the meta-data provided by the PyPI spider. Cheers, Nick. P.S. I still prefer Py-pee-eye to Cheeseshop. The old name makes even more sense if Phillip's heuristic scan gets added :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From mateusz.rukowicz at vp.pl Sat Apr 22 10:07:15 2006 From: mateusz.rukowicz at vp.pl (Mateusz Rukowicz) Date: Sat, 22 Apr 2006 10:07:15 +0200 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <e8a0972d0604211037y22957fdey8796418e707943f1@mail.gmail.com> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <4448B00C.2040901@vp.pl> <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> <44490DBD.3050703@vp.pl> <e8a0972d0604211037y22957fdey8796418e707943f1@mail.gmail.com> Message-ID: <4449E433.1090401@vp.pl> Alex Martelli wrote: >I see "redo Decimal in C" (possibly with the addition of some fast >elementary transcendentals) and "enhance operations on longs" >(multiplication first and foremost, and base-conversions probably >next, as you suggested -- possibly with the addition of some fast >number-theoretical functions) as two separate projects, each of just >about the right magnitude for an SoC project. I would be glad to >mentor either (not both); my preference would be for the former -- it >appears to me that attempting to do both together might negatively >impact both. Remember, it isn't just the coding...: thorough testing, >good docs, accurate performance measurements on a variety of >platforms, ..., are all important part of a project that, assuming >success, it's scheduled to become a core part of Python 2.6, after >all. > > >Alex > > > If it's enough, that's ok for me - I would concentrate on one thing and very likely do it better. 
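For readers who have not met the algorithm being discussed: above a size cutoff, CPython's long multiplication uses Karatsuba's trick of trading four half-size multiplications for three. A deliberately naive, illustrative-only sketch in Python - this is not the longobject.c code, just its shape:

    def _bits(n):
        # rough bit length of n (n > 0); '%x' works for both int and long
        return len('%x' % n) * 4

    def karatsuba(x, y):
        # multiply non-negative integers by splitting each in half:
        # x = xh*2**m + xl, y = yh*2**m + yl, using three recursive products
        if x < 2 ** 64 or y < 2 ** 64:
            return x * y                          # small enough: builtin multiply
        m = min(_bits(x), _bits(y)) // 2          # split point, in bits
        mask = (1 << m) - 1
        xh, xl = x >> m, x & mask
        yh, yl = y >> m, y & mask
        a = karatsuba(xh, yh)                     # high * high
        b = karatsuba(xl, yl)                     # low * low
        c = karatsuba(xh + xl, yh + yl) - a - b   # cross terms, one multiply
        return (a << (2 * m)) + (c << m) + b

    assert karatsuba(3 ** 5000, 7 ** 4000) == 3 ** 5000 * 7 ** 4000

The point being debated is what lies beyond this: FFT-based multiplication is asymptotically better still for very large operands, which is what the plot linked below is comparing.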
My main reason for doing both at the same time was that they would share a lot of code, and since I did quite a bit of research in this area (comparing algorithms of different complexity, comparing different cache-friendly implementations, timing on machines from a P200 up to a dual 3 GHz Xeon), I have quite a bit of experience with that stuff. But anyway, going by Tim Peters' mail, it's not likely that anything will change in long int ;) - I understand that, and don't want to change your Python development policy (actually, I was expecting that once I realized that multiply is Karatsuba-only). Here is a little comparison, made by a friend of mine, of Python vs. my implementation of multiplication (coded in C, without any assembly - I also have an assembly version ;P) http://parkour.pl/mat.png It computes the product of k*l with log(k) = log(l); the X axis is sqrt(n), where O(n) = O(log k) - in other words, n is the 'length' of the numbers. I think that a decimal coded in C by me would achieve quite similar times (and I would also eliminate the 'staircase' effect), if you wish. I am quite efficiency-minded; in my opinion, efficiency is the third most important thing after easy-to-use code and functionality, and should not be forgotten. I also maintain that efficient code and algorithms aren't necessarily hard to maintain. And all in all, these algorithms are not so complicated (in my opinion, FFT multiplication - which is asymptotically the best - is less complicated than Karatsuba, but that's my opinion). I am now quite sure what I would like to do, and what should be possible for you to accept - code decimal in C. The most important things about that would be: 1. Compatibility with the old decimal 1. Easy-to-maintain, easy-to-extend code 1. Don't break Python portability, and don't add new dependencies 2. Efficiency (if and only if the points numbered 1 aren't compromised) I am quite confident I can achieve these points. But everything is open for discussion ;) I'd like to know as much as possible about what you think of this idea. (But I already know quite a lot.) Best regards, Mateusz Rukowicz. From ncoghlan at gmail.com Sat Apr 22 10:14:43 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 18:14:43 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> References: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> Message-ID: <4449E5F3.3090104@gmail.com> Phillip J. Eby wrote: > If you have a problem with what I did to the PEP, kindly take it up with > Guido. If you have a problem with the documentation I took the time to > write and contribute, by all means change it. At this point, I'm > getting pretty tired of people of accusing me of violating procedures > around here, and I'm past caring what you do or don't call the bloody > objects. At least I've gotten contextlib and test_contextlib to > actually work, and arranged for there to be *some* documentation for the > "with" statement and the contextlib module.
I'm not trying to diminish the work you've done to make this happen - I *did* review those docs after you put them in, and completely missed the discrepancy between them and the wording in the PEP. So the current confusion is at least as much my fault as anyone else's :) The one thing I wasn't sure of after AMK brought it up was whether or not there'd been an offline discussion at PyCon that had made the change on purpose. > Meanwhile, the iterator-iterable analogy is false. You have to be able > to iterate over an iterator, but as AMK pointed out, you don't have to > be able to pass a [thing having __enter__/__exit__] to a "with" > statement. So I was wrong to apply that analogy myself, as are you now. This is only true if we're happy for calling ctx.__context__() explicitly to produce something unusable. i.e., just as these are equivalent: for x in iterable: pass itr = iter(iterable) for x in itr: pass I believe these should be equivalent: with ctx as foo: pass ctx_mgr = ctx.__context__() with ctx_mgr as foo: pass The only way for that to happen is if context managers all have a __context__() method that returns self. > That having been said, I don't think either you or I or even Guido > should be the ones to fix the PEP and the docs at this point, as we've > all stared at the bloody thing way too long to see it with fresh eyes. > So far, AMK is the one who's finding all our screwups, so maybe he > should be the one to explain it all to *us*. :) Heh. I actually had to go trawling back through the python-dev archives to figure out whether or not I was going nuts :) Alternatively, I could have a go at clearing it up for next week's alpha2, and we can ask Anthony to make an explicit request for review of those docs in the announcement. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From guido at python.org Sat Apr 22 11:04:28 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Apr 2006 10:04:28 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4449A445.5020606@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <4449A445.5020606@gmail.com> Message-ID: <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com> On 4/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > *This* is what Guido approved, not what is currently written up in the PEP on > python.org. Nick, please get unstuck on the "who said what when and who wasn't listening" thing. I want this to be resolved to use the clearest terminology possible. As you can clearly tell from my recent posts I'm not sure what's best myself. So stop beating people over the head with "Guido approved X". I can't decide this myself -- you and Phillip have to find a way to agree on one version or the other, that's the only pronouncement you will get from me. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at pythonware.com Sat Apr 22 12:05:41 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 12:05:41 +0200 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> Message-ID: <e2cv5o$mpf$1@sea.gmane.org> Phillip J. Eby wrote: > The problem isn't fundamentally a technical one, but a social one. You can > effect social change through technology, but not by being some random guy > with a nagging 'bot. > Seriously, though, posting Cheesecake scores (which include ratings for > findability of code, use of distutils, etc.) would be a fine way to achieve > the same effect, and if they're part of PyPI itself, they don't give off > the same "random guy with a bot" effect. like "some random bozos who likes play code nazis on the internet" is better than "a random guy with a bot". sheesh. does anyone know if this kind of non-productive control- freakery is common over in Ruby land? </F> From fredrik at pythonware.com Sat Apr 22 12:07:54 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 12:07:54 +0200 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> Message-ID: <e2cv9t$n24$1@sea.gmane.org> Terry Reedy wrote: > 1. Based on comments on c.l.py, the biggest legitimate fact-based (versus > personal-taste-based) knock again Python versus, in particular, Perl is the > lack of a CPAN-like facility. As I remember, there have even been a few > people say something like "I like Python the language better that Perl, but > I won't switch because I love CPAN even more." when did anyone last say that? I thought Perl-to-Python migration flame wars was a Y2K thing? </F> From guido at python.org Sat Apr 22 12:24:48 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Apr 2006 11:24:48 +0100 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <e2cv9t$n24$1@sea.gmane.org> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> Message-ID: <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> On 4/22/06, Fredrik Lundh <fredrik at pythonware.com> wrote: > Terry Reedy wrote: > > > 1. Based on comments on c.l.py, the biggest legitimate fact-based (versus > > personal-taste-based) knock again Python versus, in particular, Perl is the > > lack of a CPAN-like facility. As I remember, there have even been a few > > people say something like "I like Python the language better that Perl, but > > I won't switch because I love CPAN even more." > > when did anyone last say that? I thought Perl-to-Python migration > flame wars was a Y2K thing? Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be diverse -- yes, I've heard people say that this is something that Python sorely lacks; but I've also heard from more than one person that CPAN sucks from a quality perspective. So I think we shouldn't focus on emulating CPAN; rather, we should solve the problems we actually have. I note that CPAN originated in an age before the web was mature. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From fredrik at pythonware.com Sat Apr 22 12:34:34 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 12:34:34 +0200 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com><e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> Message-ID: <e2d0rs$r30$1@sea.gmane.org> Guido van Rossum wrote: > Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be > diverse -- yes, I've heard people say that this is something that > Python sorely lacks; but I've also heard from more than one person > that CPAN sucks from a quality perspective. So I think we shouldn't > focus on emulating CPAN; rather, we should solve the problems we > actually have. the first problem seems to be to define what those problems really are ;-) (as for the CPAN quality, any public repository will end up being full of crap; I don't see any way to work around that. automatic scoring based on superficial aspects or ratings by small numbers of anonymous visitors are probably among the worst ways to distinguish crap from good stuff; for quality, you need initiatives like http://code.enthought.com/enthon/ and other "fat python" projects. including the standard python.org distribution, of course.) </F> From ncoghlan at gmail.com Sat Apr 22 12:35:38 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 20:35:38 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <5.1.1.6.0.20060422023123.042860b8@mail.telecommunity.com> References: <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <4448A677.5090905@gmail.com> <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060418153521.03f8ff98@mail.telecommunity.com> <20060418200131.GA12715@localhost.localdomain> <44462655.60603@gmail.com> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422023123.042860b8@mail.telecommunity.com> Message-ID: <444A06FA.9000308@gmail.com> Phillip J. Eby wrote: > So, if anything is clear from all this, it's that nothing has ever been > particularly clear in all this. :) > > Or more precisely, I think everybody has been perfectly clear, we just > haven't really gotten on the same page about which words mean what. ;) +1 QOTT (Quote of the Thread) :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From fredrik at pythonware.com Sat Apr 22 12:34:54 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 12:34:54 +0200 Subject: [Python-Dev] setuptools in 2.5. 
References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org><4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org><79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> <44497D3D.2090407@colorstudy.com> Message-ID: <e2d0sf$rb0$1@sea.gmane.org> Ian Bicking wrote: > For instance, if you really want to be confident about how your libraries > are layed out, this script is the most reliable way: > http://peak.telecommunity.com/dist/virtual-python.py note the use of "this script is the most reliable way", not "something like this script", or "you have to do this, see e.g." it's pretty easy to get the feeling that you and Phillip seem to think that you're the only ones who have ever addressed these problems, and that your solutions are automatically superior to anyone elses... (frankly, do you think there's any experienced developer out there whos first thought when asked the question "how do I create a tightly controlled Python environment" isn't either "can I solve this by tweaking sys.path in my application?" or "disk space is cheap, bugs are expensive; let's use a separate install", spends 15 minutes setting that up, checks in the result, and goes back to working on the hard stuff...) </F> From fredrik at pythonware.com Sat Apr 22 13:09:50 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 13:09:50 +0200 Subject: [Python-Dev] Why are contexts also managers? (was r45544 -peps/trunk/pep-0343.txt) References: <20060418185518.0359E1E407C@bag.python.org><44462655.60603@gmail.com> <444636DF.8090506@gmail.com><20060419142105.GA32149@rogue.amk.ca><5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com><44477557.8090707@gmail.com><ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com><4448A677.5090905@gmail.com><5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com><4449A445.5020606@gmail.com> <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com> Message-ID: <e2d2u0$1mq$1@sea.gmane.org> Guido van Rossum wrote: > Nick, please get unstuck on the "who said what when and who wasn't > listening" thing. I want this to be resolved to use the clearest > terminology possible. which probably means that the words "context" and "manager" shouldn't be used at all ;-) "space" and "potato", perhaps? like in http://tinyurl.com/k5spk ? </F> From p.f.moore at gmail.com Sat Apr 22 13:40:49 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 22 Apr 2006 12:40:49 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <4449E5F3.3090104@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> <4449E5F3.3090104@gmail.com> Message-ID: <79990c6b0604220440rdc2a81bwf5e23df02d134444@mail.gmail.com> On 4/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > Alternatively, I could have a go at clearing it up for next week's alpha2, and > we can ask Anthony to make an explicit request for review of those docs in the > announcement. . . I've just had a *very* quick scan through the 2.5a1 documentation. I did not look at the PEP, just the official documentation. 
I've been reading the messages going round on the subject, but I'm getting pretty confused, so I'd still count myself as "unprejudiced"... :-) My immediate reaction was that the docs make reasonable sense: - a context is a thing with enter/exit methods (a block of code is "in" a context) - the with statement delimits the block which is in a context - the with statement asks a context manager for the context in which the block runs - context managers have __context__ methods to produce contexts (they manage the production of explicit context objects) The contextlib.contextmanager decorator starts off looking fine as well: @contextmanager def tag(name): print "<%s>" % name yield print "</%s>" % name Yes, that's a context manager - you pass it to a with statement: >>> with tag("h1"): ... print "foo" ... <h1> foo </h1> But then things go wrong: class Tag: def __init__(self, name): self.name = name @contextmanager def __context__(self): print "<%s>" % self.name yield self print "</%s>" % self.name h1 = Tag("h1") That's bizarre: __context__ isn't the context manager I'm trying to create - those are the instances of Tag. I think this is where the terminology comes unstuck, and it's simply because this is an "abuse" (a bit strong, that - bear with me) of the contextmanager decorator. The thing is, __context__ should be *a function which returns a context*. But we defined it with the decorator as a context manager - an object whose __context__ method produces a context! It works, because context managers produced by the decorator return themselves - that is, they are both context managers and contexts....... No, I just got lost. BUT - the point is that everything was fine until the point where the __context__ method got defined using @contextmanager. Maybe all we need is to have *two* decorators - @context to generate a context (this would be @contextmanager but without the __context__ method) and @contextmanager as now (actually, it only needs the __context__ method - the __enter__ and __exit__ methods are only present to allow the trick of returning self from __context__). Then, the definitions are easy: context manager - has __context__ producing a context context - has __enter__ and __exit__ methods, used by the with statement Things with all 3 methods are just a convenience trick to avoid defining 2 objects - there's no *need* for them (unlike iterators, where "iter(it) is it" is an important defining characteristic of an iterator over an iterable). So my proposal: - use the definitions above - modify contextlib to have 2 decorators - @contextmanager producing a context manager, and @context producing a context. They can be the same under the hood, using an object that defines all 3 methods, but that's just an implementation detail (trick) - amend the documentation of the Tag example in the contextlib docs to use the @context decorator. - tidy up the PEP to reflect this approach Or alternatively, I'm just confused, like the rest of you :-) Paul. From ncoghlan at gmail.com Sat Apr 22 14:26:01 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 22:26:01 +1000 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 -peps/trunk/pep-0343.txt) In-Reply-To: <e2d2u0$1mq$1@sea.gmane.org> References: <20060418185518.0359E1E407C@bag.python.org><44462655.60603@gmail.com> <444636DF.8090506@gmail.com><20060419142105.GA32149@rogue.amk.ca><5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com><44477557.8090707@gmail.com><ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com><4448A677.5090905@gmail.com><5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com><4449A445.5020606@gmail.com> <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com> <e2d2u0$1mq$1@sea.gmane.org> Message-ID: <444A20D9.5010607@gmail.com> Fredrik Lundh wrote: > Guido van Rossum wrote: > >> Nick, please get unstuck on the "who said what when and who wasn't >> listening" thing. Sorry about that. I was just trying to figure out how we got to where we are. I stopped paying close attention to PEP 343 developments a few months back, and ended up catching up out loud here on the list. . . > I want this to be resolved to use the clearest >> terminology possible. I'm planning to have one go at it before next week's 2nd alpha (making sure the source code, library reference, language reference and PEP are all at least superficially consistent), and then asking Anthony to include something in the 2nd alpha announcement explicitly requesting review of these docs. As Phillip pointed out, we need input from people that haven't been intimately involved in the PEP 343 discussions to see if the final docs actually make sense. As I discovered in reviewing the contextlib docs, it turned out to be awfully easy for me to see what I expected to see rather than what was actually there. > which probably means that the words "context" and "manager" shouldn't > be used at all ;-) > > "space" and "potato", perhaps? > > like in http://tinyurl.com/k5spk ? That's beautiful. It even matches the PEP [1] :) Cheers, Nick. [1] http://tinyurl.com/pc5uq -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ronaldoussoren at mac.com Sat Apr 22 14:36:12 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sat, 22 Apr 2006 14:36:12 +0200 Subject: [Python-Dev] New artwork for the osx port Message-ID: <E8B8787B-A019-49AD-92B1-BE1BA8D003FD@mac.com> Hi, Over on the pythonmac-sig list we're getting close a new set of icons based on the new python.org logo. What would be needed to get these icons into the python distribution? Does the author of these icons need to donate them to the PSF or is there some other procedure? Ronald From ncoghlan at gmail.com Sat Apr 22 15:10:57 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 23:10:57 +1000 Subject: [Python-Dev] Why are contexts also managers? 
(was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <79990c6b0604220440rdc2a81bwf5e23df02d134444@mail.gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <444636DF.8090506@gmail.com> <20060419142105.GA32149@rogue.amk.ca> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> <4449E5F3.3090104@gmail.com> <79990c6b0604220440rdc2a81bwf5e23df02d134444@mail.gmail.com> Message-ID: <444A2B61.4030408@gmail.com> Paul Moore wrote: > On 4/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote: >> Alternatively, I could have a go at clearing it up for next week's alpha2, and >> we can ask Anthony to make an explicit request for review of those docs in the >> announcement. . . > > I've just had a *very* quick scan through the 2.5a1 documentation. I > did not look at the PEP, just the official documentation. I've been > reading the messages going round on the subject, but I'm getting > pretty confused, so I'd still count myself as "unprejudiced"... :-) Thanks for doing that. I got lost in a maze of twisty contexts all alike around the same place you did, so we apparently need to do something different somewhere. So I'm going to express my gratitude by asking you to read the same docs all over again in a few days time :) > My immediate reaction was that the docs make reasonable sense: > > - a context is a thing with enter/exit methods (a block of code is > "in" a context) > - the with statement delimits the block which is in a context > - the with statement asks a context manager for the context in which > the block runs > - context managers have __context__ methods to produce contexts (they > manage the production of explicit context objects) I'll be making a pass through the docs (and PEP) this weekend using the definitions: - a context manager is a thing with enter/exit methods (it sets up and tears down an execution context for a block of code) - the with statement delimits the block which is in an execution context - the with statement asks a context object for a context manager to set up and tear down an execution context when the block runs - context objects have a __context__ method to produce context managers (hey, it isn't really that much worse than using the __iter__ method to ask an iterable for an iterator. . .) I'll also add something in which parallels the current "Iterator Types" section in the library reference (only for "Context Types"). The big changes from where we are currently are that: - "execution context" will be introduced for the sundry effects that a context manager may have on the code in the body of a with statement (like affecting how exceptions are handled, redirecting IO, changing the thread's active decimal context, affecting thread synchronisation etc) - "context object" will be used where "context manager" is currently used. This is mainly so that decimal.Context can be safely referred to as being a context object. - "context manager" will be used where "context" is currently used. This is so that the __context__ method returns context managers, which means decorating the generator based ones with @contextlib.contextmanager makes sense. 
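To make those definitions concrete, a minimal sketch in the terminology of that list. The class names are invented for illustration, and the __context__ hand-off reflects the protocol as it stands in the 2.5 alphas and in this thread - the shipped design ends up calling __enter__/__exit__ directly, so the hand-off is spelled out by hand here rather than left to the with statement:

    from __future__ import with_statement        # 2.5 spelling

    class Tracked(object):
        # "context object": the thing named in the with statement; its
        # __context__() hands out a context manager
        def __init__(self, label):
            self.label = label
        def __context__(self):
            return TrackedManager(self.label)

    class TrackedManager(object):
        # "context manager": owns __enter__/__exit__, i.e. the set-up and
        # tear-down of the execution context around the block
        def __init__(self, label):
            self.label = label
        def __enter__(self):
            print "entering", self.label
            return self.label                     # bound by 'as NAME'
        def __exit__(self, exc_type, exc_val, exc_tb):
            print "leaving", self.label
            return False                          # don't suppress exceptions

    ctx = Tracked("demo")
    mgr = ctx.__context__()       # what the with statement is described as doing
    with mgr as name:
        print "working with", name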
I was considering producing a patch instead so it could be reviewed before I changed anything, but I don't think we'll really understand which is clearer until we can review it all together, and documentation patches are difficult to review properly without applying them and rebuilding the docs (which a lot of people aren't set up to do - just ask the effbot ;). If the terminology *still* breaks down with those slightly different definitions, we'll have to try to come up with a third option after the 2nd alpha. I'm really hoping my planned changes actually work out, because if they don't I'm out of ideas for how to make these concepts easier to grok. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From p.f.moore at gmail.com Sat Apr 22 15:23:36 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 22 Apr 2006 14:23:36 +0100 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <444A2B61.4030408@gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> <4449E5F3.3090104@gmail.com> <79990c6b0604220440rdc2a81bwf5e23df02d134444@mail.gmail.com> <444A2B61.4030408@gmail.com> Message-ID: <79990c6b0604220623h76f10626i598591ef1f4ed472@mail.gmail.com> On 4/22/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > So I'm going to express my gratitude by asking you to read the same docs all > over again in a few days time :) No problem. Remind me if I forget... > I'll be making a pass through the docs (and PEP) this weekend using the > definitions: > > - a context manager is a thing with enter/exit methods > (it sets up and tears down an execution context for a block of code) > - the with statement delimits the block which is in an execution context > - the with statement asks a context object for a context manager to set up > and tear down an execution context when the block runs > - context objects have a __context__ method to produce context managers > (hey, it isn't really that much worse than using the __iter__ method to > ask an iterable for an iterator. . .) Sorry, but I don't really like this. I find the idea of a context manager, creating contexts, within which the block in a with statement runs, much more intuitive. As I said, the only issue I have with it is the dual use of the contextmanager decorator (and I think that's fundamental - there are 2 different things going on, and they *should* have different names). But I'll do my best to put away my prejudices and read the new docs as they are written, when they come out. > If the terminology *still* breaks down with those slightly different > definitions, we'll have to try to come up with a third option after the 2nd > alpha. I'm really hoping my planned changes actually work out, because if they > don't I'm out of ideas for how to make these concepts easier to grok. . . Presumably, then, my proposal didn't make things clear to you? I won't comment further on your proposal, as I *want* to avoid thinking about it before I read the docs... Paul. 
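An illustrative aside on the "convenience trick" Paul mentions above: one object with all three methods, whose __context__ simply returns self, which is also what Nick's earlier "with ctx" / "with ctx.__context__()" equivalence requires. The class mirrors the Tag example quoted from the contextlib docs, written out without the decorator:

    from __future__ import with_statement        # 2.5 spelling

    class Tag(object):
        # fills both roles: __context__ for the manager side, plus the
        # __enter__/__exit__ pair itself (the 'iter(it) is it' style trick)
        def __init__(self, name):
            self.name = name
        def __context__(self):
            return self                           # hand back ourselves
        def __enter__(self):
            print "<%s>" % self.name
            return self
        def __exit__(self, exc_type, exc_val, exc_tb):
            print "</%s>" % self.name
            return False

    with Tag("h1") as tag:
        print "foo"

which prints <h1>, then foo, then </h1>, exactly like the generator-based version.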
From ncoghlan at gmail.com Sat Apr 22 15:36:45 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 22 Apr 2006 23:36:45 +1000 Subject: [Python-Dev] Why are contexts also managers? (was r45544 - peps/trunk/pep-0343.txt) In-Reply-To: <79990c6b0604220623h76f10626i598591ef1f4ed472@mail.gmail.com> References: <20060418185518.0359E1E407C@bag.python.org> <5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com> <44477557.8090707@gmail.com> <ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com> <4448A677.5090905@gmail.com> <5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com> <5.1.1.6.0.20060422015828.01e33dc8@mail.telecommunity.com> <4449E5F3.3090104@gmail.com> <79990c6b0604220440rdc2a81bwf5e23df02d134444@mail.gmail.com> <444A2B61.4030408@gmail.com> <79990c6b0604220623h76f10626i598591ef1f4ed472@mail.gmail.com> Message-ID: <444A316D.50206@gmail.com> Paul Moore wrote: > Presumably, then, my proposal didn't make things clear to you? As Phillip said, I'm probably way too close to this to be a good judge of how understandable the terminology is. I just want to make one more attempt before admitting defeat. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From guido at python.org Sat Apr 22 15:52:35 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Apr 2006 14:52:35 +0100 Subject: [Python-Dev] New artwork for the osx port In-Reply-To: <E8B8787B-A019-49AD-92B1-BE1BA8D003FD@mac.com> References: <E8B8787B-A019-49AD-92B1-BE1BA8D003FD@mac.com> Message-ID: <ca471dc20604220652n5c4029afpfb77d90ea29a24bd@mail.gmail.com> On 4/22/06, Ronald Oussoren <ronaldoussoren at mac.com> wrote: > Over on the pythonmac-sig list we're getting close a new set of icons > based on the new python.org logo. What would be needed to get these > icons into the python distribution? Does the author of these icons > need to donate them to the PSF or is there some other procedure? I guess the better place to ask is psf at python.org -- this reaches the PSF board which decides and has expertice about such matters. (And no, I'm not on that list any more -- I've learned to delegate. :-) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From jjl at pobox.com Sat Apr 22 17:07:40 2006 From: jjl at pobox.com (John J Lee) Date: Sat, 22 Apr 2006 15:07:40 +0000 (UTC) Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <e2d0rs$r30$1@sea.gmane.org> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com><e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> <e2d0rs$r30$1@sea.gmane.org> Message-ID: <Pine.LNX.4.64.0604221500000.8504@localhost> On Sat, 22 Apr 2006, Fredrik Lundh wrote: > Guido van Rossum wrote: [...] >> Python sorely lacks; but I've also heard from more than one person >> that CPAN sucks from a quality perspective. So I think we shouldn't [...] > (as for the CPAN quality, any public repository will end up being full > of crap; I don't see any way to work around that. automatic scoring [...] I had assumed Guido was referring to the quality of the infrastructure, including CPAN.pm, rather than the quality of the code stored in CPAN. I've certainly heard at least two people complain about the usability and reliability of the CPAN infrastructure recently, and recall I found it rather unfriendly myself. But that was around 5 years ago; I may simply be wrong or out of date. 
John From guido at python.org Sat Apr 22 17:18:59 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Apr 2006 16:18:59 +0100 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <Pine.LNX.4.64.0604221500000.8504@localhost> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> <e2d0rs$r30$1@sea.gmane.org> <Pine.LNX.4.64.0604221500000.8504@localhost> Message-ID: <ca471dc20604220818ne88c846xf9d49a700b32df51@mail.gmail.com> I was actually referring to the quality of the code. On 4/22/06, John J Lee <jjl at pobox.com> wrote: > On Sat, 22 Apr 2006, Fredrik Lundh wrote: > > Guido van Rossum wrote: > [...] > >> Python sorely lacks; but I've also heard from more than one person > >> that CPAN sucks from a quality perspective. So I think we shouldn't > [...] > > (as for the CPAN quality, any public repository will end up being full > > of crap; I don't see any way to work around that. automatic scoring > [...] > > I had assumed Guido was referring to the quality of the infrastructure, > including CPAN.pm, rather than the quality of the code stored in CPAN. > > I've certainly heard at least two people complain about the usability and > reliability of the CPAN infrastructure recently, and recall I found it > rather unfriendly myself. But that was around 5 years ago; I may simply > be wrong or out of date. > > > John > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ronaldoussoren at mac.com Sat Apr 22 17:28:14 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sat, 22 Apr 2006 17:28:14 +0200 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <44480147.6080709@v.loewis.de> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <4447F629.8000500@v.loewis.de> <0EFC5F7D-43D1-4F07-B8BF-A5FB763AFD81@redivi.com> <44480147.6080709@v.loewis.de> Message-ID: <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> On 20-apr-2006, at 23:46, Martin v. L?wis wrote: > Bob Ippolito wrote: >>> 'There are several binary formats that embody eggs, but the most >>> common >>> is '.egg' zipfile format, because it's a convenient one for >>> distributing >>> projects.' >>> >>> '.egg files are a "zero installation" format for a Python package;' >> >> single modules are also such a "zero installation" format too. So >> what? >> >> You're simply reading things between the lines that aren't there. >> How >> about you describe exactly what parts of the documentation that >> lead you >> to believe that eggs are designed to compete with solutions like >> rpm/msi/deb so that it can be clarified? > > It's not just the documentation: I firmly believe that many people > consider .egg files to be a distribution and package management > format. People have commented that some systems (e.g. OSX) doesn't > have a usable native packager, so setuptools fills a need here. > This shows that people do believe that .egg files are to OSX what > .deb files are to Debian. 
As .egg files work on Debian, too, > it is natural that they compete with .deb. > > Phillip Eby once said (IIRC) that he doesn't want package authors to > learn all the different bdist_* commands (which also require access > to the target systems sometimes), and that they their life gets easier > as they now only have to ship the "native" Python binary packages, > namely .egg files. > > In this view, rpm/msi/deb have no place anymore, and are obsolete. In the view of at least some Linux packagers nobody but they should create system packages anyway. Personally I think that view is misguided, but the view is there. > > I can readily believe that package authors indeed see this as > a simplification, but I also see an increased burden on system > admins in return. > > So if this attitude (Python Eggs are the preferred binary distribution > format) is wrong, it is the attitude that has to change first. Changes > to the documentation follow from that. If the attitude is right, I'll > have to accept that I have a minority opinion. IMHO python eggs are the preferred distribution format for several use cases, but not all. They are very usefull for systems that lack a proper package manager of their own and for managing a developers sandbox. As a sysadminI'd be a lot less inclined to use eggs to install software on a system with a proper package manager (like most linux distributions) because the eggs will then not be visible in the global view of installed software or play nice with vendor software management tools. Ronald From pje at telecommunity.com Sat Apr 22 18:04:43 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 12:04:43 -0400 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <e2d0rs$r30$1@sea.gmane.org> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> Message-ID: <5.1.1.6.0.20060422115700.0448f138@mail.telecommunity.com> At 12:34 PM 4/22/2006 +0200, Fredrik Lundh wrote: >Guido van Rossum wrote: > > > Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be > > diverse -- yes, I've heard people say that this is something that > > Python sorely lacks; but I've also heard from more than one person > > that CPAN sucks from a quality perspective. So I think we shouldn't > > focus on emulating CPAN; rather, we should solve the problems we > > actually have. > >the first problem seems to be to define what those problems really >are ;-) > >(as for the CPAN quality, any public repository will end up being full >of crap; I don't see any way to work around that. automatic scoring >based on superficial aspects The purpose of automated scoring on superficial aspects isn't so much to ensure quality as it is to ensure *accessibility*, both in the sense of being able to install the thing, and meet some basic levels of having documentation. If something is accessible and trivial to install, then the market can decide which packages are better to actually use. >or ratings by small numbers of anonymous >visitors are probably among the worst ways to distinguish crap from >good stuff; for quality, you need initiatives like > > http://code.enthought.com/enthon/ > >and other "fat python" projects. Actually, every project that uses other projects' code can now be a "chubby python" by expressing dependencies. 
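For anyone who hasn't used it, "expressing dependencies" here means the install_requires argument in a setuptools setup script; the project and version names below are made up, but the keywords are the real setuptools ones:

    # setup.py -- hypothetical project, real setuptools keywords
    from setuptools import setup, find_packages

    setup(
        name="ExampleProject",
        version="0.1",
        packages=find_packages(),
        # easy_install reads this and fetches the listed projects as needed;
        # the same metadata is stored in the built egg for other tools to read
        install_requires=["docutils>=0.4", "SomeOtherProject"],
    )

At runtime, pkg_resources works from that same metadata - for example, pkg_resources.require("ExampleProject") activates the project along with its whole dependency chain.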
Really, one of the best ratings of a package's quality (or at least popularity) is going to be how many other projects depend on it. If everybody uploaded eggs to the Cheeseshop, it'd be possible to show links to "projects that use this project's code" by reading the dependency metadata with pkg_resources. (Not to mention "projects that this project uses"). From aahz at pythoncraft.com Sat Apr 22 18:05:35 2006 From: aahz at pythoncraft.com (Aahz) Date: Sat, 22 Apr 2006 09:05:35 -0700 Subject: [Python-Dev] With context, please Message-ID: <20060422160535.GA27426@panix.com> I've been following the with/context discussion, not that closely, but reading all the posts. I also have to write docs on this for Python for Dummies, which I think is going to be the first book out after 2.5. So far, my take is that I want the block of code to be executed in a context. I'm probably going to use that terminology no matter what gets decided here -- I think it's the only sensible way to describe it for newcomers. Aside from that, I don't care all that much. (Actually, we just turned in the first draft, and I haven't talked about context managers at all -- what I said was that EXPR returns a context.) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From pje at telecommunity.com Sat Apr 22 18:18:54 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 12:18:54 -0400 Subject: [Python-Dev] With context, please In-Reply-To: <20060422160535.GA27426@panix.com> Message-ID: <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> At 09:05 AM 4/22/2006 -0700, Aahz wrote: >I've been following the with/context discussion, not that closely, but >reading all the posts. I also have to write docs on this for Python for >Dummies, which I think is going to be the first book out after 2.5. So >far, my take is that I want the block of code to be executed in a >context. I'm probably going to use that terminology no matter what gets >decided here -- I think it's the only sensible way to describe it for >newcomers. Aside from that, I don't care all that much. > >(Actually, we just turned in the first draft, and I haven't talked about >context managers at all -- what I said was that EXPR returns a context.) And what did you say that __context__ returns? From pje at telecommunity.com Sat Apr 22 18:24:53 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 12:24:53 -0400 Subject: [Python-Dev] setuptools in 2.5. In-Reply-To: <e2d0sf$rb0$1@sea.gmane.org> References: <200604201456.13148.anthony@interlink.com.au> <17479.48302.734002.387099@montanaro.dyndns.org> <e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com> <e28tc1$62n$1@sea.gmane.org> <79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com> <44497D3D.2090407@colorstudy.com> Message-ID: <5.1.1.6.0.20060422120536.044918c8@mail.telecommunity.com> At 12:34 PM 4/22/2006 +0200, Fredrik Lundh wrote: >Ian Bicking wrote: > > > For instance, if you really want to be confident about how your libraries > > are layed out, this script is the most reliable way: > > http://peak.telecommunity.com/dist/virtual-python.py > >note the use of "this script is the most reliable way", not "something >like this script", or "you have to do this, see e.g." Picky, picky, picky. As it happens, EasyInstall's documentation used to just explain the steps, and people would complain about how hard it was. 
Ian wrote a script to do it automatically, and I touched it up a bit for distribution. While I personally wouldn't have said it the same way Ian did, there is nonetheless a point to his saying it in that way. If you are giving people help, you don't give ambiguous recommendations. More to the point, you don't tell somebody to reinvent something that already exists. If they were the reinventing type, they'd have already read the documentation and either decided to use the tool or not, to tweak it or not, etc., on their own, rather than asking on a mailing list for help. So your projection of attitudes here has nothing to do with Ian. >(frankly, do you think there's any experienced developer out there >whos first thought when asked the question "how do I create a tightly >controlled Python environment" isn't either "can I solve this by tweaking >sys.path in my application?" or "disk space is cheap, bugs are expensive; >let's use a separate install", spends 15 minutes setting that up, checks >in the result, and goes back to working on the hard stuff...) Clearly, we're not dealing with "experienced" developers, then. Of course even now that it's easy to *do*, some people still gripe that setting up a separate install is too "heavy". (Except the audience for whom the script was intended, who consider it a godsend.) Some people are never satisfied, obviously. Anyway, while we're projecting about people's attitudes, what's with this "Real Programmers Should Build It Themselves" attitude? What are you, some kind of Lisp programmer? ;) From aahz at pythoncraft.com Sat Apr 22 18:25:39 2006 From: aahz at pythoncraft.com (Aahz) Date: Sat, 22 Apr 2006 09:25:39 -0700 Subject: [Python-Dev] With context, please In-Reply-To: <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> References: <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> Message-ID: <20060422162539.GA1408@panix.com> On Sat, Apr 22, 2006, Phillip J. Eby wrote: > At 09:05 AM 4/22/2006 -0700, Aahz wrote: >> >>I've been following the with/context discussion, not that closely, but >>reading all the posts. I also have to write docs on this for Python for >>Dummies, which I think is going to be the first book out after 2.5. So >>far, my take is that I want the block of code to be executed in a >>context. I'm probably going to use that terminology no matter what gets >>decided here -- I think it's the only sensible way to describe it for >>newcomers. Aside from that, I don't care all that much. >> >>(Actually, we just turned in the first draft, and I haven't talked about >>context managers at all -- what I said was that EXPR returns a context.) > > And what did you say that __context__ returns? Whoops, I half-lied. I forgot that my co-author did indeed mention "context manager". Here's the main part (sorry about the missing formatting): The syntax is as follows: with EXPRESSION as NAME: BLOCK The with statement works like this: EXPRESSION returns a value that the with statement uses to create a context (a special kind of namespace). The context is used to execute the BLOCK. The block might end normally, get terminated by a break or return, or raise an exception. No matter which of those things happens, the context contains code to clean up after the block. The as NAME part is optional. If you include it, you can use NAME in your BLOCK Then a bit later: The protocol used by the with statement is called the context management protocol, and objects implementing it are context managers. 
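The kind of example that goes with that description, for what it's worth - both of these are ordinary 2.5 usage once the future import is added, not anything invented for this thread:

    from __future__ import with_statement     # required in Python 2.5

    # EXPRESSION is open(...); the block runs in the context of the open
    # file, and the file is closed however the block exits
    with open("setup.py") as f:
        for line in f:
            print line.rstrip()

    import threading
    lock = threading.Lock()

    with lock:      # the 'as NAME' part really is optional
        pass        # code that needs the lock held; released on the way out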
However, we do not talk at all about __context__(), __enter__(), or __exit__() -- we decided that was too advanced for our audience, we simply wanted to give them the bare basics of using `with`. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From pje at telecommunity.com Sat Apr 22 18:38:13 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 12:38:13 -0400 Subject: [Python-Dev] With context, please In-Reply-To: <20060422162539.GA1408@panix.com> References: <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> At 09:25 AM 4/22/2006 -0700, Aahz wrote: > EXPRESSION returns a value that the with statement uses to create a > context (a special kind of namespace). The context is used to > execute the BLOCK. The block might end normally, get terminated by > a break or return, or raise an exception. No matter which of those > things happens, the context contains code to clean up after the > block. > > The as NAME part is optional. If you include it, you can use NAME > in your BLOCK > >Then a bit later: > > The protocol used by the with statement is called the context > management protocol, and objects implementing it are context > managers. Okay, which means that you agree with AMK and Paul Moore that the thing you pass to "with" is a context manager, and the thing that controls execution is a context. Was that conclusion independently arrived at, or based on reading e.g. the docs I wrote? Obviously, if you guys came up with that terminology on your own, that's a stronger vote in its favor. Btw, the phrase "special kind of namespace" seems wrong to me, since there are no names in a context, and that phrase makes it sound like you get a new scope. Looks to me like you could replace the word "namespace" with "object" without changing the intended effect. (That is, I assume the intended effect was merely to point out you're introducing a new term that the reader is not yet expected to know.) From pje at telecommunity.com Sat Apr 22 18:39:51 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 12:39:51 -0400 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <4449DE1F.7020201@gmail.com> References: <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422113949.01e49820@mail.telecommunity.com> At 05:41 PM 4/22/2006 +1000, Nick Coghlan wrote: >Phillip J. Eby wrote: >>At 12:22 AM 4/22/2006 -0400, Terry Reedy wrote: >>>Why can't you remove the heuristic and screen-scrape info-search code >>>from the easy_install client and run one spider that would check >>>new/revised PyPI entries, search for missing info, insert it into PyPI when >>>found (and mark the entry eggified), or email the package author or a human >>>search volunteer if it does not find enough? >>I actually considered that at one point. After all, I certainly have the >>technology. >>However, I didn't consider it for more than 10 seconds or so. Package >>authors have no reason to listen to some random guy with a bot -- but >>they do have reasons to listen to their users, both actual and potential. 
> >I'm not sure that's what Terry meant - I took it to mean *make the spider >part of PyPI itself*. Which would also be accomplished by using Grig's Cheesecake tool, since it uses easy_install to fetch the source. >Then all the heuristics and screen-scraping would be server-side - all >easy_install would have to do is look at the meta-data provided by the >PyPI spider. Which is certainly attractive from the POV of being able to make changes quickly. However, I forgot to mention another issue, because I was speaking from the point of view of the time when I designed the thing, not the present day. After it was implemented, it has turned out that being able to point easy_install to web pages with a specific collection of packages (e.g. ones built for a specific OS version, or that are tested for a particular purpose, etc.) is *very* useful in practice. And the people who are doing that, are just going to do whatever it takes to make their listing(s) work with easy_install, because that's the whole point for them. So there doesn't have to be unlimited growth of heuristics there. What it basically amounts to, then, is that easy_install heuristics currently only have to chase people who aren't trying to easy_install their packages. For example, I discovered the other day that easy_install can get confused by bdist_dumb distributions. So few people ever distribute bdist_dumb packages that I never ran into that as an issue before now. So I had to update the heuristics to be able to tell from the filename whether a package is likely to be a bdist_dumb. However, if PyPI is doing Cheesecake ratings, there will only be a finite number of such things to deal with, because when people make changes that break their ratings, they'll just fix the problem themselves, as it'll generally be faster than lobbying for new heuristics in easy_install. As the community becomes better educated about making their package links easy to find, the amount of maintenance work needed for easy_install should drop off. Right now, the main reason to add heuristics is to increase compatibility with whatever practices are already out there, in order to leverage the greatest number of existing packages to secure the greatest number of users. From fredrik at pythonware.com Sat Apr 22 19:24:09 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 22 Apr 2006 19:24:09 +0200 Subject: [Python-Dev] setuptools in 2.5. References: <200604201456.13148.anthony@interlink.com.au><17479.48302.734002.387099@montanaro.dyndns.org><e28ggv$o5n$1@sea.gmane.org> <4447E3F0.3080708@colorstudy.com><e28tc1$62n$1@sea.gmane.org><79990c6b0604210450x79010f2aq244a4a088dca0090@mail.gmail.com><44497D3D.2090407@colorstudy.com> <e2d0sf$rb0$1@sea.gmane.org> <5.1.1.6.0.20060422120536.044918c8@mail.telecommunity.com> Message-ID: <e2dort$qun$1@sea.gmane.org> Phillip J. Eby wrote: > >(frankly, do you think there's any experienced developer out there > >whos first thought when asked the question "how do I create a tightly > >controlled Python environment" isn't either "can I solve this by tweaking > >sys.path in my application?" or "disk space is cheap, bugs are expensive; > >let's use a separate install", spends 15 minutes setting that up, checks > >in the result, and goes back to working on the hard stuff...) > > Clearly, we're not dealing with "experienced" developers, then. the paragraph you're quoting used "experienced" in the context of "having solved this problem before". 
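As a rough sketch of the first of those two options (tweaking sys.path from inside the application itself; the 'vendor' directory name is purely illustrative):

    import os
    import sys

    # Put a directory of hand-picked, checked-in packages ahead of everything
    # else on sys.path, so the application only sees the versions it ships with.
    _here = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, os.path.join(_here, 'vendor'))
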
> Of course even now that it's easy to *do*, some people still gripe that > setting up a separate install is too "heavy". what forums are we talking about here? (if this kind of complaints were common on c.l.python, for example, I think I would have noticed...) > Anyway, while we're projecting about people's attitudes, what's with this > "Real Programmers Should Build It Themselves" attitude? because that's what Python is all about: making things so easy that every- one can build things with a minimum of effort, according to their specific requirements. "a lot of action in a small amount of clear code", as some- one once put it. that doesn't rule out helpful libraries and utilities and cookbook examples, but a "we have a prepackaged solution for you, it's the only solution you'll ever need, and you don't really need to know how it works" approach doesn't strike me as very Pythonic. </F> From martin at v.loewis.de Sat Apr 22 22:07:55 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 22 Apr 2006 22:07:55 +0200 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> Message-ID: <444A8D1B.7090008@v.loewis.de> Guido van Rossum wrote: > Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be > diverse -- yes, I've heard people say that this is something that > Python sorely lacks; but I've also heard from more than one person > that CPAN sucks from a quality perspective. So I think we shouldn't > focus on emulating CPAN; rather, we should solve the problems we > actually have. I note that CPAN originated in an age before the web > was mature. My personal problems with CPAN were always of the kind that it recorded too many/too stringent dependencies. I used it over a period of several years on Solaris, roughly two times a year. Each time, the package I wanted to installed depended on another package, this in turn on a third, and some of these eventually on a Perl version more recent than the one I had installed. So CPAN would always *first* install a new version of Perl for me. Sometimes, this would fail, because Perl wouldn't pass its test suite on Solaris. So I did huge downloads, long compilation times, and still didn't get the package installed. I always fixed it by installing the new Perl version manually, and then starting over with CPAN again. I'm not exactly sure why that happened, but I think there are two causes: - when installing a package, the automated download tool should not try to find the most recent version. Instead, it should try to find a version that causes the least amount of changes to my system. - CPAN shouldn't include Perl proper (likewise, the Cheesehop shouldn't include Python proper). If dependencies can't be resolved with the current version, but could be resolved with a later version, the download tool should give up and explain it all. Regards, Martin From amk at amk.ca Sat Apr 22 23:42:43 2006 From: amk at amk.ca (A.M. Kuchling) Date: Sat, 22 Apr 2006 17:42:43 -0400 Subject: [Python-Dev] Adding wsgiref Message-ID: <20060422214243.GA1600@Andrew-iBook2.local> What with all the discussion that resulted from setuptools, we should probably also discuss the suggestion to add wsgiref to the standard library. 
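For readers who have not used the package, a minimal sketch of serving a PEP 333 application through wsgiref.simple_server (the application itself is made up for illustration, and make_server is assumed to be the standalone package's server-construction helper):

    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # Simplest possible WSGI application per PEP 333: one plain-text
        # response regardless of the request.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from wsgiref\n']

    if __name__ == '__main__':
        httpd = make_server('', 8000, application)
        httpd.serve_forever()
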
PEP 356 doesn't have many details about what's under consideration. (wsgiref is an implementation of the WSGI interface defined in PEP 333. I believe the latest version is at <svn://svn.eby-sarna.com/svnroot/wsgiref/src/wsgiref>.) I expect that most of the package would be added. wsgiref.handlers is the heart of it, and needs wsgiref.{headers,util}. wsgiref.simple_server might be debatable; the module docstring warns that the code hasn't been reviewed for security issues, but on the other hand if there's a WSGI library, we do want the available HTTP server to support it. --amk From guido at python.org Sat Apr 22 23:48:10 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Apr 2006 22:48:10 +0100 Subject: [Python-Dev] Adding wsgiref In-Reply-To: <20060422214243.GA1600@Andrew-iBook2.local> References: <20060422214243.GA1600@Andrew-iBook2.local> Message-ID: <ca471dc20604221448w6bfec334y90379c40c60b8bda@mail.gmail.com> On 4/22/06, A.M. Kuchling <amk at amk.ca> wrote: > What with all the discussion that resulted from setuptools, we should > probably also discuss the suggestion to add wsgiref to the standard > library. PEP 356 doesn't have many details about what's under consideration. > > (wsgiref is an implementation of the WSGI interface defined in PEP 333. > I believe the latest version is at > <svn://svn.eby-sarna.com/svnroot/wsgiref/src/wsgiref>.) > > I expect that most of the package would be added. wsgiref.handlers is > the heart of it, and needs wsgiref.{headers,util}. > wsgiref.simple_server might be debatable; the module docstring warns > that the code hasn't been reviewed for security issues, but on the > other hand if there's a WSGI library, we do want the available HTTP > server to support it. I'd like simple_server; I've got an app based on it already. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Sun Apr 23 01:41:00 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 22 Apr 2006 19:41:00 -0400 Subject: [Python-Dev] Why are contexts also managers? (wasr45544 -peps/trunk/pep-0343.txt) References: <20060418185518.0359E1E407C@bag.python.org><44462655.60603@gmail.com> <444636DF.8090506@gmail.com><20060419142105.GA32149@rogue.amk.ca><5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com><44477557.8090707@gmail.com><ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com><4448A677.5090905@gmail.com><5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com><4449A445.5020606@gmail.com> <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com><e2d2u0$1mq$1@sea.gmane.org> <444A20D9.5010607@gmail.com> Message-ID: <e2eeua$kl0$1@sea.gmane.org> "Nick Coghlan" <ncoghlan at gmail.com> wrote in message news:444A20D9.5010607 at gmail.com... > As Phillip pointed out, we need input from people that haven't been > intimately > involved in the PEP 343 discussions OK, here is my attempt to cut the knot. To me, 'context' and 'context manager' can be seen as near synonyms; either could be used to describe the thing that 'governs' the block execution. I (and some other others) prefer the shorter term; yet I can see how someone (you, at least) could prefer the longer, more explicit term. To me, the thing after 'with' that makes the whatever for the block is DEFINITELY not a 'context'; trying to twist context to mean that is a brain twister. Calling it 'context manager' is possible if one interpretes 'manager' instead as a hands-off manager who appoints a foreman to do the actual work and then departs. 
But the term is ambiguous as this discussion has shown. So I propose that the context maker be called just that: 'context maker'. That should pretty clearly not be the context that manages the block execution. An additional source of confusion is that we can name a function got several reasons; among them one is what it is, another is what it returns. For instance, a generic generator for-loop could be written as either of for item in generator_function(): <body> for item in generator(): <body> In context, I think the second reads better, as long as it is clear that the function name 'generator' refers what it returns and not what it is. Similar, a context_maker function could be named any of 'context_maker', 'context_manager', or 'context', with the latter two referring to the return value. In the context of 'with ____ as name:', either of the latter two reads better to me. I would call the decorator @contextmaker since that is what it turns the decorated function into. Well, I hope this slightly different viewpoint is at least a bit helpful. Terry Jan Reedy From tjreedy at udel.edu Sun Apr 23 01:52:06 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 22 Apr 2006 19:52:06 -0400 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com><e2cb2p$f1s$1@sea.gmane.org> <e2cv9t$n24$1@sea.gmane.org> <ca471dc20604220324n31f20ea5o36fc8d3f9d59d30@mail.gmail.com> Message-ID: <e2efj4$m7r$1@sea.gmane.org> "Guido van Rossum" <guido at python.org> wrote in message news:ca471dc20604220324n31f20ea5o36fc8d3f9d59d30 at mail.gmail.com... > Leaving aside the Perl vs. Py thing, opinions on CPAN seem to be > diverse -- yes, I've heard people say that this is something that > Python sorely lacks; but I've also heard from more than one person > that CPAN sucks from a quality perspective. So I think we shouldn't > focus on emulating CPAN; No, we should aim to do better, both in terms of functionality, if that is possible, and contents. Terry Jan Reedy From tjreedy at udel.edu Sun Apr 23 02:12:05 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 22 Apr 2006 20:12:05 -0400 Subject: [Python-Dev] setuptools: past, present, future References: <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com><5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com><5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> <4449DE1F.7020201@gmail.com> <5.1.1.6.0.20060422113949.01e49820@mail.telecommunity.com> Message-ID: <e2egoj$p0h$1@sea.gmane.org> "Phillip J. Eby" <pje at telecommunity.com> wrote in message news:5.1.1.6.0.20060422113949.01e49820 at mail.telecommunity.com... > At 05:41 PM 4/22/2006 +1000, Nick Coghlan wrote: >>I'm not sure that's what Terry meant - I took it to mean *make the spider >>part of PyPI itself*. > > Which would also be accomplished by using Grig's Cheesecake tool, since > it > uses easy_install to fetch the source. I think Nick was much closer to what I meant. Let me try again. As I understood your post, setuptools/easyinstall has some spider, heuristic, and screen-scrape code that tries to fetch info that one would like to have been in PyPI, but is not. I inferred that if the fetched info is not cached anywhere, then mutiple clients would have to repeat the process. Based on this understanding, and cognizant that your project's newly elevated status opens options that you did not have before, I had three related suggestions: 1. 
Move appropriate code from all the clients to one server, either associated with the PyPI server or even that server itself. (Among other things, this would allow you to update heuristics, etc, without distribution to existing clients or worry about bloating them.) 2. Once missing info is discovered, save it so the discovery process is not repeated. 3. If the search fails, email *someone*. I suggested *either* the package author (under an authoritative signature) or a non-author volunteer who could proceed somehow, such as searching more or contacting the author as a human. If my premises above are mistaken, then the suggestions should be modified or discarded. However, I don't see how they conflict at all with a consumer rating system. Terry Jan Reedy From pje at telecommunity.com Sun Apr 23 03:04:57 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 21:04:57 -0400 Subject: [Python-Dev] setuptools: past, present, future In-Reply-To: <e2egoj$p0h$1@sea.gmane.org> References: <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> <5.1.1.6.0.20060421134259.0419f318@mail.telecommunity.com> <5.1.1.6.0.20060422023858.01e46cf8@mail.telecommunity.com> <4449DE1F.7020201@gmail.com> <5.1.1.6.0.20060422113949.01e49820@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060422202939.01e15298@mail.telecommunity.com> At 08:12 PM 4/22/2006 -0400, Terry Reedy wrote: >If my premises above are mistaken, then the suggestions should be modified >or discarded. However, I don't see how they conflict at all with a >consumer rating system. My point was simply that providing rapid, visible feedback to authors results in a shorter feedback loop with less infrastructure. Also, after thinking it over, it's clear that the spidering is never going to be able to come out entirely, because there are lots of valid use cases for people effectively setting up their own mini-indexes. All that will happen is that at some point I'll be able to stop adding heuristics. (Hopefully that point is already past, in fact.) For anybody that wants to know how the current heuristics work, EasyInstall actually only has a few main categories of heuristics used to find packages: * Ones that apply to PyPI * Ones that apply to SourceForge * Ones that interpret distutils-generated filenames * The one that detects when a page is really a Subversion directory, and thus should be checked out instead of downloaded Most of the SourceForge heuristics have been eliminated already, except for the translation of prdownloads.sf.net URLs to dl.sourceforge.net URLs, and automatic retries in the event of a broken mirror. I'm about to begin modifying the PyPI heuristics to use the new XML-RPC interface instead, for the most part. (Although finding links in a package's long description will still be easier via the web interface.) And the distutils haven't started generating any new kinds of filenames lately, although I occasionally run into situations where non-distutils links or obscure corner cases of distutils filenames give problems, or where somebody has filenames that look like they came from the distutils, but the contents aren't a valid distutils source distribution. Anyway, these are the only things that are truly heuristic in the sense that they are attempts to guess well, and there is always the potential for failure or obsolescence if PyPI or SourceForge or Subversion changes, or people do strange things with their links. 
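As an illustration of the third category (interpreting distutils-generated filenames), a deliberately simplistic approximation of the kind of guesswork involved -- this is not easy_install's actual code:

    import re

    _DIST_NAME = re.compile(r'^(?P<name>.+?)-(?P<version>\d[^-]*?)'
                            r'(-py(?P<pyver>\d\.\d))?'
                            r'(-(?P<plat>[^.]+))?'
                            r'\.(?P<ext>tar\.gz|tgz|zip|egg|exe)$')

    def guess_dist_info(filename):
        """Guess project name, version, etc. from a distutils-style filename."""
        m = _DIST_NAME.match(filename)
        return m and m.groupdict()

    # e.g. guess_dist_info('SomePackage-1.0.2-py2.4-win32.egg') recovers
    # name 'SomePackage', version '1.0.2', pyver '2.4', plat 'win32', ext 'egg'.
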
I should probably also point out that calling this "spidering" may give the impression it's more sophisticated than it is. EasyInstall only retrieves pages that it is explicitly given, or which appear in one of two specific parts of a PyPI listing. But it *scans* links on any page that it retrieves, and if a link looks like a downloadable package, it will parse as much info as practical from the filename in order to catalog it as a possible download source. So, it will read HTML from PyPI pages, pages directly linked from PyPI as either "Home" or "Download" URLs, and page URLs you give to --find-links. But it doesn't "spider" anywhere besides those pages, unless you count downloading an actual package link. The whole process more resembles a few quick redirects in a browser, than it does any sort of traditional web spider. From ncoghlan at gmail.com Sun Apr 23 03:43:15 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 23 Apr 2006 11:43:15 +1000 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> References: <e23n9h$pqi$1@sea.gmane.org> <5.1.1.6.0.20060418190025.037db340@mail.telecommunity.com> <44473B5C.3000409@v.loewis.de> <200604201822.45318.anthony@interlink.com.au> <4447E69C.7060903@v.loewis.de> <A998BC3B-6F4C-447F-BFDD-E648E8E73203@redivi.com> <4447F629.8000500@v.loewis.de> <0EFC5F7D-43D1-4F07-B8BF-A5FB763AFD81@redivi.com> <44480147.6080709@v.loewis.de> <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> Message-ID: <444ADBB3.3000007@gmail.com> Ronald Oussoren wrote: > On 20-apr-2006, at 23:46, Martin v. L?wis wrote: >> So if this attitude (Python Eggs are the preferred binary distribution >> format) is wrong, it is the attitude that has to change first. Changes >> to the documentation follow from that. If the attitude is right, I'll >> have to accept that I have a minority opinion. > > IMHO python eggs are the preferred distribution format for several > use cases, but not all. They are very usefull for systems that lack a > proper package > manager of their own and for managing a developers sandbox. > > As a sysadminI'd be a lot less inclined to use eggs to install > software on a system with a proper package manager (like most linux > distributions) because the eggs will then not be visible in the > global view of installed software or play nice with vendor software > management tools. Maybe we need something that's the equivalent of alien (rpm -> dpkg converter), so that given an egg, one can easily get a native installer for that egg. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From tim.peters at gmail.com Sun Apr 23 04:11:22 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 22 Apr 2006 22:11:22 -0400 Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1) In-Reply-To: <20060419090608.GA4346@python.psfb.org> References: <20060419090608.GA4346@python.psfb.org> Message-ID: <1f7befae0604221911x479f7300xb588daba1e566791@mail.gmail.com> [19 Apr 2006, Neal Norwitz] > test_cmd_line leaked [0, 17, -17] references > test_filecmp leaked [0, 13, 0] references > test_threading_local leaked [-93, 0, 0] references > test_urllib2 leaked [-121, 88, 99] references Thanks to Thomas digging into test_threading_local, I checked in what appeared to be a total leak fix for it last week. 
On my Windows box, it's steady as a rock now: """ $ python_d -E -tt ../lib/test/regrtest.py -R:50: test_threading_local test_threading_local beginning 55 repetitions 1234567890123456789012345678901234567890123456789012345 ....................................................... 1 test OK. [27145 refs] """ Is it still flaky on other platforms? If not, maybe the reported test_threading_local leaked [-93, 0, 0] references is due to stuff from a _previous_ test getting cleaned up (later than expected/hoped)? From tim.peters at gmail.com Sun Apr 23 04:41:27 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 22 Apr 2006 22:41:27 -0400 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <4444D6C2.10606@bullseye.apana.org.au> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> <4443F47E.7000103@v.loewis.de> <4444D6C2.10606@bullseye.apana.org.au> Message-ID: <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> [Andrew MacIntyre] > I doubt it has anything to do with this issue, but I just thought I'd > mention something strange I've encountered on Windows XP Pro (SP2) at > work. > > If Python terminates due to an uncaught exception, with stdout & stderr > redirected externally (ie within the batch file that started Python), What batch file? > the files that were redirected to cannot be deleted/renamed until the > system is rebooted. This really needs an executable example. Here's my best guess about what you mean, but I don't see any of what you're describing on WinXP Pro SP2: """ $ type batty.py import sys i = 0 for line in sys.stdin: sys.stdout.write(line) i += 1 if i == 3: raise "uncaught exception" $ type runpy.bat @python batty.py < stdin.txt > stdout.txt 2>stderr.txt $ type stdin.txt a b c d e f g h i $ runpy.bat $ type stdout.txt a b c $ type stderr.txt batty.py:8: DeprecationWarning: raising a string exception is deprecated raise "uncaught exception" Traceback (most recent call last): File "batty.py", line 8, in <module> raise "uncaught exception" uncaught exception $ del stdout.txt $ del stderr.txt $ dir/b std*.txt stdin.txt """ I'll note that stdin.txt can also be deleted when runpy.bat finishes. > If a bare except is used to trap any such exceptions (and the traceback > printed explicitly) so that Python terminates normally, there is no > problem (ie the redirected files can be deleted/renamed etc). > > I've never reported this as a Python bug If you do, I'll probably add a comment like the above ;-) > because I've considered the antivirus SW likely to be the culprit. Could be. FWIW, Norton AV was running during the above. > I don't recall seeing this with Windows 2000, but much was changed > in the transition from the Win2k SOE to the WinXP SOE. What's that? Shitty Out-of-box Experience ;-)? 
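A sketch of the workaround Andrew describes (a bare except at the top level, so the traceback is printed explicitly to the redirected stderr and the interpreter still exits normally); main() is a stand-in for the real program:

    import sys
    import traceback

    def main():
        raise RuntimeError("stand-in for the real application blowing up")

    if __name__ == '__main__':
        try:
            main()
        except:                    # deliberately bare: trap absolutely everything
            traceback.print_exc()  # report the failure on (redirected) stderr
            sys.exit(1)            # ...then terminate via a normal interpreter exit
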
From anthony at interlink.com.au Sun Apr 23 05:19:37 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Sun, 23 Apr 2006 13:19:37 +1000 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <444ADBB3.3000007@gmail.com> References: <e23n9h$pqi$1@sea.gmane.org> <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> <444ADBB3.3000007@gmail.com> Message-ID: <200604231319.44014.anthony@interlink.com.au> On Sunday 23 April 2006 11:43, Nick Coghlan wrote: > Maybe we need something that's the equivalent of alien (rpm -> dpkg > converter), so that given an egg, one can easily get a native > installer for that egg. An 'eggconvert' that used the existing bdist_foo machinery to build system specific packages would probably be an ok summer of code project, no? -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From pje at telecommunity.com Sun Apr 23 05:34:53 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sat, 22 Apr 2006 23:34:53 -0400 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <200604231319.44014.anthony@interlink.com.au> References: <444ADBB3.3000007@gmail.com> <e23n9h$pqi$1@sea.gmane.org> <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> <444ADBB3.3000007@gmail.com> Message-ID: <5.1.1.6.0.20060422232641.01e3d1e8@mail.telecommunity.com> At 01:19 PM 4/23/2006 +1000, Anthony Baxter wrote: >On Sunday 23 April 2006 11:43, Nick Coghlan wrote: > > Maybe we need something that's the equivalent of alien (rpm -> dpkg > > converter), so that given an egg, one can easily get a native > > installer for that egg. > >An 'eggconvert' that used the existing bdist_foo machinery to build >system specific packages would probably be an ok summer of code >project, no? That's probably not going to be the best way to get from an egg to a system package, since a lot of the bdist_foo commands try to build things from source, and an egg for the specific platform is already going to be built, and won't include source (except for Python modules). Probably you'd want to create something more like Vincenzo Di Massa's "easy_deb" program, which uses easy_install to find and fetch a source distribution, then builds a .deb from it. You can currently use: easy_install --editable --build_directory=somewhere SomePackage And it will find SomePackage, and unpack a source distribution into 'somewhere/somepackage'. So you could then change to that directory and run the package's setup.py with any bdist command you wanted to. So, for any bdist_foo command that's already implemented in the distutils, you can already get pretty close to this functionality by using a short shell script that just calls easy_install followed by the setup.py. What you won't get without writing some more code is dependency support. You also have to deal with the issue of mapping PyPI names to the names used by the relevant packaging system. From ncoghlan at gmail.com Sun Apr 23 05:33:26 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 23 Apr 2006 13:33:26 +1000 Subject: [Python-Dev] With context, please In-Reply-To: <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> References: <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> Message-ID: <444AF586.20901@gmail.com> Phillip J. 
Eby wrote: > At 09:25 AM 4/22/2006 -0700, Aahz wrote: >> EXPRESSION returns a value that the with statement uses to create a >> context (a special kind of namespace). The context is used to >> execute the BLOCK. The block might end normally, get terminated by >> a break or return, or raise an exception. No matter which of those >> things happens, the context contains code to clean up after the >> block. >> >> The as NAME part is optional. If you include it, you can use NAME >> in your BLOCK >> >> Then a bit later: >> >> The protocol used by the with statement is called the context >> management protocol, and objects implementing it are context >> managers. > > Okay, which means that you agree with AMK and Paul Moore that the thing you > pass to "with" is a context manager, and the thing that controls execution > is a context. Was that conclusion independently arrived at, or based on > reading e.g. the docs I wrote? Obviously, if you guys came up with that > terminology on your own, that's a stronger vote in its favor. I think I've figured out where you and I went off in different directions with this - when you read "context management protocol" in the PEP you interpreted it as "has a __context__ method that produces an object with __enter__/__exit__ methods", but when I originally added the term "context management protocol" to the PEP what I actually meant was "has __enter__/__exit__ methods and a __context__ method that returns self" (the last part of that definition being added implicitly when the __context__ method was introduced). Starting from that point, I'm no longer surprised you considered the PEP to be inconsistent in its use of the terminology :) As far as I can tell, Aahz's book doesn't currently say anything that favours one interpretation over the other (which is probably a good thing from Aahz's point of view :). In case its not entirely clear why I think Aahz's wording is neutral, it's because in my intended interpretation the context manager sets up and tears down an abstract execution context which is distinct from the concrete context object that provided the manager in the first place. The manager is a mediator that translates from the data passively recorded in the context object to the appropriate active manipulations of the runtime state. I realise the overloading of the term 'context' is potentially confusing, and I didn't clearly acknowledge the distinction myself until the recent discussion. Not acknowledging that distinction appears to have been largely responsible for my manifest failure to document this properly in the PEP. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Sun Apr 23 05:35:30 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 23 Apr 2006 13:35:30 +1000 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <200604231319.44014.anthony@interlink.com.au> References: <e23n9h$pqi$1@sea.gmane.org> <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> <444ADBB3.3000007@gmail.com> <200604231319.44014.anthony@interlink.com.au> Message-ID: <444AF602.5020003@gmail.com> Anthony Baxter wrote: > On Sunday 23 April 2006 11:43, Nick Coghlan wrote: >> Maybe we need something that's the equivalent of alien (rpm -> dpkg >> converter), so that given an egg, one can easily get a native >> installer for that egg. 
> > An 'eggconvert' that used the existing bdist_foo machinery to build > system specific packages would probably be an ok summer of code > project, no? It sounds like an excellent one if you can find the right mentor. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From kirat.singh at gmail.com Sun Apr 23 06:05:30 2006 From: kirat.singh at gmail.com (Kirat Singh) Date: Sun, 23 Apr 2006 00:05:30 -0400 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash Message-ID: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> Hi, this is my first python dev post, so please forgive me if this topic has already been discussed. It seemed to me that removing me_hash from a dict entry would save 2/3 of the space used by dictionaries and also improve alignment of the entries since they'd be 8 bytes instead of 12. And sets end up having just 4 byte entries. I'm guessing that string dicts are the most common (hence the specialized lookupdict_string routine), and since strings already contain their hash, this would probably mitigate the performance impact. One could also add a hash to Tuples since they are immutable. If this isn't a totally stupid idea, I'd be happy to volunteer to try the experiment and run any suggested tests. thanks! -Kirat PS any opinion on making _Py_StringEq a macro? inline function would be nice but I hesitate to bring up the C/C++ debate, both languages suck in their own special way ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060423/3d5a1589/attachment.htm From ncoghlan at iinet.net.au Sun Apr 23 07:27:11 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Sun, 23 Apr 2006 15:27:11 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) Message-ID: <444B102F.5060201@iinet.net.au> For those not following along at home, I've now updated PEP 343 to clarify my originally intended meanings for various terms, and to record the fact that we don't currently have a consensus on python-dev that those are the right definitions. As written up in the PEP, I plan to propagate those interpretations throughout the documentation and implementation for 2.5a2, so we have at least one release using my original vision to see if the terminology actually all hangs together sensibly the way I believe it does :) Anthony, I'd also like it if we could include something specific in the release announcement asking folks to go over the relevant documentation so we have some feedback to work with on whether the updated documentation makes sense or not. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From pje at telecommunity.com Sun Apr 23 08:03:40 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Sun, 23 Apr 2006 02:03:40 -0400 Subject: [Python-Dev] With context, please In-Reply-To: <444AF586.20901@gmail.com> References: <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060423014519.01e3f340@mail.telecommunity.com> At 01:33 PM 4/23/2006 +1000, Nick Coghlan wrote: >Phillip J. Eby wrote: >>At 09:25 AM 4/22/2006 -0700, Aahz wrote: >>> EXPRESSION returns a value that the with statement uses to create a >>> context (a special kind of namespace). The context is used to >>> execute the BLOCK. The block might end normally, get terminated by >>> a break or return, or raise an exception. No matter which of those >>> things happens, the context contains code to clean up after the >>> block. >>> >>> The as NAME part is optional. If you include it, you can use NAME >>> in your BLOCK >>> >>>Then a bit later: >>> >>> The protocol used by the with statement is called the context >>> management protocol, and objects implementing it are context >>> managers. >As far as I can tell, Aahz's book doesn't currently say anything that >favours one interpretation over the other (which is probably a good thing >from Aahz's point of view :). Read the first sentence again: "EXPRESSION returns a value that the with statement uses to *create* a context" (emphasis added). It doesn't say that the value *is* the context, and if anything, the second excerpt supports that by implying that the context manager is the thing passed to the "with" statement. I could be wrong, Nick, but it really looks to me like you're the only person who's gotten this particular interpretation, and I at least don't understand what the supporting argument for your interpretation is, other than that it's what you always meant it to be. (Whereas there are obvious rationales for __context__ returning a "context" object, and for a context manager being longer lived than contexts.) I also haven't seen you explain your theory of why you should be able to take the object returned from __context__ and pass it into the "with" statement. Given those things, I'd suggest that the consensus is overwhelmingly in favor of the "with contextmanager" terminology, or something similar, and that should be what gets used. I don't think it's fair to say that there's a lack of consensus in the sense that it could go either way like a 50-50 or 60-40. So far, it's 5:1 against the object passed to "with" being called the "context", even if Guido abstains. Also, there haven't been any complaints of the a1 documentation being unclear on this; rather, it was the *PEP* that was considered confusing. So in effect, we already tested this in a1, with one interpretation in the docs and another in the PEP, and the documenation won. From martin at v.loewis.de Sun Apr 23 08:45:41 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 23 Apr 2006 08:45:41 +0200 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> Message-ID: <444B2295.6020306@v.loewis.de> Kirat Singh wrote: > Hi, this is my first python dev post, so please forgive me if this topic > has already been discussed. 
To my knowledge, this hasn't been discussed before. > It seemed to me that removing me_hash from a dict entry would save 2/3 > of the space used by dictionaries and also improve alignment of the > entries since they'd be 8 bytes instead of 12. And sets end up having > just 4 byte entries. How did you compute the saving of 2/3? If one field goes and two fields stay, it saves 1/3, no? Also, the "improved" alignment isn't that much of an improvement on a 32-bit system, AFAICT. Likewise, on a 64-bit system, me_hash consumes 8 bytes (regardless of whether a long is 4 or 8 bytes), so the saving would be 1/3, and the alignment would change (not improve) from 8 to 16. > I'm guessing that string dicts are the most common (hence the > specialized lookupdict_string routine), and since strings already > contain their hash, this would probably mitigate the performance impact. > One could also add a hash to Tuples since they are immutable. You implicitly give the reason for the me_hash field: performance. This is a classical space-vs-time trade-off, currently leaning towards more-space-less-time. You want the trade-off be the other way: less-space-more-time. As you might not be aware of how much more time it will take, consider these uses of the me_hash field: - this is open addressing, so when performing a lookup 1. computes the hash of the key 2. looks at the address according to the hash 3. If there is no key at that address, lookup fails (and a free slot is found) 4. if the hash of the entry at that address is equal, it compares the keys. If they are equal, lookup succeeds. 5. If the hashes are not equal, or the keys are not equal, an alternative address is computed, and search continues with step 3. Under your algorithm, you would have additional object comparisons in case of hash values that translate to the same address. This might slow down dictionaries that are close to being full. - when the dictionary is resized, all entries need to be rehashed. So if me_hash is removed, you have many more PyObject_Hash calls on resizing. This would likely slow down resizing significantly. > If this isn't a totally stupid idea, I'd be happy to volunteer to try > the experiment and run any suggested tests. I don't see much value in saving a few bytes, and I expect that performance will go down noticeably. Still, only an experiment could show what the real impact is. Regards, Martin From ncoghlan at gmail.com Sun Apr 23 09:36:50 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 23 Apr 2006 17:36:50 +1000 Subject: [Python-Dev] With context, please In-Reply-To: <5.1.1.6.0.20060423014519.01e3f340@mail.telecommunity.com> References: <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> <5.1.1.6.0.20060423014519.01e3f340@mail.telecommunity.com> Message-ID: <444B2E92.5010707@gmail.com> Phillip J. Eby wrote: > Read the first sentence again: > > "EXPRESSION returns a value that the with statement uses to *create* a > context" (emphasis added). > > It doesn't say that the value *is* the context, and if anything, the > second excerpt supports that by implying that the context manager is the > thing passed to the "with" statement. Things become confused because "context" gets used two different ways: to refer to the context object itself, and the runtime state that the context manager creates. 
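A sketch of that distinction, with all class and helper names invented, following the shape the 2.5a1 implementation uses (the object named in the with statement supplies __context__(), and the object it returns supplies __enter__()/__exit__()):

    # Hypothetical process-wide state that the with statement will manipulate.
    CURRENT_SETTINGS = {}

    class Settings(object):
        """The passive 'context' object: just a bundle of recorded state."""
        def __init__(self, **values):
            self.values = values
        def __context__(self):
            return _SettingsManager(self)    # hands off to a manager

    class _SettingsManager(object):
        """Performs the actual runtime manipulation on entry and exit."""
        def __init__(self, settings):
            self.settings = settings
        def __enter__(self):
            self.saved = dict(CURRENT_SETTINGS)
            CURRENT_SETTINGS.update(self.settings.values)
            return self.settings
        def __exit__(self, exc_type, exc_val, exc_tb):
            CURRENT_SETTINGS.clear()
            CURRENT_SETTINGS.update(self.saved)

    # Simpler objects (locks, files) instead return themselves from __context__
    # and carry __enter__/__exit__ directly, so a single object plays both roles.
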
Every case I've seen it used in user-oriented documentation it has referred to the latter. There may be only 2 concrete objects involved, but there's actually three entities of interest - the context object, the context manager, and the nebulous runtime state that makes up the actual "execution context" (i.e. the things the context manager *does* based on the current state of the context object). > Also, there haven't been any complaints of the a1 documentation being > unclear on this; rather, it was the *PEP* that was considered > confusing. So in effect, we already tested this in a1, with one > interpretation in the docs and another in the PEP, and the documenation > won. Except that it breaks down when you actually try to line it up with the code (which is where Paul got stuck when reviewing the documentation yesterday): - decimal.Context: - is not a context according to the current docs - decimal.Context.__context__() returns a decimal.ContextManager object - contextlib.contextmanager() - is actually used to define contexts according to the current docs - but returns a GeneratorContextManager object - contextlib.nested() - uses "contexts" as the name for its arguments - uses "context" as the iteration variable - uses "mgr" as the name for the result of context.__context__() - Compilation/evaluation chain - First clause in With statement is named "context_expr" - parser takes its cue from the AST definition - AST compiler refers to context.__exit__/__enter__ in a comment - ceval refers to context.__exit__ in a comment So the problem is not that the documentation disagrees with the PEP, it's that it disagrees with the *implementation*. contextlib.py, decimal.py, Python.asdl and ast.c consistently use "context" to refer to the original objects provided to the with statement and "context manager" to refer to the return value of the __context__ method. compile.c is confused either way, since it uses "context" for both objects, and ceval.c appears to be taking its cue from compile.c :) (threading.py and fileobject.c just provide the methods directly, so they don't mention the terminology at all) What I'm trying for alpha 2 is to bring the documentation in line with the implementation, because I think that's the root of all of the current confusion. If things still break down, then we can look at changing both the documentation and the implementation to use the alpha 1 terminology post-alpha 2. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From baptiste13 at altern.org Sun Apr 23 14:10:58 2006 From: baptiste13 at altern.org (Baptiste Carvello) Date: Sun, 23 Apr 2006 14:10:58 +0200 Subject: [Python-Dev] Why are contexts also managers? 
(wasr45544 -peps/trunk/pep-0343.txt) In-Reply-To: <e2eeua$kl0$1@sea.gmane.org> References: <20060418185518.0359E1E407C@bag.python.org><44462655.60603@gmail.com> <444636DF.8090506@gmail.com><20060419142105.GA32149@rogue.amk.ca><5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com><44477557.8090707@gmail.com><ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com><4448A677.5090905@gmail.com><5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com><4449A445.5020606@gmail.com> <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com><e2d2u0$1mq$1@sea.gmane.org> <444A20D9.5010607@gmail.com> <e2eeua$kl0$1@sea.gmane.org> Message-ID: <e2fr6f$q9c$1@sea.gmane.org> Terry Reedy a ?crit : > So I propose that the context maker be called just that: 'context maker'. > That should pretty clearly not be the context that manages the block > execution. > +1 for context maker. In fact, after reading the begining of the thread, I came up with the very same idea. > Similar, a context_maker function could be named any of 'context_maker', > 'context_manager', or 'context', with the latter two referring to the > return value. In the context of 'with ____ as name:', either of the latter > two reads better to me. > > I would call the decorator @contextmaker since that is what it turns the > decorated function into. > I'm confused here. Do we agree that the object with __enter__ and __exit__ is a context manager, and the object with just __context__ is a context maker ? If so , I would say the decorator still produces a context manager. Baptiste From andymac at bullseye.apana.org.au Sun Apr 23 13:52:01 2006 From: andymac at bullseye.apana.org.au (Andrew MacIntyre) Date: Sun, 23 Apr 2006 22:52:01 +1100 Subject: [Python-Dev] patch #1454481 - runtime tunable thread stack size In-Reply-To: <e2c24r$jo7$1@sea.gmane.org> References: <4448C1C2.9010407@bullseye.apana.org.au> <e2c24r$jo7$1@sea.gmane.org> Message-ID: <444B6A61.2040605@bullseye.apana.org.au> Terry Reedy wrote: > If you checked it in (after tests pass on your ?mac?, and while being ready > to revert), wouldn't the next buildbot cycle do the testing you need? > Isn't testing on 'other' platforms what buildbot is for? Only up to a point... In this case, I was after code review as much as the testing. Regards, Andrew. ------------------------------------------------------------------------- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac at pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From just at letterror.com Sun Apr 23 15:16:59 2006 From: just at letterror.com (Just van Rossum) Date: Sun, 23 Apr 2006 15:16:59 +0200 Subject: [Python-Dev] Why are contexts also managers? (wasr45544 -peps/trunk/pep-0343.txt) In-Reply-To: <e2fr6f$q9c$1@sea.gmane.org> Message-ID: <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]> Baptiste Carvello wrote: > Terry Reedy a ?crit : > > So I propose that the context maker be called just that: 'context > > maker'. That should pretty clearly not be the context that manages > > the block execution. > > > +1 for context maker. In fact, after reading the begining of the > thread, I came up with the very same idea. Or maybe "context factory"? 
Just From p.f.moore at gmail.com Sun Apr 23 15:20:23 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 23 Apr 2006 14:20:23 +0100 Subject: [Python-Dev] magic in setuptools (Was: setuptools in the stdlib) In-Reply-To: <5.1.1.6.0.20060422232641.01e3d1e8@mail.telecommunity.com> References: <e23n9h$pqi$1@sea.gmane.org> <1341C11F-6611-4CC6-86DE-BE4BB358163D@mac.com> <444ADBB3.3000007@gmail.com> <200604231319.44014.anthony@interlink.com.au> <5.1.1.6.0.20060422232641.01e3d1e8@mail.telecommunity.com> Message-ID: <79990c6b0604230620g451e2338ndd6a3c14d7d5e3c0@mail.gmail.com> On 4/23/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 01:19 PM 4/23/2006 +1000, Anthony Baxter wrote: > >On Sunday 23 April 2006 11:43, Nick Coghlan wrote: > > > Maybe we need something that's the equivalent of alien (rpm -> dpkg > > > converter), so that given an egg, one can easily get a native > > > installer for that egg. > > > >An 'eggconvert' that used the existing bdist_foo machinery to build > >system specific packages would probably be an ok summer of code > >project, no? > > That's probably not going to be the best way to get from an egg to a system > package, since a lot of the bdist_foo commands try to build things from > source, and an egg for the specific platform is already going to be built, > and won't include source (except for Python modules). Yes. And on Windows at least, building from source can be problematic. Given a binary egg with all the bits included, it's *far* better to just move them around and build an installer from the pieces. It might be possible to unpack the egg into the structure of a distutils "build" directory, and then use bdist_xxx to build the installer on that (on the basis that the build steps will be ignored as the objects are already there - you may have to fiddle a bit to fool distutils into *not* needing source present to check timestamps against...) > Probably you'd want to create something more like Vincenzo Di Massa's > "easy_deb" program, which uses easy_install to find and fetch a source > distribution, then builds a .deb from it. You can currently use: That's certainly not what I want, in two ways: 1. Finding and fetching things isn't needed - I'm happy starting with an egg file on the local filesystem 2. Building from source is not always possible, as I explained above. > So, for any bdist_foo command that's already implemented in the distutils, > you can already get pretty close to this functionality by using a short > shell script that just calls easy_install followed by the setup.py. ... assuming that setup.py build works for you. That's not the use case I am thinking of (which is a 3rd party grabbing a binary egg they couldn't build for themselves, and converting it to the local packaging system). > What you won't get without writing some more code is dependency > support. While I accept that dependency support is very valuable to a lot of people, some of us (the control freaks, maybe :-)) prefer to install dependencies by hand, based on documented requirements. (There's a separate rant here, about the possibility that people stop documenting requirements "because setup.py has them, and they get handled by easy_install", but I'll save that for another day...) But if you're thinking of converting egg dependency data to, for example, deb dependency metadata, then yes, that's an issue. For Windows installers the conversion is easy, of course - just delete the metadata :-) Paul. 
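A sketch of the source-based flow Phillip outlined earlier in the thread, wrapped in a few lines of Python rather than a shell script; the package name, build directory and bdist command are examples, the easy_install option names are as I understand them (--editable/-e and --build-directory/-b), and no dependency translation is attempted:

    import os
    import subprocess
    import sys

    def egg_source_to_native(pkg, bdist_cmd='bdist_rpm', workdir='build_area'):
        # 1. Have easy_install locate a source distribution of 'pkg' and
        #    unpack it into workdir/<lowercased project name>.
        subprocess.check_call(['easy_install', '--editable',
                               '--build-directory', workdir, pkg])
        # 2. Run the package's own setup.py with whatever bdist_* command the
        #    target packaging system needs.
        srcdir = os.path.join(workdir, pkg.lower())
        subprocess.check_call([sys.executable, 'setup.py', bdist_cmd],
                              cwd=srcdir)

    if __name__ == '__main__':
        egg_source_to_native('SomePackage')
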
From p.f.moore at gmail.com Sun Apr 23 15:31:30 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 23 Apr 2006 14:31:30 +0100 Subject: [Python-Dev] With context, please In-Reply-To: <444B2E92.5010707@gmail.com> References: <20060422160535.GA27426@panix.com> <5.1.1.6.0.20060422121835.0448d0e8@mail.telecommunity.com> <5.1.1.6.0.20060422123231.01e55e28@mail.telecommunity.com> <5.1.1.6.0.20060423014519.01e3f340@mail.telecommunity.com> <444B2E92.5010707@gmail.com> Message-ID: <79990c6b0604230631g1c84b9cdib3a5b61048e0b5db@mail.gmail.com> On 4/23/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > - contextlib.contextmanager() > - is actually used to define contexts according to the current docs > - but returns a GeneratorContextManager object You may just be trying to avoid overcomplicating things by adding too much detail here, but you have missed my point about contextmanager - it's used in two, distinct, ways: 1. On a simple function, to define a context manager (that's fine, and in line with the docs) 2. On a __context__ method to define a "function that returns a context" The sticking point is (2), which is *neither a context, nor a context manager*. Purely by coincidence, a context manager will do here, but it's more than is needed. So I still believe that you need 2 separate decorators here (no matter how much implementation they share). I've not looked at your other examples, as I'm deliberately trying *not* to understand the details any further until I've read your proposed docs... Paul. From ncoghlan at gmail.com Sun Apr 23 17:02:41 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 01:02:41 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604230623g338180f1g18eda70a8129e36@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <79990c6b0604230623g338180f1g18eda70a8129e36@mail.gmail.com> Message-ID: <444B9711.4010303@gmail.com> Paul Moore wrote: > On 4/23/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote: >> For those not following along at home, I've now updated PEP 343 to clarify my >> originally intended meanings for various terms, and to record the fact that we >> don't currently have a consensus on python-dev that those are the right >> definitions. > > Is it OK if I *don't* read the PEP? I'd rather read the 2.5a2 docs > with a fresh mind, not with preconceived ideas based on the PEP. It > means the feedback gets delayed a bit, but I assume that's OK as > you're planning on including the change in the release anyway. That's fine - the PEP changes are mainly targeted at clearing things up that were ambiguous in the original PEP. Because they were ambiguous, the 2.5a1 implementation went with interpretation A, the documentation went with interpretation B, and so naturally everything fell apart in a confusing heap when the implementation and documentation met in the middle. I'm shifting everything to use the same interpretation as the implementation for alpha 2 (although AMK will have to decide what he wants to do with the what's new section, since that currently mixes and matches the two different interpretations). If people still find it confusing when PEP+implementation+documentation are all using the terminology the same way, then we can look at other possibilities (such as adding a completely new term like 'context definition' for the context objects). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From anthony at interlink.com.au Sun Apr 23 17:56:49 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Mon, 24 Apr 2006 01:56:49 +1000 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> References: <43FA4A1B.3030209@v.loewis.de> <44495513.3080406@v.loewis.de> <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> Message-ID: <200604240156.51865.anthony@interlink.com.au> On Saturday 22 April 2006 15:27, Neal Norwitz wrote: > In case it wasn't clear, the /Wp64 flag is available in icc > (Intel's C compiler). Is it worth turning this on for the icc ubuntu buildbot? Anyone got ideas on the best way to do this? Should I just set CFLAGS="-Wp64" before running the buildbot on the box (it's sitting 2 feet behind my head in the rack in my study(*)) Anthony (*) Yes, I have an almost-rack of machines in my house. And yes, this scares me. From martin at v.loewis.de Sun Apr 23 18:25:47 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 23 Apr 2006 18:25:47 +0200 Subject: [Python-Dev] Win64 AMD64 (aka x64) binaries available64 In-Reply-To: <200604240156.51865.anthony@interlink.com.au> References: <43FA4A1B.3030209@v.loewis.de> <44495513.3080406@v.loewis.de> <ee2a432c0604212227m3fb27556ha8caf72e0c895f1@mail.gmail.com> <200604240156.51865.anthony@interlink.com.au> Message-ID: <444BAA8B.8060407@v.loewis.de> Anthony Baxter wrote: > On Saturday 22 April 2006 15:27, Neal Norwitz wrote: >> In case it wasn't clear, the /Wp64 flag is available in icc >> (Intel's C compiler). > > Is it worth turning this on for the icc ubuntu buildbot? Anyone got > ideas on the best way to do this? Should I just set CFLAGS="-Wp64" > before running the buildbot on the box (it's sitting 2 feet behind my > head in the rack in my study(*)) > > Anthony > > (*) Yes, I have an almost-rack of machines in my house. And yes, this > scares me. You should be scared what people are doing with your machines :-) From http://www.python.org/dev/buildbot/trunk/x86%20Ubuntu%20dapper%20%28icc%29%20trunk/builds/229/step-compile/0 'OPT': '-Wp64 -g -O3' icc -pthread -c -fno-strict-aliasing -Wp64 -g -O3 -I. -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c Regards, Martin From martin at v.loewis.de Sun Apr 23 18:33:04 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 23 Apr 2006 18:33:04 +0200 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> <4443F47E.7000103@v.loewis.de> <4444D6C2.10606@bullseye.apana.org.au> <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> Message-ID: <444BAC40.6030404@v.loewis.de> Tim Peters wrote: >> I've never reported this as a Python bug > > If you do, I'll probably add a comment like the above ;-) > >> because I've considered the antivirus SW likely to be the culprit. > > Could be. FWIW, Norton AV was running during the above. I see a similar phenomenon (sp?) on XP SP2: test_mailbox fails to me, with permission denied in some (random) test. 
I believe this is due to Tortoise SVN: test_mailbox creates a few directories, then Tortoise detects them (thanks to file change notifications) and tries to walk them, to find out whether that directory is under subversion control. Before Tortoise is done, test_mailbox tries to delete the directories. This (somewhat?) fails - there apparently is a "delete pending" state somehow in the system (sysinternals filemon sometimes shows DELETE PEND as the result of a call). Then, the next test will fail with permission denied. This is a heisenbug: different runs will fail in different tests, and letting sysinternals perform complete tracing make the failure go away. Regards, Martin From ncoghlan at gmail.com Sun Apr 23 18:53:12 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 02:53:12 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444B102F.5060201@iinet.net.au> References: <444B102F.5060201@iinet.net.au> Message-ID: <444BB0F8.2060602@gmail.com> Nick Coghlan wrote: > For those not following along at home, I've now updated PEP 343 to clarify my > originally intended meanings for various terms, and to record the fact that we > don't currently have a consensus on python-dev that those are the right > definitions. > > As written up in the PEP, I plan to propagate those interpretations throughout > the documentation and implementation for 2.5a2, so we have at least one > release using my original vision to see if the terminology actually all hangs > together sensibly the way I believe it does :) Aside from the What's New document, this has now been done. My modifications consisted of terminology changes in the contextlib docs and the language reference to match the 2.5a1 implementation, a Context Types addition to the library reference similar to that for Iterator Types, and a very brief addition to Chapter 8 of the tutorial. It should all filter through the pipeline and appear on python.org in the next few hours. I also looked at the threading and decimal module documentation, but those didn't need any changes - they were correct regardless of which interpretation you applied to the previously ambiguous section in PEP 343. It's up to AMK to decide what he wants to do with What's New for alpha 2. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From tjreedy at udel.edu Sun Apr 23 20:10:54 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 23 Apr 2006 14:10:54 -0400 Subject: [Python-Dev] Why are contexts also managers? (wasr45544-peps/trunk/pep-0343.txt) References: <20060418185518.0359E1E407C@bag.python.org><44462655.60603@gmail.com> <444636DF.8090506@gmail.com><20060419142105.GA32149@rogue.amk.ca><5.1.1.6.0.20060419113050.040a73b8@mail.telecommunity.com><44477557.8090707@gmail.com><ca471dc20604200541q5249c47bhc9e3dab6144ed086@mail.gmail.com><4448A677.5090905@gmail.com><5.1.1.6.0.20060421124536.037e9008@mail.telecommunity.com><4449A445.5020606@gmail.com> <ca471dc20604220204i39aead1dxe3dbb1a66efc2972@mail.gmail.com><e2d2u0$1mq$1@sea.gmane.org> <444A20D9.5010607@gmail.com><e2eeua$kl0$1@sea.gmane.org> <e2fr6f$q9c$1@sea.gmane.org> Message-ID: <e2gfvc$mia$1@sea.gmane.org> "Baptiste Carvello" <baptiste13 at altern.org> wrote in message news:e2fr6f$q9c$1 at sea.gmane.org... >+1 for context maker. [me, Terry] >> I would call the decorator @contextmaker since that is what it turns the >> >> decorated function into. 
>I'm confused here. Do we agree that the object with __enter__ and > >__exit__ is a context manager, I believe so, either that or, for short, context >and the object with just __context__ is a context maker ? If that is what goes after 'with', yes. >If so , I would say the decorator still produces a context manager. If I misunderstood what the decorator produces, then @contextmaker would be wrong. Actually, I thought there were possibly two decorators under consideration, one for each. Terry Jan Reedy Baptiste _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org From tjreedy at udel.edu Sun Apr 23 20:17:01 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 23 Apr 2006 14:17:01 -0400 Subject: [Python-Dev] Why are contexts also managers? (wasr45544-peps/trunk/pep-0343.txt) References: <e2fr6f$q9c$1@sea.gmane.org> <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]> Message-ID: <e2ggar$ng5$1@sea.gmane.org> "Just van Rossum" <just at letterror.com> wrote in message news:r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]... >> +1 for context maker. >Or maybe "context factory"? Yes, I thought of that too. Latin 'facere' == 'to make'. I might even put a sentence in the doc explaining that a context maker is a context factory, in the CS sense of the term, just as generator functions are generator factories. Terry Jan Reedy From tim.peters at gmail.com Sun Apr 23 20:55:21 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 23 Apr 2006 14:55:21 -0400 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <444BAC40.6030404@v.loewis.de> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> <4443F47E.7000103@v.loewis.de> <4444D6C2.10606@bullseye.apana.org.au> <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> <444BAC40.6030404@v.loewis.de> Message-ID: <1f7befae0604231155n13820452hfa9f2f0f3bf15242@mail.gmail.com> [Martin] > I see a similar phenomenon (sp?) Yup! The plural is phenomena. > on XP SP2: test_mailbox fails to > me, with permission denied in some (random) test. I believe this > is due to Tortoise SVN: test_mailbox creates a few directories, > then Tortoise detects them (thanks to file change notifications) > and tries to walk them, to find out whether that directory is > under subversion control. > > Before Tortoise is done, test_mailbox tries to delete the directories. > This (somewhat?) fails - there apparently is a "delete pending" state > somehow in the system (sysinternals filemon sometimes shows DELETE PEND > as the result of a call). Then, the next test will fail with permission > denied. > > This is a heisenbug: different runs will fail in different tests, and > letting sysinternals perform complete tracing make the failure go away. I doubt Tortoise is at fault here, just because I never see that kind of failure on my XP Pro SP2 box. Tortoise is _always_ running here, and it's also my buildbot box: there have been over 400 buildbot runs on it, none with "mystery failures" of this kind, and I routinely run -uall release-build tests on it too. It's not that Tortoise is inactive, either. 
To the contrary, it's a major pig: since I rebooted yesterday, TSVNCache.exe has sucked up > 17 minutes(!) of CPU time, and read up well over a gigabyte from disk. My box isn't magically immune to this kind of problem: I have Copernic Desktop Search installed, and if I enable its "Index new and modified files on the fly" option, then all sorts of things fail in intermittent "permission denied" ways -- from Python tests to CVS updates (unsure about SVN updates -- haven't specifically tried that). IIRC, of the similar reports on c.l.py over the last year that got resolved, all were pinned on background indexing apps (like Copernic) or antivirus software. None of which proves that Tortoise isn't to blame on your box :-) From andymac at bullseye.apana.org.au Sun Apr 23 14:31:42 2006 From: andymac at bullseye.apana.org.au (Andrew MacIntyre) Date: Sun, 23 Apr 2006 23:31:42 +1100 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> References: <ee2a432c0604162111ra1d9055p6b1f7660b2fc913a@mail.gmail.com> <1f7befae0604162240m583eb54ct18982d5a36012aef@mail.gmail.com> <44438294.7070806@v.loewis.de> <1f7befae0604171237r39e693av3704f46082c2c180@mail.gmail.com> <4443F47E.7000103@v.loewis.de> <4444D6C2.10606@bullseye.apana.org.au> <1f7befae0604221941s50314eb6rd6b96a3e25c604d7@mail.gmail.com> Message-ID: <444B73AE.703@bullseye.apana.org.au> Tim Peters wrote: > [Andrew MacIntyre] Hmm... I don't appear to have explained what I meant very well :-| {...} > This really needs an executable example. Here's my best guess about > what you mean, but I don't see any of what you're describing on WinXP > Pro SP2: And a pretty good guess it was too! Although I hadn't expected/intended any research to be applied to my note (except if the original problem involved undeletable files on the buildbots), I'm relieved your experiment turned up no problem; it makes the case stronger against something in the local setup. FWIW, the particular cases I remember seeing involved applications built with ctypes and Venster (Win32 GUI wrapper over ctypes) with a lot of COM calls, with Python 2.2.3. >> I've never reported this as a Python bug > > If you do, I'll probably add a comment like the above ;-) ;-) >> because I've considered the antivirus SW likely to be the culprit. > > Could be. FWIW, Norton AV was running during the above. The anti-virus on my work XP Pro SP2 system has been a source of considerable annoyance in many situations, so its the first thing I blame... >> I don't recall seeing this with Windows 2000, but much was changed >> in the transition from the Win2k SOE to the WinXP SOE. > > What's that? Shitty Out-of-box Experience ;-)? That too! Our outsourced IT supplier seems to think it means Standard Operating Environment, but I haven't figured out what's "standard" about it... Cheers, Andrew. ------------------------------------------------------------------------- Andrew I MacIntyre "These thoughts are mine alone..." E-mail: andymac at bullseye.apana.org.au (pref) | Snail: PO Box 370 andymac at pcug.org.au (alt) | Belconnen ACT 2616 Web: http://www.andymac.org/ | Australia From skip at pobox.com Sun Apr 23 21:59:42 2006 From: skip at pobox.com (skip at pobox.com) Date: Sun, 23 Apr 2006 14:59:42 -0500 Subject: [Python-Dev] PEP 8 pylintrc? Message-ID: <17483.56494.697655.929090@montanaro.dyndns.org> I had the possibly stupid idea today of running the stdlib through pylint. 
Has anybody written a pylintrc file that attempts to reflect the recommendations of PEP 8 the extent possible? Thx, Skip From tim.peters at gmail.com Sun Apr 23 22:47:23 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sun, 23 Apr 2006 16:47:23 -0400 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> Message-ID: <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> [Kirat Singh] > Hi, this is my first python dev post, so please forgive me if this topic has > already been discussed. It's hard to find one that hasn't -- but it's even harder to find the old discussions ;-) > It seemed to me that removing me_hash from a dict entry would save 2/3 of > the space 1/3, right? > used by dictionaries and also improve alignment of the entries > since they'd be 8 bytes instead of 12. How would that help? On 32-bit boxes, we have 3 4-byte members in PyDictEntry, and they'll all 4-byte aligned. In what respect related to alignment is that sub-optimal? > And sets end up having just 4 byte entries. > > I'm guessing that string dicts are the most common (hence the specialized > lookupdict_string routine), String dicts were the only kind at first, and their speed is critical because Python itself makes heavy use of them (e.g., to implement instance and module namespaces, and keyword arguments). > and since strings already contain their hash, this would probably mitigate > the performance impact. No slowdown in string dicts would be welcome, but since strings already cache their own hash, they seem unaffected by this. It's the speed of other key types that would suffer, and for classes that define their own __hash__ method that could well be deadly (see Martin's reply for more detail). > One could also add a hash to Tuples since they are immutable. A patch to do that was recently rejected. You can read its comments for some of the reasons: http://www.python.org/sf/1462796 More reasons were given in a python-dev thread about the same thing earlier this month: http://mail.python.org/pipermail/python-dev/2006-April/063275.html > If this isn't a totally stupid idea, I'd be happy to volunteer to try the > experiment and run any suggested tests. I'd be -1 if it slowed dict operations for classes that define their own __hash__. I do a lot of that ;-) > PS any opinion on making _Py_StringEq a macro? Yes: don't bother unless it provably speeds something "important" :-) It's kinda messy for a macro otherwise, macros always make debugging harder (can't step through the source expansion in a debugger w/o a lot of pain), etc. > inline function would be nice but I hesitate to bring up the C/C++ debate, both > languages suck in their own special way ;-) Does the Python source even compile as C++ now? People have been working toward that, but my last impression was that it's not there yet. 
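A small experiment (the CountingKey class is invented for illustration) shows what the cached me_hash buys for non-string keys: __hash__ runs once per insertion, and the resizes that happen while the table grows reuse the stored hash rather than calling __hash__ again. Removing me_hash would mean recomputing those hashes during every resize, and would also lose the cheap hash comparison used to skip __eq__ calls while probing.

class CountingKey(object):
    """Key type that counts how often its __hash__ is invoked."""
    hash_calls = 0
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        CountingKey.hash_calls += 1
        return hash(self.value)
    def __eq__(self, other):
        return isinstance(other, CountingKey) and self.value == other.value

d = {}
for i in range(10000):
    d[CountingKey(i)] = i

# The table is resized several times on the way to 10000 entries, yet
# __hash__ ran only once per key because the cached hash is reused.
print CountingKey.hash_calls    # -> 10000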
From skip at pobox.com Sun Apr 23 22:56:22 2006 From: skip at pobox.com (skip at pobox.com) Date: Sun, 23 Apr 2006 15:56:22 -0500 Subject: [Python-Dev] Compiling w/ C++ (was: Reducing memory overhead for dictionaries by removing me_hash) In-Reply-To: <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> Message-ID: <17483.59894.182316.597036@montanaro.dyndns.org> Tim> Does the Python source even compile as C++ now? People have been Tim> working toward that, but my last impression was that it's not there Tim> yet. Yes, not there yet. I can build the base interpreter, but there are lots of module build problems. I got distracted, but will try to put some more time in on it over the next little while. Skip From kirat.singh at gmail.com Sun Apr 23 23:03:37 2006 From: kirat.singh at gmail.com (Kirat Singh) Date: Sun, 23 Apr 2006 17:03:37 -0400 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> Message-ID: <381244830604231403j1897f035h6e7c9a4fdf4c906f@mail.gmail.com> Interesting, thanks for the responses. And yeah I meant 1/3, I always mix up negatives. Agree that as you point out the biggest slowdown will be on classes that define their own __hash__, however since classes use instancedicts and this would reduce the dict size from 96 -> 64 bytes, we could blow 4 bytes to cache the hash on the object. In fact PyObject_Hash could 'intern' the result of __hash__ into a __hashvalue__ member of the class dict. This might be the best of both worlds since it'll only use space for the hashvalue if its needed. Oh and the reason I brought up strings was that one can grab the ob_shash from the stringobject in lookupdict_string to avoid even the function call to get the hash for a string, so its just the same as storing the hash on the entry for strings. The reason I looked into this to begin with was that my code used up a bunch of memory which was traceable to lots of little objects with instance dicts, so it seemed that if instancedicts took less memory I wouldn't have to go and add __slots__ to a bunch of my classes, or rewrite things as tuples/lists, etc. thanks! -Kirat On 4/23/06, Tim Peters <tim.peters at gmail.com> wrote: > > [Kirat Singh] > > Hi, this is my first python dev post, so please forgive me if this topic > has > > already been discussed. > > It's hard to find one that hasn't -- but it's even harder to find the > old discussions ;-) > > > It seemed to me that removing me_hash from a dict entry would save 2/3 > of > > the space > > 1/3, right? > > > used by dictionaries and also improve alignment of the entries > > since they'd be 8 bytes instead of 12. > > How would that help? On 32-bit boxes, we have 3 4-byte members in > PyDictEntry, and they'll all 4-byte aligned. In what respect related > to alignment is that sub-optimal? > > > And sets end up having just 4 byte entries. > > > > I'm guessing that string dicts are the most common (hence the > specialized > > lookupdict_string routine), > > String dicts were the only kind at first, and their speed is critical > because Python itself makes heavy use of them (e.g., to implement > instance and module namespaces, and keyword arguments). 
> > > and since strings already contain their hash, this would probably > mitigate > > the performance impact. > > No slowdown in string dicts would be welcome, but since strings > already cache their own hash, they seem unaffected by this. > > It's the speed of other key types that would suffer, and for classes > that define their own __hash__ method that could well be deadly (see > Martin's reply for more detail). > > > One could also add a hash to Tuples since they are immutable. > > A patch to do that was recently rejected. You can read its comments > for some of the reasons: > > http://www.python.org/sf/1462796 > > More reasons were given in a python-dev thread about the same thing > earlier this month: > > http://mail.python.org/pipermail/python-dev/2006-April/063275.html > > > If this isn't a totally stupid idea, I'd be happy to volunteer to try > the > > experiment and run any suggested tests. > > I'd be -1 if it slowed dict operations for classes that define their > own __hash__. I do a lot of that ;-) > > > PS any opinion on making _Py_StringEq a macro? > > Yes: don't bother unless it provably speeds something "important" :-) > It's kinda messy for a macro otherwise, macros always make debugging > harder (can't step through the source expansion in a debugger w/o a > lot of pain), etc. > > > inline function would be nice but I hesitate to bring up the C/C++ > debate, both > > languages suck in their own special way ;-) > > Does the Python source even compile as C++ now? People have been > working toward that, but my last impression was that it's not there > yet. > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060423/518a4e11/attachment.htm From thomas at python.org Sun Apr 23 23:36:09 2006 From: thomas at python.org (Thomas Wouters) Date: Sun, 23 Apr 2006 23:36:09 +0200 Subject: [Python-Dev] Tkinter lockups. Message-ID: <9e804ac0604231436q491d7289m97bb165dc42a5de@mail.gmail.com> For a while now, I've noticed test_tcl locking up when trying to refleaktest it. I was able to reproduce it quite simply: import Tkinter import os if "DISPLAY" in os.environ: del os.environ["DISPLAY"] tcl = Tkinter.Tcl() try: tcl.loadtk() except Exception, e: print e tcl.loadtk() Or, more directly, using _tkinter directly: import _tkinter import os if "DISPLAY" in os.environ: del os.environ["DISPLAY"] tk = _tkinter.create(None, "test", "Tk", 0, 1, 0) try: tk.loadtk() except: pass tk.loadtk() In either case, the second loadtk never finishes. It seems that, on my platform at least, Tk_Init() doesn't like being called twice even when the first call resulted in an error. That's Tcl and Tk 8.4.12. Tkapp_Init() (which is the Tkinter part that calls Tk_Init()) does its best to guard against calling Tk_Init() twice when the first call was succesful, but it doesn't remember failure cases. I don't know enough about Tcl/Tk or Tkinter how this is best handled, but it would be mightily convenient if it were. ;-) I've created a bugreport on it, and I hope someone with Tkinter knowledge can step in and fix it. (It looks like SF auto-assigned it to Martin already, hmm.) http://sourceforge.net/tracker/index.php?func=detail&aid=1475162&group_id=5470&atid=105470 -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060423/75fd287b/attachment.html From martin at v.loewis.de Sun Apr 23 23:56:40 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 23 Apr 2006 23:56:40 +0200 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <381244830604231403j1897f035h6e7c9a4fdf4c906f@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> <381244830604231403j1897f035h6e7c9a4fdf4c906f@mail.gmail.com> Message-ID: <444BF818.4050509@v.loewis.de> Kirat Singh wrote: > The reason I looked into this to begin with was that my code used up a > bunch of memory which was traceable to lots of little objects with > instance dicts, so it seemed that if instancedicts took less memory I > wouldn't have to go and add __slots__ to a bunch of my classes, or > rewrite things as tuples/lists, etc. Ah. In that case, I would be curious if tuning PyDict_MINSIZE could help. If you have many objects of the same type, am I right assuming they all have the same number of dictionary keys? If so, what is the dictionary size? Do they use ma_smalltable, or do they have an extra ma_table? Regards, Martin From p.f.moore at gmail.com Mon Apr 24 00:47:28 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 23 Apr 2006 23:47:28 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444BB0F8.2060602@gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> Message-ID: <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> > Aside from the What's New document, this has now been done. My modifications > consisted of terminology changes in the contextlib docs and the language > reference to match the 2.5a1 implementation, a Context Types addition to the > library reference similar to that for Iterator Types, and a very brief > addition to Chapter 8 of the tutorial. It should all filter through the > pipeline and appear on python.org in the next few hours. OK, after a *very* quick look, here's my initial impression. I'll do a better look tomorrow. - It seems self-consistent, now, at least. - Surely the __context__ method should be called __contextmgr__ now that it's producing a context manager? (Same naming issue, just the other side of it...) - The addition of the section on context types is useful, no matter what the final naming is. - I don't see why context managers (the ones with __enter__ and __exit__) need a __context__ method as well. To allow both contexts and context managers to be used with the "with" statement, it says. But there's no explanation of why that's a good thing. I see the parallel with "all iterators are iterables", but in the absence of the analog of the iter() builtin, I'm not so sure how I'd ever end up with a raw context manager (short of calling __context__ manually) to use in a with statement. So for my money, the iterator/context analogy breaks down here, precisely because raw iterators are far more common than raw context managers. (Oh, and by the way, writing this sentence reinforced the reason I perfer the old terminology - "context manager" feels like the name of something more concrete than "context", and yet under your new terminology, the "context" is more concrete than the "context manager"!) OK, that's all I'll say for now. As I say, I'll do a better read-through tomorrow. Paul. 
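The situation Paul describes - rarely meeting a "raw" object with only __enter__ and __exit__, short of calling __context__ by hand - can be sketched like this under the 2.5 alpha protocol (names invented for illustration, not stdlib code):

# An object that is its own manager: __context__ just returns self,
# in the style attributed to the threading objects elsewhere in
# this thread.
class SimpleLock(object):
    def __context__(self):
        return self
    def __enter__(self):
        print "acquire"
        return self
    def __exit__(self, *exc_info):
        print "release"

# An object that hands out an independent helper for each use; the
# helper is the "raw" object with only __enter__ and __exit__, and it
# normally stays hidden behind the __context__ call.
class Settings(object):
    def __context__(self):
        return _SettingsHelper(self)

class _SettingsHelper(object):
    def __init__(self, settings):
        self.settings = settings
    def __enter__(self):
        print "install new settings"
        return self.settings
    def __exit__(self, *exc_info):
        print "restore previous settings"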
From jepler at unpythonic.net Mon Apr 24 03:18:15 2006 From: jepler at unpythonic.net (Jeff Epler) Date: Sun, 23 Apr 2006 20:18:15 -0500 Subject: [Python-Dev] Tkinter lockups. In-Reply-To: <9e804ac0604231436q491d7289m97bb165dc42a5de@mail.gmail.com> References: <9e804ac0604231436q491d7289m97bb165dc42a5de@mail.gmail.com> Message-ID: <20060424011815.GB12580@unpythonic.net> I just read the manpage for Tk_Init(3) (fc4 package tk-8.4.9-3) and it does not say that Tk_Init() may only be called once. While this doesn't mean Python shouldn't work around it, I think the behavior should be considered a bug in Tk, not _tkinter. However, on this system, I couldn't recreate the problem you reported with either the "using _tkinter directly" instructions, or using this "C" test program: #include <tcl.h> #include <tk.h> int main(void) { Tcl_Interp *trp; unsetenv("DISPLAY"); trp = Tcl_CreateInterp(); printf("%d\n", Tk_Init(trp)); printf("%d\n", Tk_Init(trp)); return 0; } Jeff From jafo-python-dev at tummy.com Mon Apr 24 03:55:53 2006 From: jafo-python-dev at tummy.com (Sean Reifschneider) Date: Sun, 23 Apr 2006 19:55:53 -0600 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. Message-ID: <20060424015553.GA842@tummy.com> A friend of mine is learning Python, and had a problem with the exit builtin. I like that in the interpreter it gives useful information, but he was writing a program in a file and tried "exit(0)", and was presented with the non-obvious error: TypeError: 'str' object is not callable What about something like: >>> class ExitClass: ... def __repr__(self): ... return('Hey, press control-D') ... def __call__(self, value): ... raise SyntaxError, 'You want to use sys.exit' ... >>> exit = ExitClass() >>> exit Hey, press control-D >>> exit(1) Traceback (most recent call last): File "<stdin>", line 1, in ? File "<stdin>", line 5, in __call__ SyntaxError: You want to use sys.exit Jerub on #python thinks that maybe it needs to subclass the string object instead, but in general it seems like it might be an improvement. Thoughts? Thanks, Sean -- Peppermint Patty gets a DSL line in "YOU'D TELL ME IF YOU WERE IN A GERMAN SCHEISSE VIDEO WOULDN'T YOU, CHARLIE BROWN" Sean Reifschneider, Member of Technical Staff <jafo at tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability From crutcher at gmail.com Mon Apr 24 04:12:31 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Sun, 23 Apr 2006 19:12:31 -0700 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. In-Reply-To: <20060424015553.GA842@tummy.com> References: <20060424015553.GA842@tummy.com> Message-ID: <d49fe110604231912p348423eerc4663a11add25c50@mail.gmail.com> On 4/23/06, Sean Reifschneider <jafo-python-dev at tummy.com> wrote: > A friend of mine is learning Python, and had a problem with the exit > builtin. I like that in the interpreter it gives useful information, but > he was writing a program in a file and tried "exit(0)", and was presented > with the non-obvious error: > > TypeError: 'str' object is not callable > > What about something like: > > >>> class ExitClass: > ... def __repr__(self): > ... return('Hey, press control-D') > ... def __call__(self, value): > ... raise SyntaxError, 'You want to use sys.exit' > ... > >>> exit = ExitClass() > >>> exit > Hey, press control-D > >>> exit(1) > Traceback (most recent call last): > File "<stdin>", line 1, in ? 
> File "<stdin>", line 5, in __call__ > SyntaxError: You want to use sys.exit > > Jerub on #python thinks that maybe it needs to subclass the string object > instead, but in general it seems like it might be an improvement. Why don't we just not define 'exit' in non-interactive environments? -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From ncoghlan at gmail.com Mon Apr 24 04:44:09 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 12:44:09 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> Message-ID: <444C3B79.3020405@gmail.com> Paul Moore wrote: >> Aside from the What's New document, this has now been done. My modifications >> consisted of terminology changes in the contextlib docs and the language >> reference to match the 2.5a1 implementation, a Context Types addition to the >> library reference similar to that for Iterator Types, and a very brief >> addition to Chapter 8 of the tutorial. It should all filter through the >> pipeline and appear on python.org in the next few hours. > > OK, after a *very* quick look, here's my initial impression. I'll do a > better look tomorrow. > > - It seems self-consistent, now, at least. > > - Surely the __context__ method should be called __contextmgr__ now > that it's producing a context manager? (Same naming issue, just the > other side of it...) The __iter__ method isn't called __iterator__, so why would the __context__ method need to be called "__contextmgr__"? However, I'm definitely starting to agree with Just & Terry and whoever else said it that the basic object needs to be called something other than "context object". "context specifier" or something like that. (in fact, I might make a pass through the PEP & documentation to see how that looks. . .) > - I don't see why context managers (the ones with __enter__ and > __exit__) need a __context__ method as well. To allow both contexts > and context managers to be used with the "with" statement, it says. > But there's no explanation of why that's a good thing. I see the > parallel with "all iterators are iterables", but in the absence of the > analog of the iter() builtin, I'm not so sure how I'd ever end up with > a raw context manager (short of calling __context__ manually) to use > in a with statement. So for my money, the iterator/context analogy > breaks down here, precisely because raw iterators are far more common > than raw context managers. Context objects in the 2.5 standard library: decimal.Context Context managers in the 2.5 standard library: file threading.Lock threading.RLock threading.Condition threading.Semaphore threading.BoundedSemaphore Code that manipulates both context objects and context managers (like contextlib.nested, and the with statement itself) needs to be able to ignore the distinction. And yes, such code does indeed call __context__() directly in order to get hold of the context manager. (Oh, and by the way, writing this sentence > reinforced the reason I perfer the old terminology - "context manager" > feels like the name of something more concrete than "context", and yet > under your new terminology, the "context" is more concrete than the > "context manager"!) 
This is why I'm starting to think Terry and Just are right - the name of that type needs to be something other than "context object" in order to reduce the current ambiguity. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Mon Apr 24 04:49:33 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 12:49:33 +1000 Subject: [Python-Dev] Why are contexts also managers? (wasr45544 -peps/trunk/pep-0343.txt) In-Reply-To: <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]> References: <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]> Message-ID: <444C3CBD.2070208@gmail.com> Just van Rossum wrote: > Baptiste Carvello wrote: > >> Terry Reedy a ?crit : >>> So I propose that the context maker be called just that: 'context >>> maker'. That should pretty clearly not be the context that manages >>> the block execution. >>> >> +1 for context maker. In fact, after reading the begining of the >> thread, I came up with the very same idea. > > Or maybe "context factory"? That would be fine if we used __call__ to retrieve the context manager - but "factory" is too tightly bound to "factory function" in my mind. I'm going to try a pass through the docs using "context specifier", which gives three separate terms: - context specifier: An object with a __context__ method that produces a context manager object to manipulate the runtime context - context manager: An object with __enter__ and __exit__ methods that manipulate the runtime context. - context (or runtime context): The actual changes made to the runtime state by the context manager based on the current state of the context specifier This removes the ambiguity between "context object" and "runtime context". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at iinet.net.au Mon Apr 24 05:44:52 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Mon, 24 Apr 2006 13:44:52 +1000 Subject: [Python-Dev] Proposed addition to threading module - released Message-ID: <444C49B4.2030502@iinet.net.au> Do we want to add a "released" context manager to the threading module for 2.5? It was mentioned using the name "unlocked" in PEP 343, but never spelt out: class released(object): def __init__(self, lock): self.lock = lock def __enter__(self): self.lock.release() def __exit__(self, *exc_info): self.lock.acquire() (This context manager is the equivalent of PEP 319's asynchronize keyword) Usage would be: from threading import RLock, released sync_lock = RLock() def thread_safe(): with sync_lock: # This is thread-safe with released(sync_lock): # Perform long-running or blocking operation # that doesn't need to hold the lock # We have the lock back here (This particular example could be handled by two separate "with sync_lock" statements with the asynchronous operation between them, but other cases put the asynchronous operation inside a loop or a conditional statement which means that particular trick won't work). Cheers, Nick. 
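Under that set of names, the way the pieces fit together can be sketched roughly as follows (the run_with helper is hypothetical and written only to show how specifier, manager and runtime context relate; the real PEP 343 expansion also handles break, continue and return):

import sys

def run_with(specifier, body):
    # Roughly: with specifier as value: body(value)
    manager = specifier.__context__()   # context specifier -> context manager
    value = manager.__enter__()         # the runtime context is now set up
    try:
        body(value)
    except:
        # The context manager sees the exception and may suppress it
        # by returning a true value from __exit__.
        if not manager.__exit__(*sys.exc_info()):
            raise
    else:
        manager.__exit__(None, None, None)

Because a context manager's own __context__ method returns self, the same helper accepts either kind of object, which is the "all context managers can be used as context specifiers" point made elsewhere in this thread.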
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From mhammond at skippinet.com.au Mon Apr 24 06:21:16 2006 From: mhammond at skippinet.com.au (Mark Hammond) Date: Mon, 24 Apr 2006 14:21:16 +1000 Subject: [Python-Dev] windows buildbot failures In-Reply-To: <1f7befae0604231155n13820452hfa9f2f0f3bf15242@mail.gmail.com> Message-ID: <DAELJHBGPBHPJKEBGGLNEEFBOEAD.mhammond@skippinet.com.au> Tim: > [Martin] > > on XP SP2: test_mailbox fails to > > me, with permission denied in some (random) test. I believe this > > is due to Tortoise SVN: test_mailbox creates a few directories, > > then Tortoise detects them (thanks to file change notifications) > > and tries to walk them, to find out whether that directory is > > under subversion control. ... > I doubt Tortoise is at fault here, just because I never see that kind > of failure on my XP Pro SP2 box. Tortoise is _always_ running here, > and it's also my buildbot box: there have been over 400 buildbot runs FWIW, I've seen similar issues caused by the MS 'Index Server'. Disabling the Index Server service made my problem go away. Mark From martin at v.loewis.de Mon Apr 24 08:31:19 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 24 Apr 2006 08:31:19 +0200 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. In-Reply-To: <20060424015553.GA842@tummy.com> References: <20060424015553.GA842@tummy.com> Message-ID: <444C70B7.7090909@v.loewis.de> Sean Reifschneider wrote: > Thoughts? In Python 2.5, exit(0) exits. Regards, Martin From martin at v.loewis.de Mon Apr 24 08:37:24 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 24 Apr 2006 08:37:24 +0200 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <444C49B4.2030502@iinet.net.au> References: <444C49B4.2030502@iinet.net.au> Message-ID: <444C7224.1010902@v.loewis.de> Nick Coghlan wrote: > Do we want to add a "released" context manager to the threading module for > 2.5? I don't think that should be added. I would consider it a dangerous programming style: if the lock merely doesn't "need" to be held (i.e. if it isn't necessary, but won't hurt), one should just keep holding the lock. If it is essential to release the lock, because the code would otherwise deadlock, the code should be dramatically revised to avoid that situation, e.g. by redefining the granularity of the lock, and moving the with statements accordingly. Regards, Martin From ncoghlan at gmail.com Mon Apr 24 08:51:14 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 16:51:14 +1000 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. In-Reply-To: <20060424015553.GA842@tummy.com> References: <20060424015553.GA842@tummy.com> Message-ID: <444C7562.2070903@gmail.com> Sean Reifschneider wrote: > Thoughts? As Martin pointed out, this was fixed for 2.5a1: C:\>type demo.bat @c:\python%1\python -c "exit()" @echo %ERRORLEVEL% C:\>demo 24 Traceback (most recent call last): File "<string>", line 1, in ? TypeError: 'str' object is not callable 1 C:\>demo 25 0 Cheers, Nick. 
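For comparison, the same behaviour could presumably also be written with the contextlib.contextmanager decorator discussed elsewhere in this thread (a sketch, not a tested drop-in replacement for the class above):

from contextlib import contextmanager

@contextmanager
def released(lock):
    # Temporarily release an already-held lock for the duration of the
    # with block, reacquiring it afterwards even if an exception escapes.
    lock.release()
    try:
        yield
    finally:
        lock.acquire()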
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Mon Apr 24 09:01:00 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 17:01:00 +1000 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <444C7224.1010902@v.loewis.de> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> Message-ID: <444C77AC.2010305@gmail.com> Martin v. L?wis wrote: > Nick Coghlan wrote: >> Do we want to add a "released" context manager to the threading module for >> 2.5? > > I don't think that should be added. I would consider it a dangerous > programming style: if the lock merely doesn't "need" to be held (i.e. > if it isn't necessary, but won't hurt), one should just keep holding > the lock. If it is essential to release the lock, because the code > would otherwise deadlock, the code should be dramatically revised > to avoid that situation, e.g. by redefining the granularity of the > lock, and moving the with statements accordingly. That isn't always possible or practical, though - Python's own GIL is a case where releasing it around long-running operations (such as blocking I/O operations) that don't need it provides significant benefits for threaded code, but redesigning the lock to use finer granularity causes its own problems. I'm not going to argue particularly strongly (or at all, really) for this one, since I think threading.Thread + Queue.Queue is a much better way to write threaded Python programs. The blocking IO 'asynchronize' use case in PEP 319 was just something I happened to notice in looking back at the various PEP's that were rejected in favour of PEP 343. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From phd at mail2.phd.pp.ru Mon Apr 24 09:09:18 2006 From: phd at mail2.phd.pp.ru (Oleg Broytmann) Date: Mon, 24 Apr 2006 11:09:18 +0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444C3B79.3020405@gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> Message-ID: <20060424070918.GB22640@phd.pp.ru> On Mon, Apr 24, 2006 at 12:44:09PM +1000, Nick Coghlan wrote: > Paul Moore wrote: > > - Surely the __context__ method should be called __contextmgr__ now > > that it's producing a context manager? (Same naming issue, just the > > other side of it...) > > The __iter__ method isn't called __iterator__, so why would the __context__ > method need to be called "__contextmgr__"? It should be __ctxmgr__ to be in par with __iter__, __len__, dict and so on ;) Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From martin at v.loewis.de Mon Apr 24 09:19:09 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 24 Apr 2006 09:19:09 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <44491299.9040607@v.loewis.de> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> Message-ID: <444C7BED.9080204@v.loewis.de> Martin v. L?wis wrote: > - Paul Moore has contributed a Python build procedure for the > free version of the 2003 compiler. 
This one is without IDE, > but still, it should allow people without a VS 2003 license > to work on Python itself; it should also be possible to develop > extensions with that compiler (although I haven't verified > that distutils would pick that up correctly). Apparently, the status of this changed right now: it seems that the 2003 compiler is not available anymore; the page now says that it was replaced with the 2005 compiler. Should we reconsider? Regards, Martin From ncoghlan at iinet.net.au Mon Apr 24 09:25:38 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Mon, 24 Apr 2006 17:25:38 +1000 Subject: [Python-Dev] Buildbot messages and the build svn revision number Message-ID: <444C7D72.2050704@iinet.net.au> Would it be possible to get the buildbot error message subject lines to include the svn revision number of the build that failed? I committed some non-portable tests earlier today, but they should all be fixed in rev 45687. However, the various buildbots are still working through the different checkins that: a) included tests that weren't reliably correct on all platforms b) reverted the unreliable tests c) checked in more reliable tests that should work everywhere Until I open the message, click through to the details page and scroll down to the record of the svn update step, I don't actually know whether the buildbot is reporting a real problem or not. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From nyamatongwe at gmail.com Mon Apr 24 09:48:27 2006 From: nyamatongwe at gmail.com (Neil Hodgson) Date: Mon, 24 Apr 2006 17:48:27 +1000 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <444C7BED.9080204@v.loewis.de> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> Message-ID: <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> Martin v. L?wis: > Apparently, the status of this changed right now: it seems that > the 2003 compiler is not available anymore; the page now says > that it was replaced with the 2005 compiler. > > Should we reconsider? I expect Microsoft means that Visual Studio Express will be available free forever, not that you will always be able to download Visual Studio 2005 Express. They normally only provide a particular product version for a limited time after it has been superceded. Neil From p.f.moore at gmail.com Mon Apr 24 10:03:12 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 09:03:12 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444C3B79.3020405@gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> Message-ID: <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> On 4/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote: [...] > The __iter__ method isn't called __iterator__, so why would the __context__ > method need to be called "__contextmgr__"? Because......... Hmm, Oleg already responded to this, and to be honest, I think the whole issue is a nitpick. Apologies for bringing it up, it doesn't help. > However, I'm definitely starting to agree with Just & Terry and whoever else > said it that the basic object needs to be called something other than "context > object". "context specifier" or something like that. 
(in fact, I might make a > pass through the PEP & documentation to see how that looks. . .) [...] > > (Oh, and by the way, writing this sentence > > reinforced the reason I perfer the old terminology - "context manager" > > feels like the name of something more concrete than "context", and yet > > under your new terminology, the "context" is more concrete than the > > "context manager"!) > > This is why I'm starting to think Terry and Just are right - the name of that > type needs to be something other than "context object" in order to reduce the > current ambiguity. Right. I'll still do as I promised, and have a better look through the latest documentation, but my gut feel is that this whole thing is getting way out of proportion. Naming and terminology is important, but we've now on our 3rd version of the docuentation. My instinct is STILL that the original version of the docs were fine. I can't express this in analytical terms, for which I apologise, but I'm not sure it's an analytical issue. So if the following sounds wooly, it's the best I can do :-) The terms "context" (an abstract thingy relating to the environment in which a block of code executes) and "context manager" (a concrete object that manages contexts) seem natural to me. The fact that you pass "context managers" to the "with" statement emphasizes the idea that they are the concrete one of the pair. Your version swaps the terms around, and so messes up my intuition. I concede that your version is self-consistent, but it doesn't match my intuition. And in my view, intuition matters here, so I have to prefer the variant that matches my intuition. Your point about the objects in the stdlib doesn't persuade me, I'm afraid. All the threading examples have __context__ return self, so they are, in some sense, both contexts and context managers (contexts which are their own managers, if you like). Only decimal.Context is a genuine context manager which creates independent context objects - I concede that the name doesn't match my theory here, but I'm afraid *I don't care*. I emphasize this, because it's an intuition thing again - I know my terminology is (slightly) inconsistent with respect to the decimal.Context class, but it doesn't disrupt my understanding. Worse (in terms of explaining myself!) it still feels natural to stick with decimal.Context even though I *know* that in my terms, it is formally a context manager. (The best explanation I can give is that it sits happily alongside the other places where I don't mind ignoring the difference between "function returning X" and "X" when describing things). OK, I really can't explain that any better, but the library object naming issue doesn't sway me. Right, this is my final position: I'm happy with the alpha-1 terminology. I would strongly prefer that we go back to that. I'd like to see the @contextmanager decorator split (conceptually, if not actually) into 2 parts, one for use with generator functions which are to be context managers, and one for use with __context__ functions (generator functions which are to be functions returning contexts). I'm willing to write up a patch doing the code changes for this, but I don't hold out much hope of being able to write docs any better than you or anyone else can! I won't argue against any self-consistent terminology. 
But that's because I'm not convinced of the value of having another round of change/review on the docs - it doesn't mean I support it over the alpha-1 version (and I'll probably still think in alpha-1 terms, whatever ends up being agreed). I think I've now read enough on the subject that my value as an unbiased reader is being lost... Paul. PS Is anyone else arguing with Nick on this? Now that I'm reduced to intuitive "well, I prefer it" arguments, it would be nice to hear what people's gut feelings are - "contexts create context managers", "context managers create contexts", "so confused I don't know"... :-) ? From p.f.moore at gmail.com Mon Apr 24 10:15:52 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 09:15:52 +0100 Subject: [Python-Dev] Why are contexts also managers? (wasr45544 -peps/trunk/pep-0343.txt) In-Reply-To: <444C3CBD.2070208@gmail.com> References: <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@10.0.0.24> <444C3CBD.2070208@gmail.com> Message-ID: <79990c6b0604240115q5d782551l3883847bf8ee937@mail.gmail.com> On 4/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > I'm going to try a pass through the docs using "context specifier", which > gives three separate terms: [...] > This removes the ambiguity between "context object" and "runtime context". That might just work. At the very least, I'd much rather see this as the alpha-2 alternative than what's there at the moment. Paul. From p.f.moore at gmail.com Mon Apr 24 10:24:00 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 09:24:00 +0100 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> Message-ID: <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> On 4/24/06, Neil Hodgson <nyamatongwe at gmail.com> wrote: > Martin v. L?wis: > > > Apparently, the status of this changed right now: it seems that > > the 2003 compiler is not available anymore; the page now says > > that it was replaced with the 2005 compiler. > > > > Should we reconsider? > > I expect Microsoft means that Visual Studio Express will be > available free forever, not that you will always be able to download > Visual Studio 2005 Express. They normally only provide a particular > product version for a limited time after it has been superceded. No. Martin means that http://msdn.microsoft.com/visualc/vctoolkit2003/ no longer points to a downloadable version of MSVC which includes the optimizer, and generates VC 7.1 compatible binaries. This means that unless you've already downloaded it, or it's acceptable for someone else to host it, there's once again no way to build Python with free tools :-( (Is it worth the PSF asking MS if it's acceptable for python.org to host a copy of the toolkit compiler? As MS donated copies of MSVC 7.1 to the Python project, they may be willing to consider this...) Paul. From sylvain.thenault at logilab.fr Mon Apr 24 10:33:35 2006 From: sylvain.thenault at logilab.fr (Sylvain =?iso-8859-1?Q?Th=E9nault?=) Date: Mon, 24 Apr 2006 10:33:35 +0200 Subject: [Python-Dev] PEP 8 pylintrc? In-Reply-To: <17483.56494.697655.929090@montanaro.dyndns.org> References: <17483.56494.697655.929090@montanaro.dyndns.org> Message-ID: <20060424083334.GA4036@logilab.fr> On Sunday 23 April ? 
14:59, skip at pobox.com wrote: > I had the possibly stupid idea today of running the stdlib through pylint. > Has anybody written a pylintrc file that attempts to reflect the > recommendations of PEP 8 the extent possible? default pylint configuration should not be that far from PEP8. I would be actually interested about what you think is not conformant, or even which test are missing for a better "PEP8 compliance test". -- Sylvain Th?nault LOGILAB, Paris (France). http://www.logilab.com http://www.logilab.fr http://www.logilab.org From ncoghlan at gmail.com Mon Apr 24 10:50:10 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 24 Apr 2006 18:50:10 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> Message-ID: <444C9142.5050409@gmail.com> Paul Moore wrote: > Right. I'll still do as I promised, and have a better look through the > latest documentation, but my gut feel is that this whole thing is > getting way out of proportion. Naming and terminology is important, > but we've now on our 3rd version of the docuentation. Only the 2nd, really - prior to the work Phillip put into it, we didn't have any user-oriented documentation at all. (the changes I made really were just moving a bit of terminology). I'm not counting the 'context object' -> 'context specifier' change as a different version, as it was the process of updating the docs to match the implementation that made me realise the term 'context object' was too imprecise (the various comments here naturally helped in coming to that understanding). > My instinct is STILL that the original version of the docs were fine. The original version of the docs moved the problem around, but it didn't solve it - the fundamental issue was that we had two names ("context manager" and "context"), but three entities (the original object with the __context__ method, the object it returned with __enter__ and __exit__ methods, and the 'runtime context' that the __enter__ and __exit__ methods manipulated). The latest version of the docs uses "context specifier", "context manager" and "context" for the three different things, and that seems to work reasonalbly well. > The terms "context" (an abstract thingy relating to the environment in > which a block of code executes) and "context manager" (a concrete > object that manages contexts) seem natural to me. The fact that you > pass "context managers" to the "with" statement emphasizes the idea > that they are the concrete one of the pair. Agreed. In the latest version of the docs, you pass in either a context specifier or a context manager to the with statement. A specifier just provides a __context__ method, a manager provides __enter__ and __exit__ as well. > Your version swaps the terms around, and so messes up my intuition. I > concede that your version is self-consistent, but it doesn't match my > intuition. And in my view, intuition matters here, so I have to prefer > the variant that matches my intuition. Also agreed - the term 'context' needs to be reserved for the 'runtime context', so I don't think we can use it for *any* of the concrete objects involved. 
I guess the python-dev discussion way back in the dim dark mists of time that rejected "context" as the name for the pre-__context__ method context managers was onto something :) > I know my terminology is (slightly) inconsistent with respect to the > decimal.Context class, but it doesn't disrupt my understanding. Worse > (in terms of explaining myself!) it still feels natural to stick with > decimal.Context even though I *know* that in my terms, it is formally > a context manager. With "context" no longer referring to any particular kind of object, this naming issue goes away. The decimal.Context object is just a context specifier for the active decimal context. > OK, I really can't explain that any better, but the library object > naming issue doesn't sway me. It swayed me, but into (sort of) agreeing with you :) > I'd like to see the @contextmanager decorator split (conceptually, if > not actually) into 2 parts, one for use with generator functions which > are to be context managers, and one for use with __context__ functions > (generator functions which are to be functions returning contexts). I believe the current docs avoid the need for doing this, as: - the with statement expects a context specifier - a context specifier's __context__ method must return a context manager - all context managers can be used as context specifiers So creating a standalone context manager properly parallels the creation of a standalone iterator and creating a special purpose context manager as a context specifier's __context__ method properly parallels creation of a special purpose iterator as an iterable's __iter__ method. This problem with the @contextmanager decorator is what clued me in to the fact that the alpha 1 documentation just moved the terminology problem around rather than actually fixing it. > I think I've now read enough on the subject that my value as an > unbiased reader is being lost... Your input really helped me figure out where the problem was, though. Trying to describe 3 different things using only 2 distinct terms was a recipe for confusion in anybody's book :) > PS Is anyone else arguing with Nick on this? Now that I'm reduced to > intuitive "well, I prefer it" arguments, it would be nice to hear what > people's gut feelings are - "contexts create context managers", > "context managers create contexts", "so confused I don't know"... :-) > ? With the current docs: "context specifiers create context managers" "context specifiers define a runtime context" "context managers enter and exit a runtime context" "context managers set up and tear down a runtime context" The main thing this discussion showed me is that there was a real problem in the PEP - it used the term "context" for two different things (context specifiers and runtime contexts). Phillip noticed that problem, and tried to fix it in the documentation. Unfortunately, the attempted fix just moved the problem around because "context" was still used to refer to two different kinds of thing (only now the second kind of thing was context managers rather than context specifiers). The changes I made this weekend are designed to give everything its own name, leaving "context" free to refer to either the runtime context or a context specifier or whatever else makes sense in context. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From thomas at python.org Mon Apr 24 11:11:27 2006 From: thomas at python.org (Thomas Wouters) Date: Mon, 24 Apr 2006 11:11:27 +0200 Subject: [Python-Dev] Tkinter lockups. In-Reply-To: <20060424011815.GB12580@unpythonic.net> References: <9e804ac0604231436q491d7289m97bb165dc42a5de@mail.gmail.com> <20060424011815.GB12580@unpythonic.net> Message-ID: <9e804ac0604240211l1bd09d5eu413388503f217000@mail.gmail.com> On 4/24/06, Jeff Epler <jepler at unpythonic.net> wrote: > > I just read the manpage for Tk_Init(3) (fc4 package tk-8.4.9-3) and it > does not say that Tk_Init() may only be called once. While this doesn't > mean Python shouldn't work around it, I think the behavior should be > considered a bug in Tk, not _tkinter. FWIW, the Tk_Init manpage says "the Tk interpreter should not already be loaded", and it then goes on to say that if Tk_Init fails to *initialize* the interpreter, an error is returned. So it's unclear to me whether Tk_Init really "loads the interpreter", and whether it unloads after an error occurs (apparently not, I'd say ;) http://www.tcl.tk/man/tcl8.4/TkLib/Tk_Init.htm However, on this system, I couldn't recreate the problem you reported > with either the "using _tkinter directly" instructions, or using this > "C" test program: > > #include <tcl.h> > #include <tk.h> > > int main(void) { > Tcl_Interp *trp; > unsetenv("DISPLAY"); > trp = Tcl_CreateInterp(); > printf("%d\n", Tk_Init(trp)); > printf("%d\n", Tk_Init(trp)); > return 0; > } Yes, this C snippet locks up on my systems, just as the python snippet does. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060424/88b957eb/attachment.html From p.f.moore at gmail.com Mon Apr 24 11:26:37 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 10:26:37 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444C9142.5050409@gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> Message-ID: <79990c6b0604240226t6da6bc42m1268f356ceb251af@mail.gmail.com> On 4/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > > I think I've now read enough on the subject that my value as an > > unbiased reader is being lost... > > Your input really helped me figure out where the problem was, though. Trying > to describe 3 different things using only 2 distinct terms was a recipe for > confusion in anybody's book :) Glad it was of some use :-) > > PS Is anyone else arguing with Nick on this? Now that I'm reduced to > > intuitive "well, I prefer it" arguments, it would be nice to hear what > > people's gut feelings are - "contexts create context managers", > > "context managers create contexts", "so confused I don't know"... :-) > > ? [...] > The changes I made this weekend are designed to give everything its own name, > leaving "context" free to refer to either the runtime context or a context > specifier or whatever else makes sense in context. OK. 
At this point, the discussion seems to have mutated from a "Phillip vs Nick" debate to a "Paul vs Nick" debate. I think I need to duck out now and we'll see if this version passes by general acclaim (or sheer exhaustion :-)) Paul. From vys at renet.ru Mon Apr 24 11:51:58 2006 From: vys at renet.ru (Vladimir 'Yu' Stepanov) Date: Mon, 24 Apr 2006 13:51:58 +0400 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> Message-ID: <444C9FBE.2060709@renet.ru> I would like to participate in Google Summer of Code. The idea is to create a specialized class for working with AVL and red-black (RB) binary trees. Part of the idea is taken from Parrot (Perl6), where a specialized type is provided for pair values. Advantages worth noting: * High speed. For simple types with up to 10000 elements it is only about 2.7 times slower than the standard dictionary. * Safe to use in multithreaded mode. * A method for freezing the object is provided. * The data is stored in sorted order. * High performance of the `item' and `iteritem' methods, since no new objects need to be created. * With large numbers of objects, a tree uses memory much less intensively. * No collisions. As a consequence, it is impossible to craft bad data to mount a DoS attack against the dictionary of passed arguments. * Key objects do not need to have a __hash__ method. * By overloading operations, the same kind of dictionary can change its behaviour so that it supports a multi-version representation based on layering the data with subsequent rollbacks. Additional requirements on key objects: * It must be possible to compare key objects with the cmp function. Reasons that are slowing down development of this project: * The `gc' module header is mandatory for every compound object (this structure is unnecessary for some objects, but changing that requires updating the `gc' module). * The absence of a standard "pair" object, which probably depends on the previous item. Drawbacks of a binary tree: * Average search time is O(1) for a hash table but O(log2 N) for a tree. * A lot of memory per tree element: (3*sizeof (void *) + sizeof (int))*2; one block is used for the pair itself and a second block of memory is allocated for the tree node. In defence of the "pair" object: * The logic of its methods is noticeably simpler than tuple's, which in the end can change the speed of a program for the better. * Alignment to 8 bytes makes little sense on current architectures, where a cache fetch is at least 64 bytes. A "pair" type would give a noticeable gain when 4-byte alignment is used; otherwise, on 64-bit platforms it is much more profitable to use the tuple object. The project may require reworking `gcmodule.c' and `tupleobject.c'. The size of static objects needs to be reduced; for that, it must be possible to transparently pass objects that have no direct support from the `gcmodule.c' module. It will also be partially necessary to rework `obmalloc.c' for more efficient memory allocation. I shall be glad to answer questions on this topic.
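To make the comparison with the built-in dict more concrete, here is a minimal, unbalanced sketch in pure Python of the kind of comparison-based sorted mapping being proposed (illustrative only: it shows cmp-based keys and sorted iteritems(), but none of the AVL/RB rebalancing that would guarantee the O(log2 N) bound, and none of the C-level memory layout discussed above):

class _Node(object):
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None

class BSTDict(object):
    """Tiny comparison-based sorted mapping (no rebalancing)."""
    def __init__(self):
        self.root = None
    def __setitem__(self, key, value):
        if self.root is None:
            self.root = _Node(key, value)
            return
        node = self.root
        while True:
            c = cmp(key, node.key)      # keys only need to be comparable
            if c == 0:
                node.value = value
                return
            elif c < 0:
                if node.left is None:
                    node.left = _Node(key, value)
                    return
                node = node.left
            else:
                if node.right is None:
                    node.right = _Node(key, value)
                    return
                node = node.right
    def __getitem__(self, key):
        node = self.root
        while node is not None:
            c = cmp(key, node.key)
            if c == 0:
                return node.value
            elif c < 0:
                node = node.left
            else:
                node = node.right
        raise KeyError(key)
    def iteritems(self):
        # in-order traversal yields items in sorted key order
        stack, node = [], self.root
        while stack or node is not None:
            while node is not None:
                stack.append(node)
                node = node.left
            node = stack.pop()
            yield node.key, node.value
            node = node.right

d = BSTDict()
for word in "spam eggs bacon".split():
    d[word] = len(word)
print list(d.iteritems())   # [('bacon', 5), ('eggs', 4), ('spam', 4)]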
From jjl at pobox.com Mon Apr 24 14:13:44 2006 From: jjl at pobox.com (John J Lee) Date: Mon, 24 Apr 2006 13:13:44 +0100 (GMT Standard Time) Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> Message-ID: <Pine.WNT.4.64.0604241311360.1196@shaolin> On Mon, 24 Apr 2006, Paul Moore wrote: > On 4/24/06, Neil Hodgson <nyamatongwe at gmail.com> wrote: >> Martin v. Löwis: >> >>> Apparently, the status of this changed right now: it seems that >>> the 2003 compiler is not available anymore; the page now says >>> that it was replaced with the 2005 compiler. >>> >>> Should we reconsider? [...] > No. Martin means that http://msdn.microsoft.com/visualc/vctoolkit2003/ > no longer points to a downloadable version of MSVC which includes the > optimizer, and generates VC 7.1 compatible binaries. > > This means that unless you've already downloaded it, or it's > acceptable for someone else to host it, there's once again no way to > build Python with free tools :-( [...] Actually, it's apparently still there, just at a different URL. Somebody posted the new URL on c.l.py a day or two back (Alex Martelli started the thread, IIRC). I'm off to the dentist, no time to Google for it! John From sylvain.thenault at logilab.fr Mon Apr 24 15:54:14 2006 From: sylvain.thenault at logilab.fr (Sylvain =?iso-8859-1?Q?Th=E9nault?=) Date: Mon, 24 Apr 2006 15:54:14 +0200 Subject: [Python-Dev] gettext.py bug #1448060 Message-ID: <20060424135413.GC4036@logilab.fr> Hi there, while playing with gettext catalog handling of a complex application, I've hit bug #1448060. This bug seems serious to me since gettext.py is not able to handle some .po files generated by msgcat/msgmerge due to lines like "#-#-#-#-# fr.po (2.0) #-#-#-#-#\n" in the metadata. I've posted a patch (#1475523) for this and assigned it to Martin Von Loewis since he was the core developper who has made some followup on the original bug. Could someone (Martin or someone else) quick review this patch ? I really need a fix for this, so if anyone feels my patch is not correct, and explain why and what should be done, I can rework on it. regards, -- Sylvain Thénault LOGILAB, Paris (France). http://www.logilab.com http://www.logilab.fr http://www.logilab.org From fuzzyman at voidspace.org.uk Mon Apr 24 15:34:19 2006 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 24 Apr 2006 14:34:19 +0100 Subject: [Python-Dev] Python Grammar Ambiguity Message-ID: <444CD3DB.7050704@voidspace.org.uk> Hello all, I'm working on a parser for part of the Python language (expressions but not statements basically). I'm using PLY to generate the parser and it's mostly done. I've hit on what looks like a fundamental ambiguity in the Python grammar which is difficult to get round with PLY; and I'm wondering *why* the grammar is defined in this way. It's possible there is a reason that I've missed, which means I need to rethink my workaround. List displays (list comprehensions) are defined as (from http://docs.python.org/ref/lists.html ) : test ::= and_test ( "or" and_test )* | lambda_form testlist ::= test ( "," test )* [ "," ] list_display ::= "[" [listmaker] "]" listmaker ::= expression ( list_for | ( "," expression )* [","] ) list_iter ::= list_for | list_if list_for ::= "for" expression_list "in" testlist [list_iter] list_if ::= "if" test [list_iter] The problem is that list_for is defined as : "for" expression_list "in" testlist This allows arbitrary expressions in the 'assignment' part of a list comprehension. As a result, the following is valid syntax according to the grammar : [x for x + 1 in y] Obviously it isn't valid !
This parses to an ast, but the syntax error is thrown when you compile the resulting ast. The problem is that for the basic case of a list comprehension ( ``[x for x in y]``), ``x in y`` is a valid expression. That makes it extremely hard to disambiguate the grammar so that the ``in`` is treated correctly, and not part of an expression. My question is, why are arbitrary expressions allowed here in the grammar ? As far as I can tell, only identifiers (nested in parentheses or brackets) are valid here. I've got round the problem by creating a new node 'identifier_list' and just having that return the expected syntax tree (actually an expression list). This gets round the ambiguity [#]_. It worries me that there might be a valid expression allowed here that I haven't thought of. My current rules allow anything that looks like ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything else be allowed ? If not, why not modify the grammar so that the compiler has less possible invalid syntax trees to work with ? (Also the grammar definition of string conversion is wrong as it states that a trailing comma is valid, which isn't the case. As far as I can tell that is necessary to allow nesting string conversions.) Fuzzyman http://www.voidspace.org.uk/python/index.shtml .. [#] If I could make precedence work in PLY I could also solve it I guess. However I can't. :-) From guido at python.org Mon Apr 24 16:12:40 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 24 Apr 2006 07:12:40 -0700 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <444CD3DB.7050704@voidspace.org.uk> References: <444CD3DB.7050704@voidspace.org.uk> Message-ID: <ca471dc20604240712h64fc5741r43c891464944e1be@mail.gmail.com> Well, yes, the syntax is supposed to be something like "for varlist in testlist". Could you report this as a doc bug (if you found this information in the docs)? On 4/24/06, Michael Foord <fuzzyman at voidspace.org.uk> wrote: > Hello all, > > I'm working on a parser for part of the Python language (expressions but > not statements basically). I'm using PLY to generate the parser and it's > mostly done. > > I've hit on what looks like a fundamental ambiguity in the Python > grammar which is difficult to get round with PLY; and I'm wondering > *why* the grammar is defined in this way. It's possible there is a > reason that I've missed, which means I need to rethink my workaround. > > List displays (list comprehensions) are defined as (from > http://docs.python.org/ref/lists.html ) : > > > test ::= and_test ( "or" and_test )* | lambda_form > testlist ::= test ( "," test )* [ "," ] > list_display ::= "[" [listmaker] "]" > listmaker ::= expression ( list_for | ( "," expression )* [","] ) > list_iter ::= list_for | list_if > list_for ::= "for" expression_list "in" testlist [list_iter] > list_if ::= "if" test [list_iter] > > The problem is that list_for is defined as : > > "for" expression_list "in" testlist > > This allows arbitrary expressions in the 'assignment' part of a list > comprehension. > > As a result, the following is valid syntax according to the grammar : > > [x for x + 1 in y] > > Obviously it isn't valid ! This parses to an ast, but the syntax error > is thrown when you compile the resulting ast. > > The problem is that for the basic case of a list comprehension ( ``[x > for x in y]``), ``x in y`` is a valid expression. That makes it > extremely hard to disambiguate the grammar so that the ``in`` is treated > correctly, and not part of an expression. 
> > My question is, why are arbitrary expressions allowed here in the > grammar ? As far as I can tell, only identifiers (nested in parentheses > or brackets) are valid here. I've got round the problem by creating a > new node 'identifier_list' and just having that return the expected > syntax tree (actually an expression list). This gets round the ambiguity > [#]_. > > It worries me that there might be a valid expression allowed here that I > haven't thought of. My current rules allow anything that looks like > ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything > else be allowed ? > > If not, why not modify the grammar so that the compiler has less > possible invalid syntax trees to work with ? > > (Also the grammar definition of string conversion is wrong as it states > that a trailing comma is valid, which isn't the case. As far as I can > tell that is necessary to allow nesting string conversions.) > > Fuzzyman > http://www.voidspace.org.uk/python/index.shtml > > > .. [#] If I could make precedence work in PLY I could also solve it I > guess. However I can't. :-) > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Mon Apr 24 16:18:23 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 10:18:23 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604240226t6da6bc42m1268f356ceb251af@mail.gmail.co m> References: <444C9142.5050409@gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> Message-ID: <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> At 10:26 AM 4/24/2006 +0100, Paul Moore wrote: >OK. At this point, the discussion seems to have mutated from a >"Phillip vs Nick" debate to a "Paul vs Nick" debate. I only stepped aside so that other people would chime in. I still don't think the new terminology makes anything clearer, and would rather see tweaks to address the one minor issue of @contextmanager producing an object that's also a context than a complete reworking of the documentation. That was the only thing that was unclear in the a1 terminology and docs, and it's an extremely minor point that could easily be addressed. Throwing away an intuitive terminology because of a minor implementation issue in favor of a non-intutitive terminology that happens to be super-precise seems penny-wise and pound-foolish to me. 
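For reference, the decorator at the centre of this terminology question is used roughly like this (a sketch based on the PEP 343 examples; whether the object returned by opened() should be called a "context", a "context manager" or a "context specifier" is exactly what is being debated):

from __future__ import with_statement
from contextlib import contextmanager

@contextmanager
def opened(path):
    f = open(path)
    try:
        yield f              # value bound by "with ... as"
    finally:
        f.close()            # runs on the way out, even on error

with opened("data.txt") as f:
    print f.read()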
From aleaxit at gmail.com Mon Apr 24 16:49:25 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Mon, 24 Apr 2006 07:49:25 -0700 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <Pine.WNT.4.64.0604241311360.1196@shaolin> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> <Pine.WNT.4.64.0604241311360.1196@shaolin> Message-ID: <6ED62838-6437-4FE9-B338-3B607B17E809@gmail.com> On Apr 24, 2006, at 5:13 AM, John J Lee wrote: > On Mon, 24 Apr 2006, Paul Moore wrote: >> On 4/24/06, Neil Hodgson <nyamatongwe at gmail.com> wrote: >>> Martin v. L?wis: >>> >>>> Apparently, the status of this changed right now: it seems that >>>> the 2003 compiler is not available anymore; the page now says >>>> that it was replaced with the 2005 compiler. >>>> >>>> Should we reconsider? > [...] >> No. Martin means that http://msdn.microsoft.com/visualc/ >> vctoolkit2003/ >> no longer points to a downloadable version of MSVC which includes the >> optimizer, and generates VC 7.1 compatible binaries. >> >> This means that unless you've already downloaded it, or it's >> acceptable for someone else to host it, there's once again no way to >> build Python with free tools :-( > [...] > > Actually, it's apparently still there, just at a different URL. > Somebody posted the new URL on c.l.py a day or two back (Alex > Martelli started the thread, IIRC). I'm off to the dentist, no > time to Google for it! Yep, I was the one looking for that URL, and then at somebody else's request reposted it and also tinyurled it (since it's a very long URL it gives somebody problems). For the Toolkit 2003: http://tinyurl.com/gv8wr Also, for the Net SDK 1.1 (the 2.0 one apparently now is lacking the msvcrt.lib for x86...): http://tinyurl.com/5flob (original Url for the latter kindly supplied by Martin, btw). Martin also suggested using mingw instead, on that same thread. Alex From aleaxit at gmail.com Mon Apr 24 16:53:09 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Mon, 24 Apr 2006 07:53:09 -0700 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <444C7BED.9080204@v.loewis.de> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> Message-ID: <1137CFBB-EA21-4A5A-A238-4DDFB82A6533@gmail.com> On Apr 24, 2006, at 12:19 AM, Martin v. L?wis wrote: > Martin v. L?wis wrote: >> - Paul Moore has contributed a Python build procedure for the >> free version of the 2003 compiler. This one is without IDE, >> but still, it should allow people without a VS 2003 license >> to work on Python itself; it should also be possible to develop >> extensions with that compiler (although I haven't verified >> that distutils would pick that up correctly). > > Apparently, the status of this changed right now: it seems that > the 2003 compiler is not available anymore; the page now says > that it was replaced with the 2005 compiler. > > Should we reconsider? Personally, being a cheapskate, and with Windows only my tertiary system, I'm in favor of anything that makes it simpler and/or cheaper for people to work on Python and extensions. However, by the same token I cannot really gauge how stable and solid VS 2005 is -- if as current Windows experts you think it's still inferior to VS 2003, then that's a very big point against it. 
Alex From aleaxit at gmail.com Mon Apr 24 16:56:31 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Mon, 24 Apr 2006 07:56:31 -0700 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> Message-ID: <9E5E0E46-E7B6-49A0-A52E-93DD64AEA446@gmail.com> On Apr 24, 2006, at 12:48 AM, Neil Hodgson wrote: > Martin v. L?wis: > >> Apparently, the status of this changed right now: it seems that >> the 2003 compiler is not available anymore; the page now says >> that it was replaced with the 2005 compiler. >> >> Should we reconsider? > > I expect Microsoft means that Visual Studio Express will be > available free forever, not that you will always be able to download > Visual Studio 2005 Express. They normally only provide a particular > product version for a limited time after it has been superceded. Yeah, that does sound like a normal commercial policy. If we want some compiler to be available "forever", we have to choose one that we can get permission to redistribute, and host it somewhere ourselves. Alex From aleaxit at gmail.com Mon Apr 24 16:59:24 2006 From: aleaxit at gmail.com (Alex Martelli) Date: Mon, 24 Apr 2006 07:59:24 -0700 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> Message-ID: <5FE2799F-48E8-4AEB-9EDA-240BBA0F27B7@gmail.com> On Apr 24, 2006, at 1:24 AM, Paul Moore wrote: > On 4/24/06, Neil Hodgson <nyamatongwe at gmail.com> wrote: >> Martin v. L?wis: >> >>> Apparently, the status of this changed right now: it seems that >>> the 2003 compiler is not available anymore; the page now says >>> that it was replaced with the 2005 compiler. >>> >>> Should we reconsider? >> >> I expect Microsoft means that Visual Studio Express will be >> available free forever, not that you will always be able to download >> Visual Studio 2005 Express. They normally only provide a particular >> product version for a limited time after it has been superceded. > > No. Martin means that http://msdn.microsoft.com/visualc/vctoolkit2003/ > no longer points to a downloadable version of MSVC which includes the > optimizer, and generates VC 7.1 compatible binaries. > > This means that unless you've already downloaded it, or it's > acceptable for someone else to host it, there's once again no way to > build Python with free tools :-( I've posted a couple of tinyurl's that may help (no time right now to try everything out again myself). > > (Is it worth the PSF asking MS if it's acceptable for python.org to > host a copy of the toolkit compiler? As MS donated copies of MSVC 7.1 > to the Python project, they may be willing to consider this...) Small as the chance may be, it's still most definitely worth asking, so that everything doesn't suddenly break the instant MS wants to withdraw the URLs that currently still work. Also, we'd need the ability to redistribute the 1.1 SDK, as the 2.0 one seems to lack the key msvcrt.lib for x86. 
Alex From alan.mcintyre at gmail.com Mon Apr 24 18:30:12 2006 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Mon, 24 Apr 2006 12:30:12 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" Message-ID: <444CFD14.70601@gmail.com> Hi all, I would like to participate in the Summer of Code as a student. At the moment it looks like the Python tracker on SF has about 2100 open bugs and patches, going back to late 2000. I'm assuming that a fair number of these are no longer be applicable, have been fixed/implemented already, etc., and somebody just needs to slog through the list and figure out what to do with them. My unglamorous proposal is to review bugs & patches (starting with the oldest) and resolve at least 200 of them. Is that too much? Too few? I'll fix as many as possible during the SoC time frame, but I wanted to set a realistically achievable minimum for the proposal. If anybody can offer helpful feedback on a good minimum number I'd appreciate it. Not-guru-ish-enough-to-found-a-new-web-framework'ly yours, Alan McIntyre From p.f.moore at gmail.com Mon Apr 24 18:39:33 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 17:39:33 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> Message-ID: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> On 4/24/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 10:26 AM 4/24/2006 +0100, Paul Moore wrote: > >OK. At this point, the discussion seems to have mutated from a > >"Phillip vs Nick" debate to a "Paul vs Nick" debate. > > I only stepped aside so that other people would chime in. I still don't > think the new terminology makes anything clearer, and would rather see > tweaks to address the one minor issue of @contextmanager producing an > object that's also a context than a complete reworking of the > documentation. That was the only thing that was unclear in the a1 > terminology and docs, and it's an extremely minor point that could easily > be addressed. > > Throwing away an intuitive terminology because of a minor implementation > issue in favor of a non-intutitive terminology that happens to be > super-precise seems penny-wise and pound-foolish to me. That's *exactly* my feeling (thanks for expressing it better than I did). So I guess the 2 questions remaining are: 1. Does anyone feel that Nick's (re-)wording is better than the a1 version (apart from the @contextmanager issue, which we all agree needs a fix)? 2. Nick, what can we do to persuade you to go back to the a1 version, and simply look at @contextmanager? I've proposed splitting it into two, but that seems not to suit you (you've never responded to it specifically, so I may be misreading your silence here). As an alternative, I could just think about how to reword the docs. But I'm not confident I'm the best person to do that. One thing I would say is that Nick has added a section on context types to the "Built-in types" section of the libref. At the moment, it reflects his terminology, but even if the terminology gets reverted to a1 style, I'd like to see that stay (suitably reworded, of course!) 
It's a useful addition. Oh, and if there's a huge group of people who prefer Nick's terminology, now is the time to shout! Paul. From skip at pobox.com Mon Apr 24 18:44:22 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 24 Apr 2006 11:44:22 -0500 Subject: [Python-Dev] PEP 8 pylintrc? In-Reply-To: <20060424083334.GA4036@logilab.fr> References: <17483.56494.697655.929090@montanaro.dyndns.org> <20060424083334.GA4036@logilab.fr> Message-ID: <17485.102.840211.881680@montanaro.dyndns.org> >> Has anybody written a pylintrc file that attempts to reflect the >> recommendations of PEP 8 the extent possible? Sylvain> I would be actually interested about what you think is not Sylvain> conformant, or even which test are missing for a better "PEP8 Sylvain> compliance test". One concrete (but trivial) difference is that PyLint wants line length to max out at 80 columns while PEP 8 says 79. Since PEP 8 doesn't specify everything, there's going to be some fudging required. For things which are unspecified by PEP 8, I think you have to consider the body of code (the standard library) and the probably audience (people on this list) to consider how to set things. PyLint complains about the use of apply(), but it also calls the use of *args and **kwds "magic". The first complaint is correct in my opinion since apply() is deprecated, but the use of *args and **kwds is definitely not magic for this group. Skip From skip at pobox.com Mon Apr 24 19:16:00 2006 From: skip at pobox.com (skip at pobox.com) Date: Mon, 24 Apr 2006 12:16:00 -0500 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <444CD3DB.7050704@voidspace.org.uk> References: <444CD3DB.7050704@voidspace.org.uk> Message-ID: <17485.2000.827380.667816@montanaro.dyndns.org> Michael> I've hit on what looks like a fundamental ambiguity in the Michael> Python grammar which is difficult to get round with PLY; and Michael> I'm wondering *why* the grammar is defined in this way. Michael, You refer to the ref manual documentation: Michael> List displays (list comprehensions) are defined as (from Michael> http://docs.python.org/ref/lists.html ) Note that the BNF there is mostly designed for human consumption. Have you verified that the ambiguity is also present in the Grammar file? Skip From ClassDevelopment at A-SoftTech.com Sun Apr 23 12:36:52 2006 From: ClassDevelopment at A-SoftTech.com (Andreas Nahr) Date: Sun, 23 Apr 2006 12:36:52 +0200 Subject: [Python-Dev] [Mono-dev] IronPython Performance References: <20060421230154.1586.qmail@web81205.mail.mud.yahoo.com> Message-ID: <001201c666c1$d6bcda60$6464a8c0@ansirua> Obviously all three might add to that. However I think that number 1 COULD already be enough for the noticed slowdown of 3 times. It depends a lot on which Classes/ Methods IronPython is using (And I guess it is now optimized towards the MS CLR). If you just have pure integer or floating point math in simple loops the difference between the CLR and Mono istn't very big (some percent usually), but as soon as you use other classes (even from the very core of the BCL) you will often see huge differences. It seems that the current goal of mono is to get everything running (In a way as simple as possible) instead of trying to optimize performance (e.g. I posted a patch some time ago to improve performance of common string operations of up to 100% which was turned down because it would have added about 100 LOC). 
Or another example: if your benchmark writes numbers to files or the console the culpit might be NumberFormatter, which itself is usually more than 5 times slower than on the CLR. But there are several others which could be the cause of that. By the way: Which optimizations did you enable when running IronPython on Mono? These also could potentially make a big speed difference. Also I guess number 2 could also make some difference in speed. I am aware of the following issues that might affect the results: 1) Mono vs. Microsoft's CLR. 2) Python 2.1 vs. Python 2.4 3) Changes in IronPython over the last two years. From michael.foord at resolversystems.com Mon Apr 24 15:28:50 2006 From: michael.foord at resolversystems.com (Michael Foord) Date: Mon, 24 Apr 2006 14:28:50 +0100 Subject: [Python-Dev] Python Grammar Ambiguity Message-ID: <444CD292.6080506@resolversystems.com> Hello all, I'm working on a parser for part of the Python language (expressions but not statements basically). I'm using PLY to generate the parser and it's mostly done. I've hit on what looks like a fundamental ambiguity in the Python grammar which is difficult to get round with PLY; and I'm wondering *why* the grammar is defined in this way. It's possible there is a reason that I've missed, which means I need to rethink my workaround. List displays (list comprehensions) are defined as (from http://docs.python.org/ref/lists.html ) : test ::= and_test ( "or" and_test )* | lambda_form testlist ::= test ( "," test )* [ "," ] list_display ::= "[" [listmaker] "]" listmaker ::= expression ( list_for | ( "," expression )* [","] ) list_iter ::= list_for | list_if list_for ::= "for" expression_list "in" testlist [list_iter] list_if ::= "if" test [list_iter] The problem is that list_for is defined as : "for" expression_list "in" testlist This allows arbitrary expressions in the 'assignment' part of a list comprehension. As a result, the following is valid syntax according to the grammar : [x for x + 1 in y] Obviously it isn't valid ! This parses to an ast, but the syntax error is thrown when you compile the resulting ast. The problem is that for the basic case of a list comprehension ( ``[x for x in y]``), ``x in y`` is a valid expression. That makes it extremely hard to disambiguate the grammar so that the ``in`` is treated correctly, and not part of an expression. My question is, why are arbitrary expressions allowed here in the grammar ? As far as I can tell, only identifiers (nested in parentheses or brackets) are valid here. I've got round the problem by creating a new node 'identifier_list' and just having that return the expected syntax tree (actually an expression list). This gets round the ambiguity [#]_. It worries me that there might be a valid expression allowed here that I haven't thought of. My current rules allow anything that looks like ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything else be allowed ? If not, why not modify the grammar so that the compiler has less possible invalid syntax trees to work with ? (Also the grammar definition of string conversion is wrong as it states that a trailing comma is valid, which isn't the case. As far as I can tell that is necessary to allow nesting string conversions.) Fuzzyman http://www.voidspace.org.uk/python/index.shtml .. [#] If I could make precedence work in PLY I could also solve it I guess. However I can't. 
:-) From dinov at exchange.microsoft.com Mon Apr 24 17:47:19 2006 From: dinov at exchange.microsoft.com (Dino Viehland) Date: Mon, 24 Apr 2006 08:47:19 -0700 Subject: [Python-Dev] [IronPython] [Mono-dev] IronPython Performance In-Reply-To: <20060424095239.GQ3151@debian.org> Message-ID: <4039D552ADAB094BB1EA670F3E96214E02970979@df-foxhound-msg.exchange.corp.microsoft.com> On the recursion limits: Until beta 6 IronPython didn't have proper support for limiting recursion depth. There was some minor support there, but it wasn't right. In beta 6 we have full support for limiting recursion depth, but by default we allow infinite recursion. If the user explicitly sets the recursion limit then we'll go ahead and enforce it. But all the reasons you outline below are great explanations of the differences. Do you want to help develop Dynamic languages on CLR? (http://members.microsoft.com/careers/search/details.aspx?JobID=6D4754DE-11F0-45DF-8B78-DC1B43134038) -----Original Message----- From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Paolo Molaro Sent: Monday, April 24, 2006 2:53 AM To: Brent Fulgham Cc: shootout-list at lists.alioth.debian.org; python-dev at python.org; users at lists.ironpython.com; mono-devel-list at lists.ximian.com Subject: Re: [IronPython] [Mono-dev] IronPython Performance [I continued your large cross-post, even if I'm not subscribed to python-dev: hopefully the moderators will approve the post if needed.] On 04/21/06 Brent Fulgham wrote: > A while ago (nearly two years) an interesting paper was published by Jim Hugunin > (http://www.python.org/pycon/dc2004/papers/9/) crowing about the significant > speed advantage observed in this interpreter running on top of Microsoft's .NET > VM stack. I remember being surprised by these results, as Python has always > seemed fairly fast for an interpreted language. > > I've been working on the Programming Language Shootout for a few years now, and > after growing tired of the repeated requests to include scores for IronPython I > finally added the implementation this week. > > Comparison of IronPython (1.0 Beta 5) to Python 2.4.3 [1] and IronPython (1.0 Beta 1) > to Python 2.4.2 [2] do not match the results reported in the 2004 paper. In fact, > IronPython is consistenly 3 to 4 times slower (in some cases worse than that), and > seemed to suffer from recursion/stack limitations. You're comparing different benchmarks, so it's not a surprise that you get different numbers. I only tried two benchmarks from the paper, pystone and the globals one (attached) and the results are largely equivalent to the ones from the paper: time mono IronPython-1.0-Beta6/IronPythonConsole.exe -O python-globals-bench2.py real 0m0.861s user 0m0.815s sys 0m0.043s (Note the above includes the startup time that I guess was not considered in the paper and is 0.620 seconds running test(1): just increasing the number of iterations will make the difference converge at about 10x faster mono vs python2.4). time python2.4 -O python-globals-bench2.py real 0m2.239s user 0m2.202s sys 0m0.025s python2.4 -O /usr/lib/python2.4/test/pystone.py Pystone(1.1) time for 50000 passes = 1.4 This machine benchmarks at 35714.3 pystones/second mono IronPython-1.0-Beta6/IronPythonConsole.exe -O /usr/lib/python2.4/test/pystone.py Pystone(1.1) time for 50000 passes = 0.989288 This machine benchmarks at 50541.4 pystones/second So IronPython is 40+% faster than CPython, like in the paper. The mono version I used is roughtly 1.1.14. 
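For anyone who wants to repeat the pystone comparison from inside the interpreter rather than from the shell, a minimal sketch (test.pystone ships with CPython; the pass count simply mirrors the 50000-pass runs quoted above):

from test import pystone

# pystones() returns (elapsed time, pystones/second)
benchtime, stones = pystone.pystones(50000)
print "Pystone time for 50000 passes = %.2f" % benchtime
print "This machine benchmarks at %.1f pystones/second" % stones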
As for the recursion limitations: what are they? I took the recursive benchmark from the shootout page and both CPython and IronPython need the call to setrecursionlimit (actually cpython needed it earlier for me, anyway). This might be a difference between IronPython beta 5 and the beta6 I'm using, though. It is also interesting because on my system running the attached python-ack.py with 8 as argument, mono/IronPython is 10% faster than CPython, while the shootout page reports CPython 2.4 50% faster. Feel free to investigate if this is because of improvements in mono or IronPython:) > I am aware of the following issues that might affect the results: > 1) Mono vs. Microsoft's CLR. This can certainly be a factor: some things in the MS runtime run faster and some slower than in the Mono runtime, so it depends which features are exercised the most in a particular benchmark. It is very likely that in most benchmarks the MS runtime will score better. > 2) Python 2.1 vs. Python 2.4 The python folks will have more details, but python2.4 seems to be faster than 2.1 and 2.3 in several areas. > 3) Changes in IronPython over the last two years. For some time the IronPython folks have focused on features and correctness issues, so I guess they'll start caring about performance in the future. It is also to note that IronPython now uses generics which have yet to see optimizations work in the mono runtime and libraries (they are a 2.0 feature). > I thought these findings might be interesting to some of the Python, IronPython, > and Mono developers. Let the flames begin! ;-) Why flames? Hard numbers are just that: useful information. It's just your conclusions that might be incorrect;-) I think the summary to take away is: *) Sometimes IronPython on mono is faster than CPython *) Often CPython is faster than IronPython on mono *) IronPython and mono are improving quickly. Thanks for the numbers: it would be nice if someone with time in his hands could rerun the benchmarks with ironpython beta6, mono from svn (or at least mono 1.1.14/1.1.15) and also with the MS runtime on windows. Better if this includes running pystone to relate to improvements in the published paper and the piethon benchmarks (on the MS runtime, in mono we need to implement some things that newer ironpythons use). In the meantime here are the piethon results running with IronPython 0.6 with a current mono. 
Mono results: b0 = 2.49623870849609 -- [2.59130096435547, 2.40117645263672] b1 = 0.473392486572266 -- [0.496170043945312, 0.450614929199219] b2 = 0.453083038330078 -- [0.543968200683594, 0.362197875976562] b3 = 0.967914581298828 -- [0.995307922363281, 0.940521240234375] b4 = 1.11585998535156 -- [1.23159027099609, 1.00012969970703] b5 = 3.56014633178711 -- [3.70928192138672, 3.4110107421875] b6 = 1.43364334106445 -- [1.42024230957031, 1.44704437255859] all done in 21.01 sec Python 2.4 results: b0 = 2.185000 -- [2.1800000000000002, 2.1900000000000004] b1 = 0.915000 -- [0.93999999999999995, 0.88999999999999879] b2 = 0.395000 -- [0.39999999999999991, 0.39000000000000057] b3 = 1.015000 -- [1.02, 1.0099999999999998] b4 = 0.585000 -- [0.58999999999999986, 0.58000000000000007] b5 = 0.995000 -- [0.99000000000000021, 1.0] b6 = 1.410000 -- [1.4199999999999999, 1.4000000000000004] all done in 15.00 sec Python 2.3 results: b0 = 2.270000 -- [2.2800000000000002, 2.2600000000000016] b1 = 0.955000 -- [0.98999999999999977, 0.91999999999999993] b2 = 0.395000 -- [0.39000000000000012, 0.39999999999999858] b3 = 1.230000 -- [1.2300000000000004, 1.2300000000000004] b4 = 0.805000 -- [0.80999999999999961, 0.80000000000000071] b5 = 1.145000 -- [1.1500000000000004, 1.1399999999999988] b6 = 1.500000 -- [1.5099999999999989, 1.4900000000000002] all done in 16.60 sec Since Guido designed it to excercise the tricky corners of the python implementation this could be a better general evaluation of the speed of a python runtime. Note that in the oscon paper, mono was reported to be 23 time slower than CPython (but 2 times slower excluding b5 where we had a really unoptimized codepath). Current mono is 27% slower than CPython 2.3 and 40% slower than CPython 2.4. So while the CPython performance increased, mono's rate of improvement is significantly faster. While having mono/IronPython be faster than CPython in more benchmarks would be nice (and I guess we could run a few profile runs if there is interest to see if some quick specific improvement could be done), it is not really necessary. Having mono run python code 1-2 times slower than CPython is pretty good. People interested in raw performance could just write the code in C#, gluelessly use it from python code and get the 1-2 orders of magnitude improvements visible also in the shootout comparison of C# vs python. Thanks. lupus -- ----------------------------------------------------------------- lupus at debian.org debian/rules lupus at ximian.com Monkeys do it better _______________________________________________ users mailing list users at lists.ironpython.com http://lists.ironpython.com/listinfo.cgi/users-ironpython.com From michael.foord at resolversystems.com Mon Apr 24 19:03:06 2006 From: michael.foord at resolversystems.com (Michael Foord) Date: Mon, 24 Apr 2006 18:03:06 +0100 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <ca471dc20604240712h64fc5741r43c891464944e1be@mail.gmail.com> References: <444CD3DB.7050704@voidspace.org.uk> <ca471dc20604240712h64fc5741r43c891464944e1be@mail.gmail.com> Message-ID: <444D04CA.5000902@resolversystems.com> (oops - should have gone to list) Guido van Rossum wrote: > Well, yes, the syntax is supposed to be something like "for varlist in > testlist". Could you report this as a doc bug (if you found this > information in the docs)? > I think the documentation (which does put expression_list there) reflects the current state of the parser. 
First of all the grammar in SVN also has expression_list there *and* the following does successfully parse to an ast (but fails when you compile the ast) : [x for x + 1 in y] All the best, Michael Foord > On 4/24/06, Michael Foord <fuzzyman at voidspace.org.uk> wrote: > >> Hello all, >> >> I'm working on a parser for part of the Python language (expressions but >> not statements basically). I'm using PLY to generate the parser and it's >> mostly done. >> >> I've hit on what looks like a fundamental ambiguity in the Python >> grammar which is difficult to get round with PLY; and I'm wondering >> *why* the grammar is defined in this way. It's possible there is a >> reason that I've missed, which means I need to rethink my workaround. >> >> List displays (list comprehensions) are defined as (from >> http://docs.python.org/ref/lists.html ) : >> >> >> test ::= and_test ( "or" and_test )* | lambda_form >> testlist ::= test ( "," test )* [ "," ] >> list_display ::= "[" [listmaker] "]" >> listmaker ::= expression ( list_for | ( "," expression )* [","] ) >> list_iter ::= list_for | list_if >> list_for ::= "for" expression_list "in" testlist [list_iter] >> list_if ::= "if" test [list_iter] >> >> The problem is that list_for is defined as : >> >> "for" expression_list "in" testlist >> >> This allows arbitrary expressions in the 'assignment' part of a list >> comprehension. >> >> As a result, the following is valid syntax according to the grammar : >> >> [x for x + 1 in y] >> >> Obviously it isn't valid ! This parses to an ast, but the syntax error >> is thrown when you compile the resulting ast. >> >> The problem is that for the basic case of a list comprehension ( ``[x >> for x in y]``), ``x in y`` is a valid expression. That makes it >> extremely hard to disambiguate the grammar so that the ``in`` is treated >> correctly, and not part of an expression. >> >> My question is, why are arbitrary expressions allowed here in the >> grammar ? As far as I can tell, only identifiers (nested in parentheses >> or brackets) are valid here. I've got round the problem by creating a >> new node 'identifier_list' and just having that return the expected >> syntax tree (actually an expression list). This gets round the ambiguity >> [#]_. >> >> It worries me that there might be a valid expression allowed here that I >> haven't thought of. My current rules allow anything that looks like >> ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything >> else be allowed ? >> >> If not, why not modify the grammar so that the compiler has less >> possible invalid syntax trees to work with ? >> >> (Also the grammar definition of string conversion is wrong as it states >> that a trailing comma is valid, which isn't the case. As far as I can >> tell that is necessary to allow nesting string conversions.) >> >> Fuzzyman >> http://www.voidspace.org.uk/python/index.shtml >> >> >> .. [#] If I could make precedence work in PLY I could also solve it I >> guess. However I can't. 
:-) >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > > From fuzzyman at voidspace.org.uk Mon Apr 24 19:23:50 2006 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 24 Apr 2006 18:23:50 +0100 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <17485.2000.827380.667816@montanaro.dyndns.org> References: <444CD3DB.7050704@voidspace.org.uk> <17485.2000.827380.667816@montanaro.dyndns.org> Message-ID: <444D09A6.5040907@voidspace.org.uk> skip at pobox.com wrote: > Michael> I've hit on what looks like a fundamental ambiguity in the > Michael> Python grammar which is difficult to get round with PLY; and > Michael> I'm wondering *why* the grammar is defined in this way. > > Michael, > > You refer to the ref manual documentation: > > Michael> List displays (list comprehensions) are defined as (from > Michael> http://docs.python.org/ref/lists.html ) > > Note that the BNF there is mostly designed for human consumption. Have you > verified that the ambiguity is also present in the Grammar file? > From : http://svn.python.org/view/python/tags/r243/Grammar/Grammar?rev=43414&view=auto list_for: 'for' exprlist 'in' testlist_safe [list_iter] So in the Python grammar list_for *is* defined as an expression list. That follows, because using the parser module I can create an ast for a list comprehension like the following : import parser expr = '[1 for 1 in n]\n' ast = parser.expr(expr) print parser.compileast(ast) Traceback (most recent call last): File "ast_example.py", line 6, in ? print parser.compileast(ast) SyntaxError: can't assign to literal The syntax error is thrown at the compile stage, not the parse stage. Having list_for being defined in terms of something like varlist makes more sense, but isn't how the grammar is done currently. Michael Foord > Skip > > From guido at python.org Mon Apr 24 20:01:51 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 24 Apr 2006 11:01:51 -0700 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <444D04CA.5000902@resolversystems.com> References: <444CD3DB.7050704@voidspace.org.uk> <ca471dc20604240712h64fc5741r43c891464944e1be@mail.gmail.com> <444D04CA.5000902@resolversystems.com> Message-ID: <ca471dc20604241101j2e3d2de6sc49be604f282b2c4@mail.gmail.com> This is probably because we have a similar ambiguity in assignments: the grammar says something like exprlist ('=' exprlist)* but what is actually desired is (varlist '=')* exprlist Unfortunately the latter is not LL1 so we lie to the parser and tell it the first form, and then in the code generator checks that the actual parse tree for all exprlists except for the last conforms to the more limited syntax for varlist (which is only in our head). If not, it issues a "phase 2" SyntaxError. (One that doesn't have the exact column information.) We didn't have to do this for the syntax of the for-statement, list-comprehensions etc. because the ambiguity doesn't exist there (the 'for' keyword disambiguates the situation); but the current approach allows more code sharing in the code generator. I suggest you go ahead and write the second form for your own parser. Coming up with the correct rules for varlist is not hard. 
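The same two-phase behaviour shows up for ordinary assignments at the interactive prompt: the parse succeeds against the permissive exprlist rule, and it is the code generator that rejects the bad target. A sketch of such a session (the exact message wording may vary between versions):

>>> compile("x + 1 = 2", "<test>", "exec")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
SyntaxError: can't assign to operator (<test>, line 1)
>>> compile("[x for x + 1 in y]", "<test>", "exec")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
SyntaxError: can't assign to operator (<test>, line 1)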
--Guido On 4/24/06, Michael Foord <michael.foord at resolversystems.com> wrote: > (oops - should have gone to list) > > Guido van Rossum wrote: > > Well, yes, the syntax is supposed to be something like "for varlist in > > testlist". Could you report this as a doc bug (if you found this > > information in the docs)? > > > > I think the documentation (which does put expression_list there) > reflects the current state of the parser. > > First of all the grammar in SVN also has expression_list there *and* the > following does successfully parse to an ast (but fails when you compile > the ast) : > > [x for x + 1 in y] > > All the best, > > > Michael Foord > > > > On 4/24/06, Michael Foord <fuzzyman at voidspace.org.uk> wrote: > > > >> Hello all, > >> > >> I'm working on a parser for part of the Python language (expressions but > >> not statements basically). I'm using PLY to generate the parser and it's > >> mostly done. > >> > >> I've hit on what looks like a fundamental ambiguity in the Python > >> grammar which is difficult to get round with PLY; and I'm wondering > >> *why* the grammar is defined in this way. It's possible there is a > >> reason that I've missed, which means I need to rethink my workaround. > >> > >> List displays (list comprehensions) are defined as (from > >> http://docs.python.org/ref/lists.html ) : > >> > >> > >> test ::= and_test ( "or" and_test )* | lambda_form > >> testlist ::= test ( "," test )* [ "," ] > >> list_display ::= "[" [listmaker] "]" > >> listmaker ::= expression ( list_for | ( "," expression )* [","] ) > >> list_iter ::= list_for | list_if > >> list_for ::= "for" expression_list "in" testlist [list_iter] > >> list_if ::= "if" test [list_iter] > >> > >> The problem is that list_for is defined as : > >> > >> "for" expression_list "in" testlist > >> > >> This allows arbitrary expressions in the 'assignment' part of a list > >> comprehension. > >> > >> As a result, the following is valid syntax according to the grammar : > >> > >> [x for x + 1 in y] > >> > >> Obviously it isn't valid ! This parses to an ast, but the syntax error > >> is thrown when you compile the resulting ast. > >> > >> The problem is that for the basic case of a list comprehension ( ``[x > >> for x in y]``), ``x in y`` is a valid expression. That makes it > >> extremely hard to disambiguate the grammar so that the ``in`` is treated > >> correctly, and not part of an expression. > >> > >> My question is, why are arbitrary expressions allowed here in the > >> grammar ? As far as I can tell, only identifiers (nested in parentheses > >> or brackets) are valid here. I've got round the problem by creating a > >> new node 'identifier_list' and just having that return the expected > >> syntax tree (actually an expression list). This gets round the ambiguity > >> [#]_. > >> > >> It worries me that there might be a valid expression allowed here that I > >> haven't thought of. My current rules allow anything that looks like > >> ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything > >> else be allowed ? > >> > >> If not, why not modify the grammar so that the compiler has less > >> possible invalid syntax trees to work with ? > >> > >> (Also the grammar definition of string conversion is wrong as it states > >> that a trailing comma is valid, which isn't the case. As far as I can > >> tell that is necessary to allow nesting string conversions.) > >> > >> Fuzzyman > >> http://www.voidspace.org.uk/python/index.shtml > >> > >> > >> .. 
[#] If I could make precedence work in PLY I could also solve it I > >> guess. However I can't. :-) > >> > >> _______________________________________________ > >> Python-Dev mailing list > >> Python-Dev at python.org > >> http://mail.python.org/mailman/listinfo/python-dev > >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > >> > >> > > > > > > -- > > --Guido van Rossum (home page: http://www.python.org/~guido/) > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From amk at amk.ca Mon Apr 24 21:05:49 2006 From: amk at amk.ca (A.M. Kuchling) Date: Mon, 24 Apr 2006 15:05:49 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <444CFD14.70601@gmail.com> References: <444CFD14.70601@gmail.com> Message-ID: <20060424190549.GA18585@rogue.amk.ca> On Mon, Apr 24, 2006 at 12:30:12PM -0400, Alan McIntyre wrote: > My unglamorous proposal is to review bugs & patches (starting with the > oldest) and resolve at least 200 of them. Is that too much? Too few? > I'll fix as many as possible during the SoC time frame, but I wanted to > set a realistically achievable minimum for the proposal. If anybody can > offer helpful feedback on a good minimum number I'd appreciate it. I'd suggest 75 or maybe 100 bugs or patches, not 200. Let's assume you'll spend 60 days working on the project. While some items are trivial (documentation typos, small bugfixes), you'll probably exhaust those pretty quickly. If a bug has been sitting around in the bug tracker for 3 years, it's probably because the bug isn't trivial: it may break compatibility to fix it, or the fix requires significant redesign of a component. 200 bugs is a little over 3 per day. During bug days I've spent three or four hours working on a single bug, which implies two bugs/day. So if you're working 8-hour days for those two months, you might be able to process about 120 items. Therefore, 75 seems a reasonable number, a little more than a bug a day. --amk From guido at python.org Mon Apr 24 20:34:00 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 24 Apr 2006 11:34:00 -0700 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <444C7224.1010902@v.loewis.de> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> Message-ID: <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> On 4/23/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Nick Coghlan wrote: > > Do we want to add a "released" context manager to the threading module for > > 2.5? > > I don't think that should be added. I would consider it a dangerous > programming style: if the lock merely doesn't "need" to be held (i.e. > if it isn't necessary, but won't hurt), one should just keep holding > the lock. If it is essential to release the lock, because the code > would otherwise deadlock, the code should be dramatically revised > to avoid that situation, e.g. by redefining the granularity of the > lock, and moving the with statements accordingly. 
Actually, what Nick describes is *exactly* how one should write code using a condition variable: LOCK while nothing to do: UNLOCK wait for the condition variable (or sleep, or whatever) LOCK # here we have something to do with the lock held remove the to-do item UNLOCK except that the outer LOCK/UNLOCK pair should be using a try/except and the inner UNLOCK/LOCK pair should too. I don't see how you can do this easily by rewriting the code; the rewrite would be considerably ugly (or requires a GOTO :-). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From jcarlson at uci.edu Mon Apr 24 20:51:30 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Mon, 24 Apr 2006 11:51:30 -0700 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <444C9FBE.2060709@renet.ru> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> <444C9FBE.2060709@renet.ru> Message-ID: <20060424114024.66D0.JCARLSON@uci.edu> There exists various C and Python implementations of both AVL and Red-Black trees. For users of Python who want to use AVL and/or Red-Black trees, I would urge them to use the Python implementations. In the case of *needing* the speed of a C extension, there already exists a CPython extension module for AVL trees: http://www.python.org/pypi/pyavl/1.1 I would suggest you look through some suggested SoC projects in the wiki: http://wiki.python.org/moin/SummerOfCode - Josiah "Vladimir 'Yu' Stepanov" <vys at renet.ru> wrote: > > I would like to participate in Google Summer of Code. The idea consists in > creation of a specialized class for work with binary trees AVL and RB. The > part of ideas is taken from Parrot (Perl6) where for pair values the > specialized type is stipulated. As advantages it is possible to note: > * High speed. On simple types at quantity of elements up to 10000 speed of > work concedes to the standard dictionary of all in 2.7 times. > * Safety of work in a multithread operating mode. > * The method for frosts of object is stipulated. > * The data storage is carried out in the sorted kind. > * A high overall performance `item' and `iteritem' methods as it is not > necessary to create new objects. > * At lots of objects in a tree memory is used much less intensively. > * Absence of collisions. As consequence, it is impossible to generate bad > data for creation DoS of the attacks influencing the dictionary of > transferred arguments. > * For objects existence of a method __ hash __ is not necessary. > * The same kind of the dictionary by means of the overloaded operations > probably change of its behaviour so that it supported the > multivariational > representation based on stratification given and subsequent recoils. > > Additional requirements to key objects: > * The opportunity of comparison of key objects function cmp is necessary. > > There are the reasons, braking development of this project: > * Compulsion of heading of a module `gc' for each compound object (this > structure is unessential to some objects, but updating `gc' the module > is required). > * Absence of standard object the pair, probably depends on the > previous item. > > Lacks of a binary tree: > * Average time of search for hash - O (1), and for trees - O (log2 N). > * A lot of memory under a single element of a tree: > (3*sizeof (void *) + sizeof (int))*2, > one element is used rather - the pair, the second site of memory is > allocated under node of a tree. 
> > In protection of object "pair": > * The logic of methods of the given object noticeably is easier than > tuple, > that as a result can affect speed of work of the program in the best > party. > * Alignment on 8 bytes has no big sense at present architecture where in > cache sample a minimum on 64 bytes is made. Use of type "pair" will give > an appreciable prize at use of alignment in 4 bytes. Otherwise on 64-bit > platforms it is much more favourable to use tuple object. > > The given project can demand processing of the module `gcmodule.c' and > `tupleobject.c'. It is necessary to reduce the size of static objects, > for this > purpose the opportunity is necessary is transparent to pass objects not > having > direct support from the module `gcmodule.c'. > > Also it will be partially necessary to process the module `obmalloc.c' > for more > effective distribution of memory. > > I shall be glad to answer questions on this theme. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jcarlson%40uci.edu From ncoghlan at gmail.com Mon Apr 24 20:48:50 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 04:48:50 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> Message-ID: <444D1D92.8030601@gmail.com> Paul Moore wrote: > 2. Nick, what can we do to persuade you to go back to the a1 version, > and simply look at @contextmanager? Using two names to describe three different things isn't intuitive for anybody. You might persuade me to change the names around, but you aren't going to persuade me to go back to using "context" for either of the concrete object types. If we use "context object" as the name for any kind of concrete object, decimal.Context has to be an example of it. It wasn't in the alpha 1 docs, which was one of the main points of confusion (@contextmanager was the other). The alpha 2 docs have ended up sidestepping the issue by not using that term at all (introducing "context specifier" instead). The extent to which I'm willing to consider reversion to alpha 1 terminology is to make it so that the context management protocol is the protocol with the single __context__ method. However, the objects with the __enter__/__exit__ methods need to be called something more specific than context objects (more on that below). > I've proposed splitting it into > two, but that seems not to suit you (you've never responded to it > specifically, so I may be misreading your silence here). Wanting to have two names for the same function tells me there's a problem with the terminology, not that we should actually have two names for the same function :) I don't want to be fielding the question "what's the difference between contextlib.context and contextlib.contextmanager?" "Well, there isn't actually any difference." "Why two functions then?" "Ummm. . ." 
So, if we really want to use the alpha 1 terminology where a context manager provides just a __context__ method, then I think the right answer is to introduce a separate term for the objects that are returned by that method. However, while coming up with "when requested by the with statement, a context specifier object provides a context manager object to set up and tear down the desired runtime context" was pretty easy, I really struggle with that sentence when the starting object is a context manager. Trying "a context manager object provides a context object to set up and tear down the desired runtime context" seems initially appealing, until you think about all the "context objects" that already exist in various domains, such as: decimal arithmetic contexts GUI toolkit drawing contexts parsing & compilation contexts The fact that those are actually all candidate context specifiers, along with the temptation to abbreviate the term to just 'context', results in confusion just as bad as with the original PEP terminology. However, if you can successfully fill in the blank in: "when requested by the with statement, a context manager object provides a context <blank> object to set up and tear down the desired runtime context" then we can simply change the name of the contextmanager decorator and the various ContextManager objects in the implementation to that new term, and make the appropriate changes to the documentation. From my POV, the fact that the specifier says what the context should be, and the manager makes that happen makes sense. I suppose you could call the second object an effector, since it effects the necessary changes: "when requested by the with statement, a context manager object provides a context effector object to set up and tear down the desired runtime context" Actually using the transitive verb form of effect is fairly unnatural English, though :) > One thing I would say is that Nick has added a section on context > types to the "Built-in types" section of the libref. At the moment, it > reflects his terminology, but even if the terminology gets reverted to > a1 style, I'd like to see that stay (suitably reworded, of course!) > It's a useful addition. And one with a fairly long history - I first drafted something along those lines nearly 10 months ago [1]. FWIW, that discussion last year is the biggest reason I've been trying so hard to preserve "context manager" as the name for the objects with the __enter__/__exit__ methods. We spent a lot of effort coming up with it, and the way I wrote those draft docs (and tried to convey in the PEP's standard terminology section), it makes far more sense to me for that term to continue to apply to the objects with __enter__ and __exit__ methods than it does to relocate it to the johnny-come-lately objects which only have a __context__ method. But if we can agree on a different name for context managers (context objects is *not* an option), then changing those docs wouldn't be difficult. > Oh, and if there's a huge group of people who prefer Nick's > terminology, now is the time to shout! So far I've only got Greg commenting that it would be a shame for decimal.Context to not be a 'context' anymore. And he loses that either way - the only difference in that respect between alpha 1 and alpha 2 is that the alpha 2 docs don't call *any* kind of concrete object a context object. Cheers, Nick. 
[1] http://mail.python.org/pipermail/python-dev/2005-July/054658.html -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From pje at telecommunity.com Mon Apr 24 20:57:18 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 14:57:18 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444D1D92.8030601@gmail.com> References: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> Message-ID: <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >Using two names to describe three different things isn't intuitive for >anybody. Um, what three things? I only count two: 1. Objects with __context__ 2. Objects with __enter__ and __exit__ What's the third thing? From guido at python.org Mon Apr 24 21:03:11 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 24 Apr 2006 12:03:11 -0700 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <444BF818.4050509@v.loewis.de> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> <381244830604231403j1897f035h6e7c9a4fdf4c906f@mail.gmail.com> <444BF818.4050509@v.loewis.de> Message-ID: <ca471dc20604241203t357c0a36me6fa45ea7b96bc59@mail.gmail.com> On 4/23/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Kirat Singh wrote: > > The reason I looked into this to begin with was that my code used up a > > bunch of memory which was traceable to lots of little objects with > > instance dicts, so it seemed that if instancedicts took less memory I > > wouldn't have to go and add __slots__ to a bunch of my classes, or > > rewrite things as tuples/lists, etc. > > Ah. In that case, I would be curious if tuning PyDict_MINSIZE could > help. If you have many objects of the same type, am I right assuming > they all have the same number of dictionary keys? If so, what is the > dictionary size? Do they use ma_smalltable, or do they have an extra > ma_table? But the space savings by using __slots__ is so much bigger! (And less work than hacking the C code too. 
:-) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tim.peters at gmail.com Mon Apr 24 21:10:29 2006 From: tim.peters at gmail.com (Tim Peters) Date: Mon, 24 Apr 2006 15:10:29 -0400 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> Message-ID: <1f7befae0604241210g1a1a323bqbe9561fe86ed90db@mail.gmail.com> [Guido] > Actually, what Nick describes is *exactly* how one should write code > using a condition variable: > > LOCK > while nothing to do: > UNLOCK > wait for the condition variable (or sleep, or whatever) > LOCK > # here we have something to do with the lock held > remove the to-do item > UNLOCK > > except that the outer LOCK/UNLOCK pair should be using a try/except > and the inner UNLOCK/LOCK pair should too. I don't see how you can do > this easily by rewriting the code; the rewrite would be considerably > ugly (or requires a GOTO :-). That didn't make much sense to me. If you're using a condition variable `cv`, the way that should be written is: cv.acquire() try: while nothing to do: cv.wait() # which unlocks on entry, and locks before return do something finally: cv.release() IOW, there is no "inner UNLOCK/LOCK" for the user to worry about (although the implementation of wait() has to worry about it). Does with cv: work to replace the outer (== only) acquire/try/finally/release dance? If so, that seems like plenty to me. From aahz at pythoncraft.com Mon Apr 24 21:24:16 2006 From: aahz at pythoncraft.com (Aahz) Date: Mon, 24 Apr 2006 12:24:16 -0700 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> References: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> Message-ID: <20060424192416.GB20795@panix.com> On Mon, Apr 24, 2006, Phillip J. Eby wrote: > At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >> >>Using two names to describe three different things isn't intuitive for >>anybody. > > Um, what three things? I only count two: > > 1. Objects with __context__ > 2. Objects with __enter__ and __exit__ > > What's the third thing? The actual context that's used during the execution of BLOCK. It does not exist as a concrete object, but we must be able to refer to it in documentation. I called it a "namespace" in my docs, which I now realize is incorrect, but it's not entirely clear to me how to describe it. Perhaps "wrapper" will do. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." 
--Richard Bach From brett at python.org Mon Apr 24 21:36:10 2006 From: brett at python.org (Brett Cannon) Date: Mon, 24 Apr 2006 12:36:10 -0700 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <444CD292.6080506@resolversystems.com> References: <444CD292.6080506@resolversystems.com> Message-ID: <bbaeab100604241236v4ec722am26f47ca759e16154@mail.gmail.com> On 4/24/06, Michael Foord <michael.foord at resolversystems.com> wrote: > Hello all, > > I'm working on a parser for part of the Python language (expressions but > not statements basically). I'm using PLY to generate the parser and it's > mostly done. > > I've hit on what looks like a fundamental ambiguity in the Python > grammar which is difficult to get round with PLY; and I'm wondering > *why* the grammar is defined in this way. It's possible there is a > reason that I've missed, which means I need to rethink my workaround. > > List displays (list comprehensions) are defined as (from > http://docs.python.org/ref/lists.html ) : > Have you checked Grammar/Grammar to make sure it is the same? The grammar in the language ref is not always the most up-to-date version of the grammar. For instance, the grammar rule for 'test' is actually ``test: or_test ['if' or_test 'else' test] | lambdef``. -Brett > > test ::= and_test ( "or" and_test )* | lambda_form > testlist ::= test ( "," test )* [ "," ] > list_display ::= "[" [listmaker] "]" > listmaker ::= expression ( list_for | ( "," expression )* [","] ) > list_iter ::= list_for | list_if > list_for ::= "for" expression_list "in" testlist [list_iter] > list_if ::= "if" test [list_iter] > > The problem is that list_for is defined as : > > "for" expression_list "in" testlist > > This allows arbitrary expressions in the 'assignment' part of a list > comprehension. > > As a result, the following is valid syntax according to the grammar : > > [x for x + 1 in y] > > Obviously it isn't valid ! This parses to an ast, but the syntax error > is thrown when you compile the resulting ast. > > The problem is that for the basic case of a list comprehension ( ``[x > for x in y]``), ``x in y`` is a valid expression. That makes it > extremely hard to disambiguate the grammar so that the ``in`` is treated > correctly, and not part of an expression. > > My question is, why are arbitrary expressions allowed here in the > grammar ? As far as I can tell, only identifiers (nested in parentheses > or brackets) are valid here. I've got round the problem by creating a > new node 'identifier_list' and just having that return the expected > syntax tree (actually an expression list). This gets round the ambiguity > [#]_. > > It worries me that there might be a valid expression allowed here that I > haven't thought of. My current rules allow anything that looks like > ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything > else be allowed ? > > If not, why not modify the grammar so that the compiler has less > possible invalid syntax trees to work with ? > > (Also the grammar definition of string conversion is wrong as it states > that a trailing comma is valid, which isn't the case. As far as I can > tell that is necessary to allow nesting string conversions.) > > Fuzzyman > http://www.voidspace.org.uk/python/index.shtml > > > .. [#] If I could make precedence work in PLY I could also solve it I > guess. However I can't. 
:-) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From pje at telecommunity.com Mon Apr 24 21:39:21 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 15:39:21 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <20060424192416.GB20795@panix.com> References: <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> At 12:24 PM 4/24/2006 -0700, Aahz wrote: >On Mon, Apr 24, 2006, Phillip J. Eby wrote: > > At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: > >> > >>Using two names to describe three different things isn't intuitive for > >>anybody. > > > > Um, what three things? I only count two: > > > > 1. Objects with __context__ > > 2. Objects with __enter__ and __exit__ > > > > What's the third thing? > >The actual context that's used during the execution of BLOCK. It does >not exist as a concrete object, Um, huh? It's a thing but it's not an object? I'm lost now. I don't see why we should introduce a concept that has no concrete existence into something that's hard enough to explain when you stick to the objects that actually exist. :) From aahz at pythoncraft.com Mon Apr 24 21:49:01 2006 From: aahz at pythoncraft.com (Aahz) Date: Mon, 24 Apr 2006 12:49:01 -0700 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> Message-ID: <20060424194901.GA3763@panix.com> On Mon, Apr 24, 2006, Phillip J. Eby wrote: > At 12:24 PM 4/24/2006 -0700, Aahz wrote: >>On Mon, Apr 24, 2006, Phillip J. Eby wrote: >>> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >>>> >>>>Using two names to describe three different things isn't intuitive for >>>>anybody. >>> >>> Um, what three things? I only count two: >>> >>> 1. Objects with __context__ >>> 2. Objects with __enter__ and __exit__ >>> >>> What's the third thing? >> >>The actual context that's used during the execution of BLOCK. It does >>not exist as a concrete object, > > Um, huh? It's a thing but it's not an object? I'm lost now. I don't see > why we should introduce a concept that has no concrete existence into > something that's hard enough to explain when you stick to the objects that > actually exist. 
:) Let's go back to a pseudo-coded with statement: with EXPRESSION [as NAME]: BLOCK What happens while BLOCK is being executed? Again, here's what I said originally: EXPRESSION returns a value that the with statement uses to create a context (a special kind of namespace). The context is used to execute the BLOCK. The block might end normally, get terminated by a break or return, or raise an exception. No matter which of those things happens, the context contains code to clean up after the block. Do you have an alternate proposal for describing this that works well for newbies? Forget about the internal mechanics of what really happens, we must have a simple way of describing the with block itself! -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From pje at telecommunity.com Mon Apr 24 21:58:16 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 15:58:16 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <20060424194901.GA3763@panix.com> References: <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060424155629.038be718@mail.telecommunity.com> At 12:49 PM 4/24/2006 -0700, Aahz wrote: >On Mon, Apr 24, 2006, Phillip J. Eby wrote: > > At 12:24 PM 4/24/2006 -0700, Aahz wrote: > >>On Mon, Apr 24, 2006, Phillip J. Eby wrote: > >>> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: > >>>> > >>>>Using two names to describe three different things isn't intuitive for > >>>>anybody. > >>> > >>> Um, what three things? I only count two: > >>> > >>> 1. Objects with __context__ > >>> 2. Objects with __enter__ and __exit__ > >>> > >>> What's the third thing? > >> > >>The actual context that's used during the execution of BLOCK. It does > >>not exist as a concrete object, > > > > Um, huh? It's a thing but it's not an object? I'm lost now. I don't see > > why we should introduce a concept that has no concrete existence into > > something that's hard enough to explain when you stick to the objects that > > actually exist. :) > >Let's go back to a pseudo-coded with statement: > > with EXPRESSION [as NAME]: > BLOCK > >What happens while BLOCK is being executed? Again, here's what I said >originally: > > EXPRESSION returns a value that the with statement uses to create a > context (a special kind of namespace). The context is used to > execute the BLOCK. The block might end normally, get terminated by > a break or return, or raise an exception. No matter which of those > things happens, the context contains code to clean up after the > block. > >Do you have an alternate proposal for describing this that works well for >newbies? No, I like your phrasing -- but it's quite concrete. EXPRESSION returns a value (object w/__context__) used to create a context (object w/__enter__ and __exit__). That's only two things. There is no *third* thing here. 
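In rough pseudocode (the names here are illustrative, not from any implementation), those two things show up in the statement's expansion like so:

    obj = EXPRESSION                # thing 1: an object with __context__
    ctx = obj.__context__()         # thing 2: an object with __enter__ and __exit__
    value = ctx.__enter__()         # bound to NAME if "as NAME" was given
    try:
        BLOCK
    finally:
        ctx.__exit__(*exc_details)  # (None, None, None) if BLOCK finished normally
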
From fuzzyman at voidspace.org.uk Mon Apr 24 21:21:36 2006 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 24 Apr 2006 20:21:36 +0100 Subject: [Python-Dev] Python Grammar Ambiguity In-Reply-To: <ca471dc20604241101j2e3d2de6sc49be604f282b2c4@mail.gmail.com> References: <444CD3DB.7050704@voidspace.org.uk> <ca471dc20604240712h64fc5741r43c891464944e1be@mail.gmail.com> <444D04CA.5000902@resolversystems.com> <ca471dc20604241101j2e3d2de6sc49be604f282b2c4@mail.gmail.com> Message-ID: <444D2540.2060302@voidspace.org.uk> Guido van Rossum wrote: > This is probably because we have a similar ambiguity in assignments: > the grammar says something like > > exprlist ('=' exprlist)* > > but what is actually desired is > > (varlist '=')* exprlist > > Unfortunately the latter is not LL1 so we lie to the parser and tell > it the first form, and then in the code generator checks that the > actual parse tree for all exprlists except for the last conforms to > the more limited syntax for varlist (which is only in our head). If > not, it issues a "phase 2" SyntaxError. (One that doesn't have the > exact column information.) > > We didn't have to do this for the syntax of the for-statement, > list-comprehensions etc. because the ambiguity doesn't exist there > (the 'for' keyword disambiguates the situation); but the current > approach allows more code sharing in the code generator. > > I suggest you go ahead and write the second form for your own parser. > Coming up with the correct rules for varlist is not hard. > > We've already done that, I was just checking there wasn't some case we'd missed were an expression was possible. Thanks Michael > --Guido > > On 4/24/06, Michael Foord <michael.foord at resolversystems.com> wrote: > >> (oops - should have gone to list) >> >> Guido van Rossum wrote: >> >>> Well, yes, the syntax is supposed to be something like "for varlist in >>> testlist". Could you report this as a doc bug (if you found this >>> information in the docs)? >>> >>> >> I think the documentation (which does put expression_list there) >> reflects the current state of the parser. >> >> First of all the grammar in SVN also has expression_list there *and* the >> following does successfully parse to an ast (but fails when you compile >> the ast) : >> >> [x for x + 1 in y] >> >> All the best, >> >> >> Michael Foord >> >> >> >>> On 4/24/06, Michael Foord <fuzzyman at voidspace.org.uk> wrote: >>> >>> >>>> Hello all, >>>> >>>> I'm working on a parser for part of the Python language (expressions but >>>> not statements basically). I'm using PLY to generate the parser and it's >>>> mostly done. >>>> >>>> I've hit on what looks like a fundamental ambiguity in the Python >>>> grammar which is difficult to get round with PLY; and I'm wondering >>>> *why* the grammar is defined in this way. It's possible there is a >>>> reason that I've missed, which means I need to rethink my workaround. 
>>>> >>>> List displays (list comprehensions) are defined as (from >>>> http://docs.python.org/ref/lists.html ) : >>>> >>>> >>>> test ::= and_test ( "or" and_test )* | lambda_form >>>> testlist ::= test ( "," test )* [ "," ] >>>> list_display ::= "[" [listmaker] "]" >>>> listmaker ::= expression ( list_for | ( "," expression )* [","] ) >>>> list_iter ::= list_for | list_if >>>> list_for ::= "for" expression_list "in" testlist [list_iter] >>>> list_if ::= "if" test [list_iter] >>>> >>>> The problem is that list_for is defined as : >>>> >>>> "for" expression_list "in" testlist >>>> >>>> This allows arbitrary expressions in the 'assignment' part of a list >>>> comprehension. >>>> >>>> As a result, the following is valid syntax according to the grammar : >>>> >>>> [x for x + 1 in y] >>>> >>>> Obviously it isn't valid ! This parses to an ast, but the syntax error >>>> is thrown when you compile the resulting ast. >>>> >>>> The problem is that for the basic case of a list comprehension ( ``[x >>>> for x in y]``), ``x in y`` is a valid expression. That makes it >>>> extremely hard to disambiguate the grammar so that the ``in`` is treated >>>> correctly, and not part of an expression. >>>> >>>> My question is, why are arbitrary expressions allowed here in the >>>> grammar ? As far as I can tell, only identifiers (nested in parentheses >>>> or brackets) are valid here. I've got round the problem by creating a >>>> new node 'identifier_list' and just having that return the expected >>>> syntax tree (actually an expression list). This gets round the ambiguity >>>> [#]_. >>>> >>>> It worries me that there might be a valid expression allowed here that I >>>> haven't thought of. My current rules allow anything that looks like >>>> ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything >>>> else be allowed ? >>>> >>>> If not, why not modify the grammar so that the compiler has less >>>> possible invalid syntax trees to work with ? >>>> >>>> (Also the grammar definition of string conversion is wrong as it states >>>> that a trailing comma is valid, which isn't the case. As far as I can >>>> tell that is necessary to allow nesting string conversions.) >>>> >>>> Fuzzyman >>>> http://www.voidspace.org.uk/python/index.shtml >>>> >>>> >>>> .. [#] If I could make precedence work in PLY I could also solve it I >>>> guess. However I can't. 
:-) >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> http://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >>>> >>>> >>>> >>> -- >>> --Guido van Rossum (home page: http://www.python.org/~guido/) >>> >>> >>> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > > From aahz at pythoncraft.com Mon Apr 24 22:19:02 2006 From: aahz at pythoncraft.com (Aahz) Date: Mon, 24 Apr 2006 13:19:02 -0700 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424155629.038be718@mail.telecommunity.com> References: <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <5.1.1.6.0.20060424155629.038be718@mail.telecommunity.com> Message-ID: <20060424201902.GA6269@panix.com> On Mon, Apr 24, 2006, Phillip J. Eby wrote: > At 12:49 PM 4/24/2006 -0700, Aahz wrote: >>On Mon, Apr 24, 2006, Phillip J. Eby wrote: >>> At 12:24 PM 4/24/2006 -0700, Aahz wrote: >>>>On Mon, Apr 24, 2006, Phillip J. Eby wrote: >>>>> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >>>>>> >>>>>>Using two names to describe three different things isn't intuitive for >>>>>>anybody. >>>>> >>>>> Um, what three things? I only count two: >>>>> >>>>> 1. Objects with __context__ >>>>> 2. Objects with __enter__ and __exit__ >>>>> >>>>> What's the third thing? >>>> >>>>The actual context that's used during the execution of BLOCK. It does >>>>not exist as a concrete object, >>> >>> Um, huh? It's a thing but it's not an object? I'm lost now. I don't see >>> why we should introduce a concept that has no concrete existence into >>> something that's hard enough to explain when you stick to the objects that >>> actually exist. :) >> >>Let's go back to a pseudo-coded with statement: >> >> with EXPRESSION [as NAME]: >> BLOCK >> >>What happens while BLOCK is being executed? Again, here's what I said >>originally: >> >> EXPRESSION returns a value that the with statement uses to create a >> context (a special kind of namespace). The context is used to >> execute the BLOCK. The block might end normally, get terminated by >> a break or return, or raise an exception. No matter which of those >> things happens, the context contains code to clean up after the >> block. >> >>Do you have an alternate proposal for describing this that works well for >>newbies? > > No, I like your phrasing -- but it's quite concrete. EXPRESSION returns a > value (object w/__context__) used to create a context (object w/__enter__ > and __exit__). > > That's only two things. 
There is no *third* thing here. What is EXPRESSION, then? Not the value it returns, but EXPRESSION itself -- does it have a name? What about the kinds of things we use for EXPRESSION? -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From p.f.moore at gmail.com Mon Apr 24 22:29:41 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 21:29:41 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444D1D92.8030601@gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> Message-ID: <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> On 4/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > Paul Moore wrote: > > I've proposed splitting it into > > two, but that seems not to suit you (you've never responded to it > > specifically, so I may be misreading your silence here). > > Wanting to have two names for the same function tells me there's a problem > with the terminology, not that we should actually have two names for the same > function :) My apologies. On rereading the code, I realised that I'd got muddled by the decorator implementation. [For anyone reading along, if you think I'm confused over contexts, watch me try to explain decorators!!!!] So my explanation was unclear. I'll try again. In the documentation of contextmanager, consider the examples: @contextmanager def tag(name): ... class Tag: ... @contextmanager def __context__(self): ... Now, tag should be a function which returns a context manager (a1 definition - object with a __context__ method) ("with" statement, item 1 of the definition). On the other hand, Tag.__context__ is a context manager's __context__ method, which according to item 2 of the definition of the "with" statement, should be a function which returns a context object (a1 definition - object with __enter__ and __exit__ methods). *These are two different types of function*. Just because contextmanager is implemented in such a way that decorated functions return objects with all 3 methods, doesn't make the functions conceptually the same. Does this help you to understand my concern? If not, I give up. (With one proviso - you insist that objects with __enter__ and __exit__ must supply __context__. I don't see why that's necessary, but I'll put that in another post). Paul. From p.f.moore at gmail.com Mon Apr 24 22:35:01 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 24 Apr 2006 21:35:01 +0100 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? Message-ID: <79990c6b0604241335l3b1ec972o56945bf38383c9b3@mail.gmail.com> The current, alpha 2, documentation insists that objects with __enter__ and __exit__ methods must also define __context__ in such a way that it returns self. I don't understand why that is necessary. I can understand that it is convenient, in cases where __context__ doesn't need to create a new object each time, but is it *necessary*? 
Specifically, is there a use case where you need to say "with x" where x is the return value of a __context__ method, or where you call __context__ on something you got from __context__? I can't find one in the PEP or in the code for contextlib... By insisting that things with __enter__ and __exit__ methods must implement __context__, there's a subtype relationship which I *think* means that Nick's insistence that the concepts are distinct, becomes difficult to support. But the terms are so confused now, that I'm utterly unable to frame my objection clearly. Can someone clarify this? Paul. From pje at telecommunity.com Mon Apr 24 22:40:21 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 16:40:21 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <20060424201902.GA6269@panix.com> References: <5.1.1.6.0.20060424155629.038be718@mail.telecommunity.com> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <5.1.1.6.0.20060424155629.038be718@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060424163129.02183eb0@mail.telecommunity.com> At 01:19 PM 4/24/2006 -0700, Aahz wrote: >What is EXPRESSION, then? Not the value it returns, but EXPRESSION >itself -- does it have a name? What about the kinds of things we use >for EXPRESSION? I read "EXPRESSION returns a value" as simply meaning that "value = EXPRESSION", i.e. that the result of computing EXPRESSION *is* the value. That's what it usually means when we talk about expressions returning a value -- that computing the expression produces a value. I still don't see a third thing here. "EXPRESSION returns a value" (Thing 1). That value is "used to create a context" (by calling __context__). This context (Thing 2) "is used to execute a block" (by calling __enter__ and __exit__). I don't get how you can have a difference between "EXPRESSION" and "value it returns" unless you're bringing functions into play. In everything else in Python, an expression *is* the value it returns. How could it be otherwise? Maybe you meant to write an explanation that included three objects, but what you wrote is actually a precise and accurate description of how things works. The value produced by EXPRESSION is used to create a context, and the context is used to execute the block. I don't know how you could explain it any more simply than that -- certainly not by adding a mysterious third gunman on the grassy knoll. :) From pje at telecommunity.com Mon Apr 24 22:50:32 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Mon, 24 Apr 2006 16:50:32 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444D1D92.8030601@gmail.com> References: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> Message-ID: <5.1.1.6.0.20060424164211.04317c20@mail.telecommunity.com> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >Wanting to have two names for the same function tells me there's a problem >with the terminology, not that we should actually have two names for the same >function :) It is purely an implementation detail of @contextmanager that it can be used to define __context__ methods. It would be perfectly legal to implement it in such a way that there were two helper classes, one with a __context__ method, and the other with the __enter__/__exit__ methods. I'm fine, however, with: 1. Changing the decorator name to @contextfactory 2. Requiring objects with __enter/__exit__ to also have __context__ (i.e., keep "context" as a subtype of "contextmanager") The truth is that @contextmanager is a misnomer anyway, because it doesn't turn the function into a context manager, it turns the function into a context factory - i.e., when called, it returns a context (that's also a contextmanager). From pje at telecommunity.com Mon Apr 24 23:27:45 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Mon, 24 Apr 2006 17:27:45 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <79990c6b0604241335l3b1ec972o56945bf38383c9b3@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> At 09:35 PM 4/24/2006 +0100, Paul Moore wrote: >The current, alpha 2, documentation insists that objects with >__enter__ and __exit__ methods must also define __context__ in such a >way that it returns self. > >I don't understand why that is necessary. > >I can understand that it is convenient, in cases where __context__ >doesn't need to create a new object each time, but is it *necessary*? > >Specifically, is there a use case where you need to say "with x" where >x is the return value of a __context__ method, or where you call >__context__ on something you got from __context__? I can't find one in >the PEP or in the code for contextlib... The only benefit to this is that it allows us to have only one decorator. If the decorator is defined as returning a thing with __enter__ and __exit__, and such things must also have a __context__, then there is no need for a separate decorator that's defined as returning things that have __context__, nor to tweak the docs to explain that the single decorator does both, nor to have two names for the same decorator. So, it's sort of a documentation hack. :) By the way, I'm now on board with the idea that @contextmanager should be renamed, preferably to @contextfactory. (The objection someone made about "factory" implying a factory function is off-base; @contextfactory indeed *returns* a factory function that returns a context, so actually the name is perfect, and works for both wrapping __context__ methods and standalone generators.) 
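A minimal sketch of the kind of object that satisfies both requirements at once -- the Tracer class below is made up for illustration, not anything from contextlib. It has __enter__ and __exit__, and meets the "must also define __context__" rule simply by returning itself:

    class Tracer(object):
        # Illustrative only: reports when the managed block starts and ends.
        def __context__(self):
            return self          # the required __context__ just returns self
        def __enter__(self):
            print 'entering block'
            return self
        def __exit__(self, exc_type, exc_value, traceback):
            print 'leaving block (exception: %r)' % (exc_type,)
            return False         # don't swallow exceptions

With that in place, "with Tracer():" and "with Tracer().__context__():" do exactly the same thing, which is what lets a single decorator cover both uses.
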
From facundobatista at gmail.com Mon Apr 24 23:41:17 2006 From: facundobatista at gmail.com (Facundo Batista) Date: Mon, 24 Apr 2006 18:41:17 -0300 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. In-Reply-To: <4449E433.1090401@vp.pl> References: <44488AC9.10304@vp.pl> <ca471dc20604210230p3e72d46boa5c3bcde0df79151@mail.gmail.com> <4448B00C.2040901@vp.pl> <ca471dc20604210359g1cdba17eo1d927255fcaf8e96@mail.gmail.com> <44490DBD.3050703@vp.pl> <e8a0972d0604211037y22957fdey8796418e707943f1@mail.gmail.com> <4449E433.1090401@vp.pl> Message-ID: <e04bdf310604241441i38968aeby9fb7ea82b053b3fe@mail.gmail.com> 2006/4/22, Mateusz Rukowicz <mateusz.rukowicz at vp.pl>: > I am now quite sure, what I would like to do, and is possible by you to > accept - code decimal in C, most important things about that would: I'd be glad to mentor this. Regards, . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From crutcher at gmail.com Mon Apr 24 23:54:09 2006 From: crutcher at gmail.com (Crutcher Dunnavant) Date: Mon, 24 Apr 2006 14:54:09 -0700 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. In-Reply-To: <444C70B7.7090909@v.loewis.de> References: <20060424015553.GA842@tummy.com> <444C70B7.7090909@v.loewis.de> Message-ID: <d49fe110604241454r4f81f411u59f93f4508d138e3@mail.gmail.com> On 4/23/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Sean Reifschneider wrote: > > Thoughts? > > In Python 2.5, exit(0) exits. +1 > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/crutcher%40gmail.com > -- Crutcher Dunnavant <crutcher at gmail.com> littlelanguages.com monket.samedi-studios.com From martin at v.loewis.de Tue Apr 25 01:03:46 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:03:46 +0200 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> Message-ID: <444D5952.6090306@v.loewis.de> Guido van Rossum wrote: > Actually, what Nick describes is *exactly* how one should write code > using a condition variable: > > LOCK > while nothing to do: > UNLOCK > wait for the condition variable (or sleep, or whatever) > LOCK > # here we have something to do with the lock held > remove the to-do item > UNLOCK > > except that the outer LOCK/UNLOCK pair should be using a try/except > and the inner UNLOCK/LOCK pair should too. I don't see how you can do > this easily by rewriting the code; the rewrite would be considerably > ugly (or requires a GOTO :-). I thought the trick is that the condition variable *atomically* releases the lock, waits for the condition, and then reacquires the condition variable. I.e. c = threading.Condition() c.lock() while nothing to do: c.wait() # here we have something to do with the lock held c.unlock() So the refactoring is to move the unlock/wait/lock sequence into the condition object. Using with, you could write this as with threading.Condition() as c: while nothing to do: c.wait() # do work So no need for an additional context manager here. 
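Spelled out as runnable code (the shared list and the producer side are made up for the example; only the Condition API itself is real), that consumer loop looks roughly like this:

    from __future__ import with_statement   # needed on Python 2.5
    import threading

    items = []                     # hypothetical shared to-do list
    cv = threading.Condition()

    def consumer():
        with cv:                   # acquires cv, releases it on the way out
            while not items:       # "while nothing to do"
                cv.wait()          # releases the lock, blocks, re-acquires it
            item = items.pop(0)    # the work is done with the lock held
        print 'consumed', item

    def producer(value):
        with cv:
            items.append(value)
            cv.notify()

Run consumer() in a worker thread and call producer() from another thread, and the hand-off works with no explicit acquire/release calls anywhere.
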
Regards, Martin From martin at v.loewis.de Tue Apr 25 01:07:48 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:07:48 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> Message-ID: <444D5A44.2020602@v.loewis.de> Neil Hodgson wrote: > I expect Microsoft means that Visual Studio Express will be > available free forever, not that you will always be able to download > Visual Studio 2005 Express. They normally only provide a particular > product version for a limited time after it has been superceded. Sure: they will remove download access to VS 2005 when VS 2007 comes available. Still, VS 2005 is available for download right now, and VS 2003 isn't (anymore). Regards, Martin From tjreedy at udel.edu Tue Apr 25 01:08:29 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 24 Apr 2006 19:08:29 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" References: <444CFD14.70601@gmail.com> Message-ID: <e2jlpe$348$1@sea.gmane.org> "Alan McIntyre" <alan.mcintyre at gmail.com> wrote in message news:444CFD14.70601 at gmail.com... > Hi all, > > I would like to participate in the Summer of Code as a student. At the > moment it looks like the Python tracker on SF has about 2100 open bugs > and patches, going back to late 2000. The latest weekly tracker summary says about 1300, + 200 RFEs. Still too many. > I'm assuming that a fair number > of these are no longer be applicable, have been fixed/implemented > already, etc., and somebody just needs to slog through the list and > figure out what to do with them. I suspect so too, and plan to at least recheck some that I have reviewed. > My unglamorous proposal is to review bugs & patches (starting with the > oldest) and resolve at least 200 of them. Funny, and nice!, that you should propose this. I thought of adding something like this to the Python wiki as something I might mentor, but hesitated because reviewing *is* not glamourous, because Google wants code-writing projects, and because I am not one to mentor C code writing. > Is that too much? To review and close things that don't need a fix or are obsolete, no. To write code and fix, go with the response from someone who has done such. The thing I worry about, besides you or whoever getting too bored after a week, is that a batch of 50-100 nice new patches could then sit unreviewed on the patch tracker along with those already there. 
Terry Jan Reedy From martin at v.loewis.de Tue Apr 25 01:11:40 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:11:40 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <6ED62838-6437-4FE9-B338-3B607B17E809@gmail.com> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> <Pine.WNT.4.64.0604241311360.1196@shaolin> <6ED62838-6437-4FE9-B338-3B607B17E809@gmail.com> Message-ID: <444D5B2C.9070803@v.loewis.de> Alex Martelli wrote: > For the Toolkit 2003: > http://tinyurl.com/gv8wr When I go to this URL, I get redirected to http://www.microsoft.com/downloads/details.aspx?familyid=272BE09D-40BB-4&displaylang=en This doesn't look right - it ought to be a UUID. Anyway, I get a page that reads "The download you requested is unavailable. If you continue to see this message when trying to access this download, go to the "Search for a Download" area on the Download Center home page." both with Firefox and MSIE. Regards, Martin From martin at v.loewis.de Tue Apr 25 01:16:25 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:16:25 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <Pine.WNT.4.64.0604241311360.1196@shaolin> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <79990c6b0604240124g74297aebnbd372049fd113579@mail.gmail.com> <Pine.WNT.4.64.0604241311360.1196@shaolin> Message-ID: <444D5C49.2040801@v.loewis.de> John J Lee wrote: > Actually, it's apparently still there, just at a different URL. > Somebody posted the new URL on c.l.py a day or two back (Alex Martelli > started the thread, IIRC). I'm off to the dentist, no time to Google > for it! Please do. If you find the URL, please post it here. All URLs I found don't work (anymore). Regards, Martin From martin at v.loewis.de Tue Apr 25 01:20:59 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:20:59 +0200 Subject: [Python-Dev] Buildbot messages and the build svn revision number In-Reply-To: <444C7D72.2050704@iinet.net.au> References: <444C7D72.2050704@iinet.net.au> Message-ID: <444D5D5B.9010607@v.loewis.de> Nick Coghlan wrote: > Would it be possible to get the buildbot error message subject lines to > include the svn revision number of the build that failed? Only if somebody contributes a patch. Feel free to submit a bug report, but I see little chance of implementing that feature within the next 6 months. Requesting it from the buildbot guys has a higher chance to get it implemented, though - Brian Warner promised to listen to our feature requests (IIRC). 
Regards, Martin From martin at v.loewis.de Tue Apr 25 01:24:07 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:24:07 +0200 Subject: [Python-Dev] gettext.py bug #1448060 In-Reply-To: <20060424135413.GC4036@logilab.fr> References: <20060424135413.GC4036@logilab.fr> Message-ID: <444D5E17.9030609@v.loewis.de> Sylvain Th?nault wrote: > I've posted a patch (#1475523) for this and assigned it to Martin Von > Loewis since he was the core developper who has made some followup on > the original bug. Could someone (Martin or someone else) quick review > this patch ? I really need a fix for this, so if anyone feels my > patch is not correct, and explain why and what should be done, I can > rework on it. If you need quick patch, you should just go ahead and use it (unreviewed). It will take several more months until either Python 2.4.4 or Python 2.5 is released. Regards, Martin From martin at v.loewis.de Tue Apr 25 01:35:46 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:35:46 +0200 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <1f7befae0604241210g1a1a323bqbe9561fe86ed90db@mail.gmail.com> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> <1f7befae0604241210g1a1a323bqbe9561fe86ed90db@mail.gmail.com> Message-ID: <444D60D2.4070501@v.loewis.de> Tim Peters wrote: > Does > > with cv: > > work to replace the outer (== only) acquire/try/finally/release dance? Indeed it does - although implemented in a somewhat convoluted way: A lock *is* a context (or is that "context manager"), i.e. it implements def __context__(self): return self __enter__=acquire def __exit__(self,*args): return self.release() #roughly A _Condition *has* a lock, so it could become the context (manager?) through def __context__(self): return self.lock However, instead of doing that, it does def __context__(self): return self # roughly: __enter__ is actually set in __init__ to self.lock.acquire def __enter__(self): return self.acquire() def __exit__(self): return self.release Looks somewhat redundant to me, but correct. Regards, Martin From martin at v.loewis.de Tue Apr 25 01:55:05 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 01:55:05 +0200 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <e2jna6$730$1@sea.gmane.org> References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de><444C7BED.9080204@v.loewis.de><50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> <444D5A44.2020602@v.loewis.de> <e2jna6$730$1@sea.gmane.org> Message-ID: <444D6559.2020206@v.loewis.de> Terry Reedy wrote: > Since, as I remember, there was no license agreement for the download, just > for the install (if/when I do it), I should think it legal to send the file > to someone else. But I don't really know, of course. Going by the download page for 2005, it does not have any restrictions on the page: http://msdn.microsoft.com/vstudio/express/visualc/download/ so it seems plausible that vc2003 did not have such restrictions, either. 
However, the bottom of the page has a link (Terms of Use) to http://www.microsoft.com/info/cpyright.mspx which says WITHOUT LIMITING THE FOREGOING, COPYING OR REPRODUCTION OF THE SOFTWARE TO ANY OTHER SERVER OR LOCATION FOR FURTHER REPRODUCTION OR REDISTRIBUTION IS EXPRESSLY PROHIBITED, UNLESS SUCH REPRODUCTION OR REDISTRIBUTION IS EXPRESSLY PERMITTED BY THE LICENSE AGREEMENT ACCOMPANYING SUCH SOFTWARE. So I guess they don't want you to provide copies of that file. Regards, Martin From guido at python.org Tue Apr 25 01:56:54 2006 From: guido at python.org (Guido van Rossum) Date: Mon, 24 Apr 2006 16:56:54 -0700 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <444D60D2.4070501@v.loewis.de> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> <1f7befae0604241210g1a1a323bqbe9561fe86ed90db@mail.gmail.com> <444D60D2.4070501@v.loewis.de> Message-ID: <ca471dc20604241656i5c7db665wc5ac8066a14c8386@mail.gmail.com> On 4/24/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Tim Peters wrote: > > Does > > > > with cv: > > > > work to replace the outer (== only) acquire/try/finally/release dance? > > Indeed it does - although implemented in a somewhat convoluted way: > A lock *is* a context (or is that "context manager"), i.e. it implements > > def __context__(self): return self > __enter__=acquire > def __exit__(self,*args): return self.release() #roughly > > A _Condition *has* a lock, so it could become the context (manager?) > through > > def __context__(self): return self.lock > > However, instead of doing that, it does > > def __context__(self): return self > # roughly: __enter__ is actually set in __init__ to self.lock.acquire > def __enter__(self): > return self.acquire() > def __exit__(self): > return self.release > > Looks somewhat redundant to me, but correct. Thanks -- I didn't see the shortcut when I coded this. I'll fix it. Tim is right, the UNLOCK/LOCK part is implied in the wait() call. However, the wait() implementation really *does* provide a use case for the primitive operation that Nick proposed, and it can't be refactored to remove the pattern Martin disapproves of (though of course the existing try/finally is fine). I'm not sure if the use case is strong enough to warrant adding it; I think it's fine not to support it directly. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at barrys-emacs.org Tue Apr 25 00:52:05 2006 From: barry at barrys-emacs.org (Barry Scott) Date: Mon, 24 Apr 2006 23:52:05 +0100 Subject: [Python-Dev] Why are contexts also managers? (wasr45544 -peps/trunk/pep-0343.txt) In-Reply-To: <444C3CBD.2070208@gmail.com> References: <r01050400-1039-732D0EC9D2CB11DAABF6001124365170@[10.0.0.24]> <444C3CBD.2070208@gmail.com> Message-ID: <0B52691F-ACE5-4FB8-B579-4BE0A001C481@barrys-emacs.org> On Apr 24, 2006, at 03:49, Nick Coghlan wrote: > Just van Rossum wrote: >> Baptiste Carvello wrote: >> >>> Terry Reedy a ?crit : >>>> So I propose that the context maker be called just that: 'context >>>> maker'. That should pretty clearly not be the context that manages >>>> the block execution. >>>> >>> +1 for context maker. In fact, after reading the begining of the >>> thread, I came up with the very same idea. >> >> Or maybe "context factory"? Yes its a factory. That is traditionally that you call a function that makes objects isn't it? 
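For illustration, a rough sketch of that factory idea (the Connection/Transaction names are made up for this sketch, they are not from the PEP or any real library):

    class Transaction(object):
        # The short-lived object that actually runs the block.
        def __init__(self, conn):
            self.conn = conn
        def __enter__(self):
            self.conn.begin()
            return self.conn
        def __exit__(self, exc_type, exc_val, exc_tb):
            if exc_type is None:
                self.conn.commit()
            else:
                self.conn.rollback()

    class Connection(object):
        # The long-lived object named in the with statement.
        def begin(self): pass
        def commit(self): pass
        def rollback(self): pass
        def __context__(self):
            # The factory: hand back a fresh Transaction for every block.
            return Transaction(self)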
> That would be fine if we used __call__ to retrieve the context > manager - but > "factory" is too tightly bound to "factory function" in my mind. __call__ may or may not implement a factory, that is up to my design to decide. __context__ is always a factory because that is the interface that "with" mandates. factory function is one way to implement a factory so yes its tightly bound, but that not a reason to reject factory. From what I've read on this list this is all there is to it: with EXPR: block with needs a context. If the EXPR object has a __context__ method it is a factory that will make a suitable context for the EXPR object otherwise EXPR is the context. that context must have __enter__ and __exit__ methods to operate the with protocol. I haven't learn about decorators so I've no comment on why you need them as well as the special method names. But the docs should tell me why. Barry From kirat.singh at gmail.com Tue Apr 25 02:20:15 2006 From: kirat.singh at gmail.com (Kirat Singh) Date: Mon, 24 Apr 2006 20:20:15 -0400 Subject: [Python-Dev] Reducing memory overhead for dictionaries by removing me_hash In-Reply-To: <ca471dc20604241203t357c0a36me6fa45ea7b96bc59@mail.gmail.com> References: <381244830604222105y5338daf8je37892a91507f1ec@mail.gmail.com> <1f7befae0604231347j5f5ceb19j4425ba86d83649c5@mail.gmail.com> <381244830604231403j1897f035h6e7c9a4fdf4c906f@mail.gmail.com> <444BF818.4050509@v.loewis.de> <ca471dc20604241203t357c0a36me6fa45ea7b96bc59@mail.gmail.com> Message-ID: <381244830604241720t5ab91e95g6a6a95758e51f9d6@mail.gmail.com> very true, but python makes it oh so easy to be lazy :-) On 4/24/06, Guido van Rossum <guido at python.org> wrote: > > On 4/23/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > Kirat Singh wrote: > > > The reason I looked into this to begin with was that my code used up a > > > bunch of memory which was traceable to lots of little objects with > > > instance dicts, so it seemed that if instancedicts took less memory I > > > wouldn't have to go and add __slots__ to a bunch of my classes, or > > > rewrite things as tuples/lists, etc. > > > > Ah. In that case, I would be curious if tuning PyDict_MINSIZE could > > help. If you have many objects of the same type, am I right assuming > > they all have the same number of dictionary keys? If so, what is the > > dictionary size? Do they use ma_smalltable, or do they have an extra > > ma_table? > > But the space savings by using __slots__ is so much bigger! (And less > work than hacking the C code too. :-) > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060424/9f4ccddf/attachment.htm From hyeshik at gmail.com Tue Apr 25 05:06:16 2006 From: hyeshik at gmail.com (Hye-Shik Chang) Date: Tue, 25 Apr 2006 12:06:16 +0900 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <20060424114024.66D0.JCARLSON@uci.edu> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> <444C9FBE.2060709@renet.ru> <20060424114024.66D0.JCARLSON@uci.edu> Message-ID: <4f0b69dc0604242006g1b2bd50fx4665b36fa02e2893@mail.gmail.com> On 4/25/06, Josiah Carlson <jcarlson at uci.edu> wrote: > > There exists various C and Python implementations of both AVL and > Red-Black trees. 
For users of Python who want to use AVL and/or > Red-Black trees, I would urge them to use the Python implementations. > In the case of *needing* the speed of a C extension, there already > exists a CPython extension module for AVL trees: > http://www.python.org/pypi/pyavl/1.1 > And a C implementation for redblack tree is here: http://python.org/sf/1324770 :) Hye-Shik From ncoghlan at gmail.com Tue Apr 25 05:22:18 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 13:22:18 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> Message-ID: <444D95EA.6060505@gmail.com> Paul Moore wrote: > In the documentation of contextmanager, consider the examples: > > @contextmanager > def tag(name): > ... > > class Tag: > ... > @contextmanager > def __context__(self): > ... > > Now, tag should be a function which returns a context manager (a1 > definition - object with a __context__ method) ("with" statement, item > 1 of the definition). On the other hand, Tag.__context__ is a context > manager's __context__ method, which according to item 2 of the > definition of the "with" statement, should be a function which returns > a context object (a1 definition - object with __enter__ and __exit__ > methods). > > *These are two different types of function*. Paraphrasing: def iterseq(seq): for x in range(len(seq)): yield x[i] class Container: ... def __iter__(self): for x in self.contents: yield x Now, iterseq should be a function which returns an iterable (object with an __iter__() method that is passed to for statements). On the other hand, Tag.__iter__ is an iterable's __iter__ method, which should be a function which returns an iterator (an object with __iter__() and next() methods). *These are two different types of function*. Why yes, yes they are. When iterators were introduced, a deliberate design decision was taken to make the iterator protocol a superset of the iterable protocol, precisely to allow for loops not to care whether they were being given an iterable or an iterator. Iterator's can be standalone, or they can be intended specifically for use as a particular iterable's native iterator. The terminology is the same either way. PEP 343 made a *deliberate, conscious design decision* to copy the semantics of iterators by making the context management protocol a superset of the context protocol (or rather, the context specification protocol in alpha 2). What I don't understand is why you and Phillip are so keen on breaking this deliberate symmetry with an existing part of the language design. Of course it isn't necessary, but then neither was it necessary to make the iterator protocol a superset of the iterable protocol. Cheers, Nick. 
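P.S. A minimal sketch of the parallel, with made-up classes (not from the PEP or the checked-in code):

    class CountDown(object):
        # An iterator that is also an iterable: __iter__ just returns self.
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):
            if self.n <= 0:
                raise StopIteration
            self.n -= 1
            return self.n

    class Held(object):
        # A context that is also a context manager: __context__ just returns self.
        def __init__(self, lock):
            self.lock = lock
        def __context__(self):
            return self
        def __enter__(self):
            self.lock.acquire()
            return self.lock
        def __exit__(self, exc_type, exc_val, exc_tb):
            self.lock.release()

In both cases, the "give me the thing the statement actually drives" method is allowed to answer "me".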
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Tue Apr 25 05:38:24 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 13:38:24 +1000 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> Message-ID: <444D99B0.6020008@gmail.com> Phillip J. Eby wrote: > At 09:35 PM 4/24/2006 +0100, Paul Moore wrote: >> The current, alpha 2, documentation insists that objects with >> __enter__ and __exit__ methods must also define __context__ in such a >> way that it returns self. >> >> I don't understand why that is necessary. It's not necessary at all. It's a deliberate design decision in the PEP, deliberately copying the semantics of the iterator protocol. >> I can understand that it is convenient, in cases where __context__ >> doesn't need to create a new object each time, but is it *necessary*? >> >> Specifically, is there a use case where you need to say "with x" where >> x is the return value of a __context__ method, or where you call >> __context__ on something you got from __context__? I can't find one in >> the PEP or in the code for contextlib... There aren't any current use cases that require it, no. But how much harder would it have been to develop itertools if the "all iterators are iterables" identity hadn't been part of the original design? Requiring that all context managers also be context specifiers is harmless. Not requiring it breaks the parallel with iterators and iterables, which means that parallel can't be leveraged to explain how things work anymore. > The only benefit to this is that it allows us to have only one > decorator. If the decorator is defined as returning a thing with > __enter__ and __exit__, and such things must also have a __context__, > then there is no need for a separate decorator that's defined as > returning things that have __context__, nor to tweak the docs to explain > that the single decorator does both, nor to have two names for the same > decorator. > > So, it's sort of a documentation hack. :) It's a deliberate design decision to parallel the existing model provided by iterators and iterables. Otherwise, you have to have three concepts: context specifiers (have __context__ method) context managers (have __enter__/__exit__ methods) combined specifier/managers (which have all three) There was no need for iterators to be iterables, either - that was a deliberate design decision taken at the time iterators were introduced. I merely copied that pre-existing approach for PEP 343. Cheers, Nick. 
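P.S. For concreteness, here is roughly what the "separate object" case looks like under that rule, loosely modelled on what the decimal module does (the names are invented for this sketch):

    import decimal

    class ManagedDecimalContext(object):
        # Has all three methods, so it is both kinds of object at once.
        def __init__(self, new_ctx):
            self.new_ctx = new_ctx
        def __context__(self):
            return self
        def __enter__(self):
            self.saved = decimal.getcontext()
            decimal.setcontext(self.new_ctx)
            return self.new_ctx
        def __exit__(self, exc_type, exc_val, exc_tb):
            decimal.setcontext(self.saved)

A decimal.Context-like object then only needs a __context__ method along the lines of "return ManagedDecimalContext(self.copy())", and the with statement can't tell whether __context__ handed back self or something else.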
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Tue Apr 25 05:45:17 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 13:45:17 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> References: <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> Message-ID: <444D9B4D.2090603@gmail.com> Phillip J. Eby wrote: > At 12:24 PM 4/24/2006 -0700, Aahz wrote: >> On Mon, Apr 24, 2006, Phillip J. Eby wrote: >>> At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >>>> Using two names to describe three different things isn't intuitive for >>>> anybody. >>> Um, what three things? I only count two: >>> >>> 1. Objects with __context__ >>> 2. Objects with __enter__ and __exit__ >>> >>> What's the third thing? >> The actual context that's used during the execution of BLOCK. It does >> not exist as a concrete object, > > Um, huh? It's a thing but it's not an object? I'm lost now. I don't see > why we should introduce a concept that has no concrete existence into > something that's hard enough to explain when you stick to the objects that > actually exist. :) "the block executes with the lock held" "the block executes with the file open, and closes it when finished "the block executes with stdout redirected to the file f" "the block executes with opening and close HTML tags written to stdout before and after the code is executed" "the block executes with the active decimal context set to a copy of the supplied context object" Believe me, it may not be representable as a Python object, but the "runtime context" created by a with statement is very real, and its what users actually give a damn about. The concrete objects (whatever we call them) are implementation details. And this is why I now believe using "context object" for *either* of the concrete objects would be a mistake. Cheers, Nick. 
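P.S. To make the "HTML tags" example concrete, it looks something like this (sketch only, using the alpha 1 @contextmanager spelling and the __future__ import that 2.5 requires):

    from __future__ import with_statement
    from contextlib import contextmanager

    @contextmanager
    def tag(name):
        print "<%s>" % name
        try:
            yield
        finally:
            print "</%s>" % name

    with tag("h1"):
        print "Hello"

From the user's point of view the runtime context is "the block runs between the opening and closing tags"; the objects that arrange it are plumbing.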
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Tue Apr 25 05:53:23 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 13:53:23 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060424164211.04317c20@mail.telecommunity.com> References: <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444B102F.5060201@iinet.net.au> <444BB0F8.2060602@gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424164211.04317c20@mail.telecommunity.com> Message-ID: <444D9D33.1080306@gmail.com> Phillip J. Eby wrote: > At 04:48 AM 4/25/2006 +1000, Nick Coghlan wrote: >> Wanting to have two names for the same function tells me there's a >> problem >> with the terminology, not that we should actually have two names for >> the same >> function :) > > It is purely an implementation detail of @contextmanager that it can be > used to define __context__ methods. It would be perfectly legal to > implement it in such a way that there were two helper classes, one with > a __context__ method, and the other with the __enter__/__exit__ methods. > > I'm fine, however, with: > > 1. Changing the decorator name to @contextfactory Due to the ambiguity of the term, I'm still strongly resistant to the idea of calling *any* of the objects involved context objects - so I don't like contextfactory either (as it implies that this is something which creates context objects). > 2. Requiring objects with __enter/__exit__ to also have __context__ Yay, we finally agree on something :) > The truth is that @contextmanager is a misnomer anyway, because it > doesn't turn the function into a context manager, it turns the function > into a context factory - i.e., when called, it returns a context (that's > also a contextmanager). A decorated generator function is as much a context manager as a normal generator function is actually an iterator :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From nnorwitz at gmail.com Tue Apr 25 06:45:41 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Apr 2006 21:45:41 -0700 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <20060424190549.GA18585@rogue.amk.ca> References: <444CFD14.70601@gmail.com> <20060424190549.GA18585@rogue.amk.ca> Message-ID: <ee2a432c0604242145t1495fa28rfd8741b10b01ca7d@mail.gmail.com> On 4/24/06, A.M. Kuchling <amk at amk.ca> wrote: > On Mon, Apr 24, 2006 at 12:30:12PM -0400, Alan McIntyre wrote: > > My unglamorous proposal is to review bugs & patches (starting with the > > oldest) and resolve at least 200 of them. Is that too much? Too few? > > I'll fix as many as possible during the SoC time frame, but I wanted to > > set a realistically achievable minimum for the proposal. If anybody can > > offer helpful feedback on a good minimum number I'd appreciate it. > > I'd suggest 75 or maybe 100 bugs or patches, not 200. I agee with Andrew. There's not that much low hanging fruit. People review the old stuff from time to time. 
There are a lot of really hard bugs to fix. I guess there are also a lot that we can't reproduce and the submitter is MIA. Those might be easier. Ping them if not reproducible, if no response in a month, we close. n From martin at v.loewis.de Tue Apr 25 07:22:07 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 25 Apr 2006 07:22:07 +0200 Subject: [Python-Dev] Proposed addition to threading module - released In-Reply-To: <ca471dc20604241656i5c7db665wc5ac8066a14c8386@mail.gmail.com> References: <444C49B4.2030502@iinet.net.au> <444C7224.1010902@v.loewis.de> <ca471dc20604241134k6d6b4df0xa856756f4ba77453@mail.gmail.com> <1f7befae0604241210g1a1a323bqbe9561fe86ed90db@mail.gmail.com> <444D60D2.4070501@v.loewis.de> <ca471dc20604241656i5c7db665wc5ac8066a14c8386@mail.gmail.com> Message-ID: <444DB1FF.7090004@v.loewis.de> Guido van Rossum wrote: > Tim is right, the UNLOCK/LOCK part is implied in the wait() call. > However, the wait() implementation really *does* provide a use case > for the primitive operation that Nick proposed, and it can't be > refactored to remove the pattern Martin disapproves of (though of > course the existing try/finally is fine). I'm not sure if the use case > is strong enough to warrant adding it; I think it's fine not to > support it directly. Ah, you mean that the wait implementation *itself* should use the unlocked() context (which calls release on enter, and acquire on exit). That wouldn't work, as _Condition.wait doesn't use release/enter, but _release_save/_acquire_restore. So the unlocked context couldn't be used there if it existed. Regards, Martin From oliphant.travis at ieee.org Tue Apr 25 07:41:41 2006 From: oliphant.travis at ieee.org (Travis E. Oliphant) Date: Mon, 24 Apr 2006 23:41:41 -0600 Subject: [Python-Dev] adding Construct to the standard library? In-Reply-To: <4445C468.5010708@canterbury.ac.nz> References: <1d85506f0604171438q6670d003y279bc8cf8cb898cf@mail.gmail.com> <79990c6b0604181225y62244335x3fb40794cbc24ecc@mail.gmail.com> <1d85506f0604181339nc1f22fco872687074a3512d5@mail.gmail.com> <094301c66343$1d4e57b0$2452fea9@bagio> <e23v44$fin$1@sea.gmane.org> <4445C468.5010708@canterbury.ac.nz> Message-ID: <e2kcqn$32t$1@sea.gmane.org> Greg Ewing wrote: > Travis Oliphant wrote: > >> For what it's worth, NumPy also defines a data-type object which it >> uses to describe the fundamental data-type of an array. In the context >> of this thread it is also yet another way to describe a binary-packed >> structure in Python. > > Maybe there should be a separate module providing > a data-packing facility that ctypes, NumPy, etc. > can all use (perhaps with their own domain-specific > extensions to it). > > It does seem rather silly to have about 3 or 4 > different incompatible ways to do almost exactly > the same thing (struct, ctypes, NumPy and now > Construct). I agree. Especially with ctypes and struct now in the standard library. The problem, however, is that every module does something a little-bit different with the object. NumPy needs a built-in object with at least a few fields defined. The idea of "specifying the data-type" is different then it's representation to NumPy. After looking at it, I'm not particularly fond of Construct's way to specify data-types, but then again we've been developing the array interface for just this purpose and so have some biased opinions. Some kind of data-type specification would indeed be useful. NumPy needs a built-in (i.e. written in C) data-type object internally. 
If that builtin object were suitable generally then all the better. For details, look at http://numeric.scipy.org/array_interface (in particular the __array_descr__ field of the interface for what we came up with last year over several months of discussion. -Travis From kap4020 at rit.edu Tue Apr 25 06:46:58 2006 From: kap4020 at rit.edu (Karol Pietrzak) Date: Tue, 25 Apr 2006 00:46:58 -0400 Subject: [Python-Dev] interested in Google Summer of Code: what should I do? Message-ID: <200604250046.58852.kap4020@rit.edu> Hello everyone. I'm interested in the Python Software Foundation's Google Summer of Code proposals. What should I do? 1. Should I post my resume here? 2. List a few of the proposals that really interest me? 3. Explain why I feel qualified to contribute? 4. Who should I talk to? Thanks for everyone's time, and I eagerly await any responses. -- Karl Pietrzak kap4020 at rit.edu -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20060425/9bf9fa4a/attachment.pgp From nnorwitz at gmail.com Tue Apr 25 08:06:00 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Mon, 24 Apr 2006 23:06:00 -0700 Subject: [Python-Dev] interested in Google Summer of Code: what should I do? In-Reply-To: <200604250046.58852.kap4020@rit.edu> References: <200604250046.58852.kap4020@rit.edu> Message-ID: <ee2a432c0604242306n29ff2a64p24897029e43378f1@mail.gmail.com> Hi Karol. Please see the wiki: http://wiki.python.org/moin/SummerOfCode/ There are a bunch of ideas up there. If you want to hash out new ideas, I suppose this list is as good as any. But please do some searching to find if your idea has been tried before. It might also help you to guage interest in your project. You can also post messages to comp.lang.python. You should start working on your proposal now. The final submission is due in about 2 weeks. Good luck! n -- On 4/24/06, Karol Pietrzak <kap4020 at rit.edu> wrote: > Hello everyone. > > I'm interested in the Python Software Foundation's Google Summer of Code > proposals. > > What should I do? > 1. Should I post my resume here? > 2. List a few of the proposals that really interest me? > 3. Explain why I feel qualified to contribute? > 4. Who should I talk to? > > Thanks for everyone's time, and I eagerly await any responses. > > > -- > Karl Pietrzak > kap4020 at rit.edu > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com > > > > From sylvain.thenault at logilab.fr Tue Apr 25 09:15:11 2006 From: sylvain.thenault at logilab.fr (Sylvain =?iso-8859-1?Q?Th=E9nault?=) Date: Tue, 25 Apr 2006 09:15:11 +0200 Subject: [Python-Dev] gettext.py bug #1448060 In-Reply-To: <444D5E17.9030609@v.loewis.de> References: <20060424135413.GC4036@logilab.fr> <444D5E17.9030609@v.loewis.de> Message-ID: <20060425071511.GA4030@logilab.fr> On Tuesday 25 April ? 01:24, "Martin v. L?wis" wrote: > Sylvain Th?nault wrote: > > I've posted a patch (#1475523) for this and assigned it to Martin Von > > Loewis since he was the core developper who has made some followup on > > the original bug. Could someone (Martin or someone else) quick review > > this patch ? 
I really need a fix for this, so if anyone feels my > > patch is not correct, and explain why and what should be done, I can > > rework on it. > > If you need quick patch, you should just go ahead and use it > (unreviewed). It will take several more months until either Python 2.4.4 > or Python 2.5 is released. yep, that's what I intend to do anyway... But I wish to contribute the fix back so I can drop the overriden module asap (even if asap means in a few months in that case), and I would feel better if the patch was validated and checked in. But you're right that it doesn't have to be done so quickly since I'll be stuck with my own patched module until an official python release. -- Sylvain Th?nault LOGILAB, Paris (France). http://www.logilab.com http://www.logilab.fr http://www.logilab.org From p.f.moore at gmail.com Tue Apr 25 10:05:47 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 25 Apr 2006 09:05:47 +0100 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444D95EA.6060505@gmail.com> References: <444B102F.5060201@iinet.net.au> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> <444D95EA.6060505@gmail.com> Message-ID: <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> On 4/25/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > PEP 343 made a *deliberate, conscious design decision* to copy the semantics > of iterators by making the context management protocol a superset of the > context protocol (or rather, the context specification protocol in alpha 2). OK. It's possible I'll need to go back to PEP 343 to locate the justification for this design decision (as I can't find it in the documentation). I haven't done so thus far, as I am deliberately trying to retain a position as "newcomer who has only ready the documentation", because I want to offer that perspective. I've been drawn so far into design discussions now, that I am probably no longer of any use in that role. So now, I'm really just supporting Phillip's side of the discussion. Which isn't much help, as all we seem to be achieving is a 2 vs 1 impasse, rather than the previous 1 vs 1 :-( > What I don't understand is why you and Phillip are so keen on breaking this > deliberate symmetry with an existing part of the language design. Of course it > isn't necessary, but then neither was it necessary to make the iterator > protocol a superset of the iterable protocol. What *I* don't understand, is why the symmetry is so crucial to you. OK, I'll give in on this one (as Philip has done). I'll accept that objects providing __enter__ and __exit__ must also provide __context__. I'll accept your argument above that the parallel with generators and __iter__ is enough. I'll even accept that the name of the decorator is the only issue, and it's just a naming and documentation fix we're after here. But that leaves one fundamental point (I'm tempted to shout here, but I won't :-)) I still found the alpha 1 terminology and documentation completely natural and intuitive. Completely. Not "acceptable", but "completely natural". From the perspective of someone with limited understanding of the design, looking to the documentation for enlightenment. 
And that's got to be my last word. I can't judge the alpha 2 documentation any more, I'm too close to the problem now. Heck, I no longer even have usable terms to describe the objects involved, because every reference has to be qualified with (a1 terminology) or (a2 terminology). So I'll bow out at this point. +1 on a1 terminology, can't judge on anything else. (Oh, and +100000 on just getting the damn thing resolved once and for all). I hope my contributions have been useful. Paul. From ncoghlan at gmail.com Tue Apr 25 11:24:18 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Apr 2006 19:24:18 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> References: <444B102F.5060201@iinet.net.au> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> <444D95EA.6060505@gmail.com> <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> Message-ID: <444DEAC2.8020203@gmail.com> Paul Moore wrote: > On 4/25/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > I still found the alpha 1 terminology and documentation completely > natural and intuitive. Completely. Not "acceptable", but "completely > natural". From the perspective of someone with limited understanding > of the design, looking to the documentation for enlightenment. Don't get me wrong, I *like* the alpha1 terminology - looking at it in isolation it is very attractive (indeed, one of my early suggestions after AMK brought up the conflict between the docs and the implementation was to change the implementation to match the alpha 1 docs). But I backed away from that idea, because I believe the alpha 1 terminology will break badly as soon as you try to put it in a wider context. The term "context" is already overloaded with too many meanings (decimal arithmetic contexts, GUI drawing context objects, parser contexts, etc, etc). Normally the overloaded meanings don't matter, because you'll be working in one problem domain or another, so it will be clear which meaning is intended. The issue this poses for the with statement, however, is that its just a programming tool, so its potential use spans all application domains. And under the alpha 1 terminology, almost none of those existing context objects would be able to serve as with statement context objects (at best, they'd be context managers). So things like decimal.Context get left trying to find a sane name for what their __context__ method returns. decimal.Context.__context__() returns a . . . context? What? Wasn't it already a context? Oh, so it actually returns a "with statement context object". But that object still isn't really the context from a user's point of view - the context of interest to a user is the effect that object has on the runtime state (i.e. setting the decimal context for the current thread). That said, that might actually still be salvageable if the term 'context object' is appropriately qualified. . . 
"when requested by the with statement, a context manager returns a with statement context object that sets up and tears down the desired runtime context" The only implementation changes needed then would be to change contextlib.contextmanager to contextlib.context, contextlib.GeneratorContextManager to contextlib.GeneratorContext and change decimal.ContextManager to decimal.WithStatementContext > And that's got to be my last word. I can't judge the alpha 2 > documentation any more, I'm too close to the problem now. Heck, I no > longer even have usable terms to describe the objects involved, > because every reference has to be qualified with (a1 terminology) or > (a2 terminology). > So I'll bow out at this point. +1 on a1 terminology, can't judge on > anything else. (Oh, and +100000 on just getting the damn thing > resolved once and for all). Since the only other two people who seem to really care about this are dead set against me, I'll see how the qualified term "with statement context object" pans out for clearing up any potential naming conflicts with domain-specific context objects, or with the nebulous entity that is the runtime context. I haven't seen an alpha2 freeze announcement yet, so I hopefully still have time to look into this. . . Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From amk at amk.ca Tue Apr 25 13:02:18 2006 From: amk at amk.ca (A.M. Kuchling) Date: Tue, 25 Apr 2006 07:02:18 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <ee2a432c0604242145t1495fa28rfd8741b10b01ca7d@mail.gmail.com> References: <444CFD14.70601@gmail.com> <20060424190549.GA18585@rogue.amk.ca> <ee2a432c0604242145t1495fa28rfd8741b10b01ca7d@mail.gmail.com> Message-ID: <20060425110218.GA1764@Andrew-iBook2.local> On Mon, Apr 24, 2006 at 09:45:41PM -0700, Neal Norwitz wrote: > hard bugs to fix. I guess there are also a lot that we can't > reproduce and the submitter is MIA. Those might be easier. Ping them > if not reproducible, if no response in a month, we close. The last time there was a thread suggesting closing old bugs, wasn't the consensus to leave them open for information purposes? --amk From mwh at python.net Tue Apr 25 14:22:27 2006 From: mwh at python.net (Michael Hudson) Date: Tue, 25 Apr 2006 13:22:27 +0100 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> (Neil Hodgson's message of "Mon, 24 Apr 2006 17:48:27 +1000") References: <ca471dc20604210237h797824ccu796146dae44a4689@mail.gmail.com> <44491299.9040607@v.loewis.de> <444C7BED.9080204@v.loewis.de> <50862ebd0604240048m2840a9f2y3576f0dc279a5b98@mail.gmail.com> Message-ID: <2mr73lhmkc.fsf@starship.python.net> "Neil Hodgson" <nyamatongwe at gmail.com> writes: > Martin v. L?wis: > >> Apparently, the status of this changed right now: it seems that >> the 2003 compiler is not available anymore; the page now says >> that it was replaced with the 2005 compiler. >> >> Should we reconsider? > > I expect Microsoft means that Visual Studio Express will be > available free forever, not that you will always be able to download > Visual Studio 2005 Express. I don't think that's what Herb Sutter said in his ACCU keynote, which is where I'm pretty sure Guido got his information at the start of this thread (he was there too and the email appeared soon after). 
If I remember right, he said that 2005 was free, forever, and they'd think about later versions. I may be misremembering, and I certainly haven't read any official stuff from Microsoft... Cheers, mwh -- I also feel it essential to note, [...], that Description Logics, non-Monotonic Logics, Default Logics and Circumscription Logics can all collectively go suck a cow. Thank you. -- http://advogato.org/person/Johnath/diary.html?start=4 From arigo at tunes.org Tue Apr 25 14:51:46 2006 From: arigo at tunes.org (Armin Rigo) Date: Tue, 25 Apr 2006 14:51:46 +0200 Subject: [Python-Dev] EuroPython 2006: Call for papers Message-ID: <20060425125146.GA8556@code0.codespeak.net> Hi all, A shameless plug and reminder for EuroPython 2006 (July 3-5): * you can submit talk proposals until May 31st. * there is a refereed papers track; deadline for abstracts: May 5th. See the full call for papers below. A bientot, Armin Rigo & Carl Friedrich Bolz ======================================================================== EuroPython 2006 CERN, Geneva, 3-5 July Refereed Track: Call for Paper http://www.europython.org ======================================================================== EuroPython is the only conference in the Python world that has a properly prestigious peer-reviewed forum for presenting technical and scientific papers. Such papers, with advanced and highly innovative contents, can equally well stem from academic research or industrial research. We think this is an important function for EuroPython, so we are even making some grants available to help people with travel costs. For this refereed track, we will be happy to consider papers in subject areas including, but not necessarily limited to, the following: * Python language and implementations * Python modules (in the broadest sense) * Python extensions * Interoperation between Python and other languages / subsystems * Scientific applications of Python * Python in Education * Benchmarking Python We are looking for Python-related scientific and technical papers of advanced, highly innovative content that present the results of original research (be it of the academic or "industrial research" kind), with proper attention to "state of the art" and previous relevant literature/results (whether such relevant previous literature is itself directly related to Python or not). We do not intend to let the specific subject area block a paper's acceptance, as long as the paper satisfies other requirements: innovative, Python-related, reflecting original research, with proper attention to previous literature. Abstracts ========= Please submit abstracts of no more than 200 words to the refereeing committee. You can send submissions no later than 5 May 2006. We shall inform you whether your paper has been selected no later than 15 May 2006. For all details regarding the submission of abstracts, please see the EuroPython website (http://www.europython.org). Papers If your abstract is accepted, you must submit your corresponding paper before 17 June 2006. You should submit the paper as a PDF file, in A4 format, complete, "stand-alone", and readable on any standards-compliant PDF reader (basically, the paper must include all fonts and figures it uses, rather than using external pointers to them; by default, most PDF-preparation programs typically produce such valid "stand-alone" PDF documents). Refereeing ========== The refereeing committee, selected by Armin Rigo, will examine all abstracts and papers. 
The committee may consult external experts as it deems fit. Referees may suggest or require certain changes and editing in submissions, and make acceptance conditional on such changes being performed. We expect all papers to reflect the abstract as approved and reserve the right, at our discretion, to reject a paper, despite having accepted the corresponding abstract, if the paper does not substantially correspond to the approved abstract. Presentation ============ The paper must be presented at EuroPython by one or more of the authors. Presentation time will be either half an hour or an hour, including time for questions and answers, depending on each paper's details, and also on the total number of papers approved for presentation. Proceedings =========== We will publish the conference's proceedings in purely electronic form. By presenting a paper, authors agree to give the EuroPython conference non-exclusive rights to publish the paper in electronic forms (including, but not limited to, partial and total publication on web sites and/or such media as CDROM and DVD-ROM), and warrant that the papers are not infringing on the rights of any third parties. Authors retain all other intellectual property rights on their submitted abstracts and papers excepting only this non-exclusive license. Subsidised travel ================= We have funds available to subsidise travel costs for some presenters who would otherwise not be able to attend EuroPython. When submitting your abstract, please indicate if you would need such a subsidy as a precondition of being able to come and present your paper. (Yes, this possibility does exist even if you are coming from outside of Europe. Papers from people in New Zealand who can only come if their travel is subsidised, for example, would be just fine with us...). From pje at telecommunity.com Tue Apr 25 15:32:33 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 09:32:33 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <444DEAC2.8020203@gmail.com> References: <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> <444B102F.5060201@iinet.net.au> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> <444D95EA.6060505@gmail.com> <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> Message-ID: <5.1.1.6.0.20060425092734.01e61700@mail.telecommunity.com> At 07:24 PM 4/25/2006 +1000, Nick Coghlan wrote: >So things like decimal.Context get left trying to find a sane name for what >their __context__ method returns. decimal.Context.__context__() returns a . . >. context? What? Wasn't it already a context? Oh, so it actually returns a >"with statement context object". But that object still isn't really the >context from a user's point of view - the context of interest to a user is >the >effect that object has on the runtime state (i.e. setting the decimal context >for the current thread). > >That said, that might actually still be salvageable if the term 'context >object' is appropriately qualified. . . 
> >"when requested by the with statement, a context manager returns a with >statement context object that sets up and tears down the desired runtime >context" If qualification of "context" is the only problem, I propose: context manager -- thing with __context__ method execution context object -- thing with __enter__/__exit__/__context__ execution context -- the abstract thing set up and torn down by the ECO "When requested by the with statement, a context manager returns an execution context object that sets up and tears down the desired execution context for the block." And I still call for @contextfactory as a decorator that creates a factory function returning an execution context. I don't think that calling it an "executioncontextfactory" or "executionfactory" or anything like that adds anything useful; it is after all coming from a library for dealing with execution contexts, so it's sufficiently clear, um, in context. :) From ncoghlan at iinet.net.au Tue Apr 25 16:08:47 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Wed, 26 Apr 2006 00:08:47 +1000 Subject: [Python-Dev] Updated context management documentation Message-ID: <444E2D6F.2070508@iinet.net.au> I won't call this a resolution yet, since it'll probably be a few days before I get time to update the PEP itself, and the changes below are based on pulling together a few different threads of the recent discussion. However, I believe (hope?) we're very close to being done :) The heart of the matter is that, having put together a full documentation set based on the interpretation I intended when I updated PEP 343 last year, I've come to agree that, despite its problems, the alpha 1 documentation generally reads better than my intended approach did. So I just checked in a change that reverts to the alpha 1 terminology - the phrase "context specifier" disappears, and "context manager" goes back to referring to any object with a __context__ method. However, I made the changes below in order to address the conflicts between the alpha 1 documentation and implementation. Avoiding ambiguity Reverting to the alpha 1 terminology still left the problem of ambiguity in the terms "context" and "context object". Aside from the terms "runtime context" and "context of execution" to refer to application state when particular code is run, and the general English usage of the word "context", there are also too many kinds of context object (like decimal.Context) already out in the wild for us to simply adopt the term "context object" without qualifying it in some fashion. Lacking a better solution, I went for the straightforward approach of using "with statement context object" (and variants along those lines) in a number of places where the alpha 1 docs just used "context object". It's clumsy, but it seems to work (and in cases where application domain context objects like decimal.Context don't intrude, it's still possible to shorten the phrase). (An idea that just occurred to me in writing this email is "managed context". That's a lot less clumsy, and fits with the context manager idea. If other folks like it, we could probably switch to that either before or after alpha 2, depending on when Anthony wants to make the alpha release happen). Context expressions In response to a comment Aahz made, I tweaked the language reference to explicitly refer to the expression in the with statement as a "context expression". The result of the context expression must then be a context manager in order for the with statement to operate correctly. 
This means that, as far as users are concerned, the context expression defines the runtime context for the block of code in the with statement. Unless you're writing your own objects, the existence of context managers and with statement context objects will generally be an irrelevant implementation detail. Dealing with decimal.ContextManager I renamed this to decimal.WithStatementContext. It's ugly but it works, and the existence of this object is really an implementation detail anyway. (decimal.ManagedContext would definitely look better. . .) Dealing with contextlib.contextmanager As recently suggested (by Terry, IIRC), I renamed this to contextlib.contextfactory, as the decorator creates a factory function for producing with statement context objects. This name works fine for both standalone context factories and the __context__ method of a context manager (as that method is itself a context factory). (As a fortuitous accident, this actually aligns with descriptions I've read of normal generator functions as iterator factories. . .) Dealing with contextlib.GeneratorContextManager I renamed this to contextlib.GeneratorContext. The fact that it's in the contextlib module provides sufficient context to indicate that this is a with statement context object, so I could avoid the clumsy naming that was needed in the decimal module. Changes to contextlib.closing These changes weren't actually terminology related, but they got mixed in with the rest of the changes to the contextlib module. Firstly, as the state to be retained is trivial, contextlib.closing is now implemented directly rather than via a generator. Secondly, the documentation now shows an example of a class with a close() method using contextlib.closing directly as its own __context__() method. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From alan.mcintyre at gmail.com Tue Apr 25 16:15:11 2006 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Tue, 25 Apr 2006 10:15:11 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <e2jlpe$348$1@sea.gmane.org> References: <444CFD14.70601@gmail.com> <e2jlpe$348$1@sea.gmane.org> Message-ID: <444E2EEF.2090701@gmail.com> Terry Reedy wrote: >> My unglamorous proposal is to review bugs & patches (starting with the >> oldest) and resolve at least 200 of them. > Funny, and nice!, that you should propose this. I thought of adding > something like this to the Python wiki as something I might mentor, but > hesitated because reviewing *is* not glamourous, because Google wants > code-writing projects, and because I am not one to mentor C code writing. I suppose the "new code" emphasis may make writing a proposal to fix bugs an exercise in futility. :) I'll submit it anyway, since I don't have any new whiz-bang features/applications/frameworks that I think deserve to be inflicted upon anybody. > The thing I worry about, besides you or whoever getting too bored after a > week, is that a batch of 50-100 nice new patches could then sit unreviewed > on the patch tracker along with those already there. Although I didn't state it explicitly, my intention was to push each item to completion, so that at the end bug+patch is closed with new code checked into svn. 
Since I should probably restrict the proposal to fixing things that actually require new code to be written (assuming closing out bogus/inappropriate bugs doesn't meet Google's expectations for SoC), I'll revise my number down to the 25-75 range so that there's time to make sure they're actually completed. From ncoghlan at gmail.com Tue Apr 25 16:23:42 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 00:23:42 +1000 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <5.1.1.6.0.20060425092734.01e61700@mail.telecommunity.com> References: <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> <444B102F.5060201@iinet.net.au> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <444D1D92.8030601@gmail.com> <79990c6b0604241329wcac9d97m76fb5e915c64e6db@mail.gmail.com> <444D95EA.6060505@gmail.com> <79990c6b0604250105t14644e4ela5e9e1a6952a24d0@mail.gmail.com> <5.1.1.6.0.20060425092734.01e61700@mail.telecommunity.com> Message-ID: <444E30EE.9030904@gmail.com> Phillip J. Eby wrote: > If qualification of "context" is the only problem, I propose: > > context manager -- thing with __context__ method > execution context object -- thing with __enter__/__exit__/__context__ > execution context -- the abstract thing set up and torn down by the ECO > > "When requested by the with statement, a context manager returns an > execution context object that sets up and tears down the desired > execution context for the block." I just checked in a change that reverts to the alpha 1 terminology, but uses "with statement context" to work around the ambiguity issues. Hopefully as just an interim fix until we find something better, but I could live with it if I absolutely had to :) 'execution context object' I still find too ambiguous, because I'd like to be able to drop the 'object' off the end and still have something that refers to the right kind of object (just as we don't need to tack 'object' onto the end of context manager all the time). The best I've thought of is 'managed context' (and that only occurred to me about five minutes ago): "When requested by the with statement, a context manager returns a managed context object that sets up and tears down the desired execution context for the block." > And I still call for @contextfactory as a decorator that creates a > factory function returning an execution context. I don't think that > calling it an "executioncontextfactory" or "executionfactory" or > anything like that adds anything useful; it is after all coming from a > library for dealing with execution contexts, so it's sufficiently clear, > um, in context. :) I'm definitely sold on that one - I included it as part of my latest checkin since it dealt nicely with the alpha 1 documentation's contextmanager problem. As you say, the fact that it's in contextlib provides ample indication that it's talking about with statement contexts. Cheers, Nick. 
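P.S. For anyone following along, the Tag example from the documentation comes out roughly like this with the rename (sketch only; contextfactory is the name from the checkin described above, and the class itself is made up):

    from contextlib import contextfactory    # spelled contextmanager in alpha 1

    class Tag(object):
        def __init__(self, name):
            self.name = name

        @contextfactory
        def __context__(self):
            print "<%s>" % self.name
            try:
                yield self
            finally:
                print "</%s>" % self.name

The same decorator also works on a standalone generator that isn't anybody's __context__ method, which is why a name that doesn't mention either role is attractive.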
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From alan.mcintyre at gmail.com Tue Apr 25 16:30:48 2006 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Tue, 25 Apr 2006 10:30:48 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <20060424190549.GA18585@rogue.amk.ca> References: <444CFD14.70601@gmail.com> <20060424190549.GA18585@rogue.amk.ca> Message-ID: <444E3298.2050400@gmail.com> A.M. Kuchling wrote: > On Mon, Apr 24, 2006 at 12:30:12PM -0400, Alan McIntyre wrote: > >> My unglamorous proposal is to review bugs & patches (starting with the >> oldest) and resolve at least 200 of them. Is that too much? Too few? >> > I'd suggest 75 or maybe 100 bugs or patches, not 200. > <snip> > Thanks; that's helpful. I'll probably make a cursory pass over the bug list and get a better feel for what's there (I suppose I should have done that before posting my original email :). From amk at amk.ca Tue Apr 25 16:36:23 2006 From: amk at amk.ca (A.M. Kuchling) Date: Tue, 25 Apr 2006 10:36:23 -0400 Subject: [Python-Dev] Updated context management documentation In-Reply-To: <444E2D6F.2070508@iinet.net.au> References: <444E2D6F.2070508@iinet.net.au> Message-ID: <20060425143623.GA10290@localhost.localdomain> On Wed, Apr 26, 2006 at 12:08:47AM +1000, Nick Coghlan wrote: > However, I made the > changes below in order to address the conflicts between the alpha 1 > documentation and implementation. IMHO this set of changes makes the terminology reasonably clear, so I'm happy with it. I've edited the What's New accordingly. --amk From engelbert.gruber at ssg.co.at Tue Apr 25 16:18:59 2006 From: engelbert.gruber at ssg.co.at (engelbert.gruber at ssg.co.at) Date: Tue, 25 Apr 2006 16:18:59 +0200 (CEST) Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" In-Reply-To: <20060425110218.GA1764@Andrew-iBook2.local> References: <444CFD14.70601@gmail.com> <20060424190549.GA18585@rogue.amk.ca> <ee2a432c0604242145t1495fa28rfd8741b10b01ca7d@mail.gmail.com> <20060425110218.GA1764@Andrew-iBook2.local> Message-ID: <Pine.LNX.4.64.0604251617270.7890@lx3.local> On Tue, 25 Apr 2006, A.M. Kuchling wrote: > On Mon, Apr 24, 2006 at 09:45:41PM -0700, Neal Norwitz wrote: >> hard bugs to fix. I guess there are also a lot that we can't >> reproduce and the submitter is MIA. Those might be easier. Ping them >> if not reproducible, if no response in a month, we close. > > The last time there was a thread suggesting closing old bugs, wasn't > the consensus to leave them open for information purposes? maybe a note or special state "off" (open-for-fun) would be helpful for unwary newbies, looking for there little feat. cheers -- From pje at telecommunity.com Tue Apr 25 17:42:54 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 11:42:54 -0400 Subject: [Python-Dev] Updated context management documentation In-Reply-To: <444E2D6F.2070508@iinet.net.au> Message-ID: <5.1.1.6.0.20060425114129.01eefbb8@mail.telecommunity.com> At 12:08 AM 4/26/2006 +1000, Nick Coghlan wrote: >Secondly, the documentation now shows an example >of a class with a close() method using contextlib.closing directly as its own >__context__() method. Sadly, that would only work if closing() were a function. Classes don't get turned into methods, so you'll need to change that example to use: def __context__(self): return closing(self) instead. 
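To spell out why (a sketch, using a minimal stand-in for contextlib.closing rather than the real class):

    class closing(object):
        # Minimal stand-in for contextlib.closing, just for this sketch.
        def __init__(self, thing):
            self.thing = thing
        def __context__(self):
            return self
        def __enter__(self):
            return self.thing
        def __exit__(self, exc_type, exc_val, exc_tb):
            self.thing.close()

    class Resource(object):
        def close(self):
            print "closed"

        # Broken: closing is a class, not a function, so it never becomes a
        # bound method -- Resource().__context__() would call closing() with
        # no arguments at all.
        #__context__ = closing

        # Works: an ordinary method that passes self along explicitly.
        def __context__(self):
            return closing(self)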
From tjreedy at udel.edu Tue Apr 25 18:57:47 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Apr 2006 12:57:47 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" References: <444CFD14.70601@gmail.com> <e2jlpe$348$1@sea.gmane.org> <444E2EEF.2090701@gmail.com> Message-ID: <e2lkeb$jq0$1@sea.gmane.org> "Alan McIntyre" <alan.mcintyre at gmail.com> wrote in message news:444E2EEF.2090701 at gmail.com... > I suppose the "new code" emphasis may make writing a proposal to fix > bugs an exercise in futility. :) I personally consider anything *you* write to be 'new code'. Let us see what Google thinks. Patches that other people have written would be 'old code', so merely reviewing such, no matter how useful to us, would seem to not meet the criteria. > I'll submit it anyway, Please do. To me, this project would be as useful to Google as many others. > Although I didn't state it explicitly, my intention was to push each > item to completion, so that at the end bug+patch is closed with new code > checked into svn. Good intention. One thing you can't do is the necessary review of your own patch. Perhaps you could get pre-commitments to review patches for particular bugs. Terry Jan Reedy
From tjreedy at udel.edu Tue Apr 25 19:23:59 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Apr 2006 13:23:59 -0400 Subject: [Python-Dev] SoC proposal: "fix some old, old bugs in sourceforge" References: <444CFD14.70601@gmail.com> <20060424190549.GA18585@rogue.amk.ca><ee2a432c0604242145t1495fa28rfd8741b10b01ca7d@mail.gmail.com> <20060425110218.GA1764@Andrew-iBook2.local> Message-ID: <e2llve$pec$1@sea.gmane.org> "A.M. Kuchling" <amk at amk.ca> wrote in message news:20060425110218.GA1764 at Andrew-iBook2.local... > On Mon, Apr 24, 2006 at 09:45:41PM -0700, Neal Norwitz wrote: >> hard bugs to fix. I guess there are also a lot that we can't >> reproduce and the submitter is MIA. Those might be easier. Ping them >> if not reproducible, if no response in a month, we close. > > The last time there was a thread suggesting closing old bugs, wasn't > the consensus to leave them open for information purposes? There was an opinion, but certainly not a consensus from me. I think that we should be more aggressive in closing bug reports. 1. Open items should represent needed action on the Python documentation or the CPython implementation. Keeping invalid or unactionable items open constitutes noise that interferes with action on valid items. 2. Closed items are just as available for information purposes as open items. Just don't restrict a search to open items. 3. Closed items can be reopened either immediately, when the weekly tracker summary comes out, or later, when new information arrives. Consider [ python-Bugs-1437614 ] can't send files via ftp on my MacOS X 10.3.9. https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1437614&group_id=5470 Both I and Ronald Oussoren thought it was almost certainly not a bug, but neither of us closed it at the time. The OP never responded to argue otherwise. Does anyone really think this should be left open indefinitely 'for information purposes'? I just closed it. Perhaps next time I review part of the list, I will pick out a few items (say 5) I think should be closed but am less sure of and post here for other opinions. Terry Jan Reedy
From guido at python.org Tue Apr 25 20:37:23 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 11:37:23 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__?
In-Reply-To: <444D99B0.6020008@gmail.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> Message-ID: <ca471dc20604251137j3f981a6bra7f3132e8bda8933@mail.gmail.com> But what's the use case? Have we actually got an example where it makes sense to use the "thing with __enter__ and __exit__ methods" in a with-statement, other than the (many) examples where the original __context__ method returns "self"? --Guido On 4/24/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > Phillip J. Eby wrote: > > At 09:35 PM 4/24/2006 +0100, Paul Moore wrote: > >> The current, alpha 2, documentation insists that objects with > >> __enter__ and __exit__ methods must also define __context__ in such a > >> way that it returns self. > >> > >> I don't understand why that is necessary. > > It's not necessary at all. It's a deliberate design decision in the PEP, > deliberately copying the semantics of the iterator protocol. > > >> I can understand that it is convenient, in cases where __context__ > >> doesn't need to create a new object each time, but is it *necessary*? > >> > >> Specifically, is there a use case where you need to say "with x" where > >> x is the return value of a __context__ method, or where you call > >> __context__ on something you got from __context__? I can't find one in > >> the PEP or in the code for contextlib... > > There aren't any current use cases that require it, no. But how much harder > would it have been to develop itertools if the "all iterators are iterables" > identity hadn't been part of the original design? > > Requiring that all context managers also be context specifiers is harmless. > Not requiring it breaks the parallel with iterators and iterables, which means > that parallel can't be leveraged to explain how things work anymore. > > > The only benefit to this is that it allows us to have only one > > decorator. If the decorator is defined as returning a thing with > > __enter__ and __exit__, and such things must also have a __context__, > > then there is no need for a separate decorator that's defined as > > returning things that have __context__, nor to tweak the docs to explain > > that the single decorator does both, nor to have two names for the same > > decorator. > > > > So, it's sort of a documentation hack. :) > > It's a deliberate design decision to parallel the existing model provided by > iterators and iterables. > > Otherwise, you have to have three concepts: > > context specifiers (have __context__ method) > context managers (have __enter__/__exit__ methods) > combined specifier/managers (which have all three) > > There was no need for iterators to be iterables, either - that was a > deliberate design decision taken at the time iterators were introduced. I > merely copied that pre-existing approach for PEP 343. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > --------------------------------------------------------------- > http://www.boredomandlaziness.org > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Tue Apr 25 20:59:10 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Tue, 25 Apr 2006 14:59:10 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <ca471dc20604251137j3f981a6bra7f3132e8bda8933@mail.gmail.co m> References: <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> Message-ID: <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> At 11:37 AM 4/25/2006 -0700, Guido van Rossum wrote: >But what's the use case? Have we actually got an example where it >makes sense to use the "thing with __enter__ and __exit__ methods" in >a with-statement, other than the (many) examples where the original >__context__ method returns "self"? Objects returned by @contextfactory-decorated functions must have __enter__ and __exit__ (so @contextfactory can be used to define __context__ methods) *and* they must also have __context__, so they can be used directly in a "with" statement. I think that in all cases where you want this (enter/exit implies context method availability), it's going to be the case that you want the context method to return self, just as iterating an object with a next() method normally returns that object. From jimjjewett at gmail.com Tue Apr 25 22:10:02 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 25 Apr 2006 16:10:02 -0400 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] Message-ID: <fb6fbf560604251310j24272440s1be87ea0201dc7d2@mail.gmail.com> > The latest weekly tracker summary says about 1300, + 200 RFEs. ... > I worry about ... a batch of 50-100 nice new patches could then sit > unreviewed on the patch tracker along with those already there. Is there a good way to flag a patch as reviewed and recommendation made? I understand that if I do 5 at a time *and* want someone to look at a sixth, I can post to python-dev. Normally, though, I'll only look at one or two at a time. I don't see a good way to say "It looks good to me". I don't see any way to say "There were issues, but I think they're resolved now". So either way, I and the author are both sort of waiting for a committer to randomly happen back over old patches. -jJ From brett at python.org Tue Apr 25 22:13:34 2006 From: brett at python.org (Brett Cannon) Date: Tue, 25 Apr 2006 14:13:34 -0600 Subject: [Python-Dev] what do you like about other trackers and what do you hate about SF? Message-ID: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> I am starting to hash out what the Call for Trackers is going to say on the Infrastructure mailing list. Laura Creighton suggested we have a list of features that we would like to see and what we all hate about SF so as to provide some guidelines in terms of how to set up the test trackers that people try to sell us on. So, if you could, please reply to this message with ONE thing you have found in a tracker other than SF that you have liked (especially compared to SF) and ONE thing you dislike/hate about SF's tracker. I will use the replies as a quick-and-dirty features list of stuff that we would like to see demonstrated in the test trackers. -Brett From amk at amk.ca Tue Apr 25 23:37:04 2006 From: amk at amk.ca (A.M. 
Kuchling) Date: Tue, 25 Apr 2006 17:37:04 -0400 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] In-Reply-To: <fb6fbf560604251310j24272440s1be87ea0201dc7d2@mail.gmail.com> References: <fb6fbf560604251310j24272440s1be87ea0201dc7d2@mail.gmail.com> Message-ID: <20060425213704.GA19842@rogue.amk.ca> On Tue, Apr 25, 2006 at 04:10:02PM -0400, Jim Jewett wrote: > I don't see a good way to say "It looks good to me". I don't see any > way to say "There were issues, but I think they're resolved now". So > either way, I and the author are both sort of waiting for a committer > to randomly happen back over old patches. If there are too many patches waiting for a committer to assess them, that probably points up the need for more committers. --amk From jimjjewett at gmail.com Tue Apr 25 22:51:55 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 25 Apr 2006 16:51:55 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) Message-ID: <fb6fbf560604251351h68ed13f4rf90c59f2c7fdc194@mail.gmail.com> > So things like decimal.Context get left trying to find a sane > name for what their __context__ method returns. > decimal.Context.__context__() returns a . . . context? What? > Wasn't it already a context? Oh, so it actually returns a > "with statement context object". [I was OK with "context specifiers", but here is another attempt.] ----------- With statements execute in a temporary context. with with_expression as temporary_context_name: stmt1 stmt2 The with_expression must return an object implementing the Temporary Context protocol. The Temporary Context's __enter__ method runs before the statement block, and its __exit__ method runs after the block. Temporary Contexts can represent arbitrary changes to the execution environment. Often, they just ensure that a file is closed in a timely fashion. A more advanced example is Decimal.Context, which controls the precision of certain math calculations. Decimal.Context also implements the Temporary Context protocol, so that you can increase the precision of a few calculations without slowing down the rest of the program. -jJ From jimjjewett at gmail.com Wed Apr 26 00:04:12 2006 From: jimjjewett at gmail.com (Jim Jewett) Date: Tue, 25 Apr 2006 18:04:12 -0400 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] Message-ID: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> > If there are too many patches waiting for a committer to assess them, > that probably points up the need for more committers. Perhaps; part of the problem is with the SF workflow. New bug or patch comes in. Shows up on the list of new bugs, but not obviously ready for action. Not assigned to anyone, because it says not to. Patch added. email sent to anyone who made an explicit comment, but perhaps not to any committers. More comments. More patches. Finally ready. Another comment is made, maybe saying so, but certainly not with any magic words. The patch isn't even on the front page any more. ... So how are the committers supposed to even know that it is waiting for assessment? The solutions that I've seen work are (a) A committer gets involved (at least to be sent email) early (b) Someone bugs a committer out of band (despite the don't-assign rhetoric) (c) It gets fixed really fast, before it can fall off the first screen (d) Someone (such as Georg, recently) makes a point of going through old tickets almost at random. 
These are all pretty ad-hoc -jJ From jafo-python-dev at tummy.com Wed Apr 26 00:04:58 2006 From: jafo-python-dev at tummy.com (Sean Reifschneider) Date: Tue, 25 Apr 2006 16:04:58 -0600 Subject: [Python-Dev] Builtin exit, good in interpreter, bad in code. In-Reply-To: <444C70B7.7090909@v.loewis.de> References: <20060424015553.GA842@tummy.com> <444C70B7.7090909@v.loewis.de> Message-ID: <20060425220458.GA8765@tummy.com> On Mon, Apr 24, 2006 at 08:31:19AM +0200, "Martin v. L?wis" wrote: >Sean Reifschneider wrote: >> Thoughts? > >In Python 2.5, exit(0) exits. Eeexcellent. Thanks, Sean -- UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things. -- Doug Gwyn Sean Reifschneider, Member of Technical Staff <jafo at tummy.com> tummy.com, ltd. - Linux Consulting since 1995: Ask me about High Availability From pje at telecommunity.com Wed Apr 26 00:43:01 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 18:43:01 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available Message-ID: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> Please see http://svn.python.org/projects/sandbox/trunk/setuptools/doc/formats.txt for source, or http://peak.telecommunity.com/DevCenter/EggFormats for an HTML-formatted version. Included are summary descriptions of the formats of all of the standard metadata produced by setuptools, along with pointers to the existing manuals that describe the syntax used for representing requirements, entry points, etc. as text. The .egg, .egg-info, and .egg-link formats and layouts are also specified, along with the filename syntax used to embed project/version/Python version/platform metadata. Last, but not least, there are detailed explanations of how resources (such as C extensions) are extracted on-the-fly and cached, how C extensions get imported from zipfiles, and how EasyInstall works around the limitations of Python's default sys.path initialization. If there's anything else you'd like in there, please let me know. From tdelaney at avaya.com Wed Apr 26 00:49:05 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Wed, 26 Apr 2006 08:49:05 +1000 Subject: [Python-Dev] Updated context management documentation Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E685@au3010avexu1.global.avaya.com> Nick Coghlan wrote: > (An idea that just occurred to me in writing this email is "managed > context". That's a lot less clumsy, and fits with the context manager > idea. +1 > Context expressions > In response to a comment Aahz made, I tweaked the language > reference to explicitly refer to the expression in the with statement > as a "context expression". The result of the context expression must > then be a context manager in order for the with statement to operate > correctly. +1 > Dealing with decimal.ContextManager > (decimal.ManagedContext would definitely look better. . .) +1 > Dealing with contextlib.contextmanager > As recently suggested (by Terry, IIRC), I renamed this to > contextlib.contextfactory, as the decorator creates a factory > function for producing with statement context objects. +1 > Dealing with contextlib.GeneratorContextManager > I renamed this to contextlib.GeneratorContext. The fact that it's > in the contextlib module provides sufficient context to indicate that > this is a with statement context object, so I could avoid the clumsy > naming that was needed in the decimal module. 
Might still be better to name this as contextlib.ManagedGeneratorContext (or contextlib.GeneratorManagedContext, but I think the former works better). This has been a long, tiring set of threads, but I think the end result is an improvement (particularly contextlib.contextfactory). Tim Delaney From guido at python.org Wed Apr 26 01:18:17 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 16:18:17 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> Message-ID: <ca471dc20604251618q131aaecfvc6949ba0001df37d@mail.gmail.com> On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 11:37 AM 4/25/2006 -0700, Guido van Rossum wrote: > >But what's the use case? Have we actually got an example where it > >makes sense to use the "thing with __enter__ and __exit__ methods" in > >a with-statement, other than the (many) examples where the original > >__context__ method returns "self"? > > Objects returned by @contextfactory-decorated functions must have __enter__ > and __exit__ (so @contextfactory can be used to define __context__ methods) > *and* they must also have __context__, so they can be used directly in a > "with" statement. That doesn't make sense. *Except* for cases where __context__() returns self, when would I ever use the object returned by __context__() in a with statement? That would mean that I would have to write code like this: A = B.__context__() # Here, A is not B with A: <whatever> I don't see any reason to write such code!!! The more I think about it the more I believe the parallel with __iter__ is a fallacy. There are many ways to get an iterator in your hands without calling X.__iter__(): not just iter(X), but also X.iterkeys(), enumerate(X), and so on. It makes total sense (in fact it is sometimes the only use case) to pass those things to a for loop for iteration, which implies calling iter() on them, which implies calling __iter__() method. But (again, *excluding* objects whose __context__ returns self!) I don't see any use cases for calling __context__() explicitly -- that is *always* done by a with-statement. So, again, I'm asking for a real use case, similar to the one provided by enumerate() or iterkeys(). > I think that in all cases where you want this (enter/exit implies context > method availability), it's going to be the case that you want the context > method to return self, just as iterating an object with a next() method > normally returns that object. Yeah, of course, just like for iterators. But the question remains, under what circumstances is it convenient to call __context__() explicit, and pass the result to a with-statement? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Wed Apr 26 01:21:31 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Apr 2006 19:21:31 -0400 Subject: [Python-Dev] what do you like about other trackers and what do youhate about SF? References: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> Message-ID: <e2matq$kr$1@sea.gmane.org> "Brett Cannon" <brett at python.org> wrote in message news:bbaeab100604251313r1987c13fwd73df678c84e7da2 at mail.gmail.com... 
> So, if you could, please reply to this message with ONE thing you have > found in a tracker other than SF that you have liked (especially > compared to SF) and ONE thing you dislike/hate about SF's tracker. I > will use the replies as a quick-and-dirty features list of stuff that > we would like to see demonstrated in the test trackers. The most useful thing I can think of is to be able to tag each item by the doc section (lang ref, lib ref, tut, pep) most applicable to the item and then be able to sort tracker items (on any of the lists, possibly) by such tags or be able to extract all tracker items, or all open tracker items, for a particular section. Terry Jan Reedy From guido at python.org Wed Apr 26 01:23:18 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 16:23:18 -0700 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <20060424194901.GA3763@panix.com> References: <444B102F.5060201@iinet.net.au> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <20060424194901.GA3763@panix.com> Message-ID: <ca471dc20604251623x65e305e3ocbb0c365ad3ffbf1@mail.gmail.com> On 4/24/06, Aahz <aahz at pythoncraft.com> wrote: > Let's go back to a pseudo-coded with statement: > > with EXPRESSION [as NAME]: > BLOCK > > What happens while BLOCK is being executed? Again, here's what I said > originally: > > EXPRESSION returns a value that the with statement uses to create a > context (a special kind of namespace). The context is used to > execute the BLOCK. The block might end normally, get terminated by > a break or return, or raise an exception. No matter which of those > things happens, the context contains code to clean up after the > block. I strongly object to your use of the term "namespace" here. The with statement does *not* create a new namespace. Using the term namespace will only confuse people who understand what it means (in Python) -- we have the global namespace, the builtin namespace, the local namespace, classes introduce a new namespace, etc. The with-statement does *not* create a namespace in this sense -- there's no new place where name lookup can take place. In particular, this code prints 42: x = 1 with whatever(doesnt_matter): x = 42 print x -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Wed Apr 26 01:24:53 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Apr 2006 19:24:53 -0400 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] References: <fb6fbf560604251310j24272440s1be87ea0201dc7d2@mail.gmail.com> Message-ID: <e2mb44$13j$1@sea.gmane.org> "Jim Jewett" <jimjjewett at gmail.com> wrote in message news:fb6fbf560604251310j24272440s1be87ea0201dc7d2 at mail.gmail.com... >> The latest weekly tracker summary says about 1300, + 200 RFEs. ... > >> I worry about ... a batch of 50-100 nice new patches could then sit >> unreviewed on the patch tracker along with those already there. > > Is there a good way to flag a patch as reviewed and recommendation made? Perhaps you could submit this as your wish item for a new tracker (see Brett's post). 
(I already used up mine ;-) tjr From aahz at pythoncraft.com Wed Apr 26 01:31:00 2006 From: aahz at pythoncraft.com (Aahz) Date: Tue, 25 Apr 2006 16:31:00 -0700 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <ca471dc20604251623x65e305e3ocbb0c365ad3ffbf1@mail.gmail.com> References: <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <20060424194901.GA3763@panix.com> <ca471dc20604251623x65e305e3ocbb0c365ad3ffbf1@mail.gmail.com> Message-ID: <20060425233100.GB18042@panix.com> On Tue, Apr 25, 2006, Guido van Rossum wrote: > On 4/24/06, Aahz <aahz at pythoncraft.com> wrote: >> >> Let's go back to a pseudo-coded with statement: >> >> with EXPRESSION [as NAME]: >> BLOCK >> >> What happens while BLOCK is being executed? Again, here's what I said >> originally: >> >> EXPRESSION returns a value that the with statement uses to create a >> context (a special kind of namespace). The context is used to >> execute the BLOCK. The block might end normally, get terminated by >> a break or return, or raise an exception. No matter which of those >> things happens, the context contains code to clean up after the >> block. > > I strongly object to your use of the term "namespace" here. The with > statement does *not* create a new namespace. Using the term namespace > will only confuse people who understand what it means (in Python) -- > we have the global namespace, the builtin namespace, the local > namespace, classes introduce a new namespace, etc. The with-statement > does *not* create a namespace in this sense -- there's no new place > where name lookup can take place. Right -- I've already been chastised for that. Unless someone has a better idea, I'm going to call it a "wrapper". -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From bfulg at pacbell.net Wed Apr 26 01:41:04 2006 From: bfulg at pacbell.net (Brent Fulgham) Date: Tue, 25 Apr 2006 16:41:04 -0700 (PDT) Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> Message-ID: <20060425234104.35361.qmail@web81210.mail.mud.yahoo.com> "Included are summary descriptions of the formats of all of the standard metadata produced by setuptools, along with pointers to the existing manuals that describe the syntax used for representing requirements, entry points, etc. as text. The .egg, .egg-info, and .egg-link formats and layouts are also specified...." I also follow the chicken Scheme mailing list, and initially though this was a mistaken reference to http://www.call-with-current-continuation.org/eggs/. Is there any concern that the use of 'egg' might cause some confusion? -Brent From pje at telecommunity.com Wed Apr 26 02:19:04 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 20:19:04 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? 
In-Reply-To: <ca471dc20604251618q131aaecfvc6949ba0001df37d@mail.gmail.co m> References: <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> At 04:18 PM 4/25/2006 -0700, Guido van Rossum wrote: >But the question remains, >under what circumstances is it convenient to call __context__() >explicit, and pass the result to a with-statement? Oh. I don't know of any; I previously asked the same question myself. I just eventually answered myself with "I don't care; we need to require self-returning __context__ on execution context objects so that the documentation can appear vaguely sane." :) So, I don't know of any non-self-returning use cases for __context__ on an object that has __enter__ and __exit__. In fact, I suspect that we could strengthen the requirements to say that: 1. If you have __enter__ and __exit__, you MUST have a self-returning __context__ 2. If you don't have __enter__ and __exit__, you MUST NOT have a self-returning __context__ #2 is obvious since you can't use the object with "with" otherwise. #1 reflects the fact that it doesn't make any sense to take the context of a context. From pje at telecommunity.com Wed Apr 26 02:21:38 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 20:21:38 -0400 Subject: [Python-Dev] PEP 343 update (with statement context terminology) In-Reply-To: <20060425233100.GB18042@panix.com> References: <ca471dc20604251623x65e305e3ocbb0c365ad3ffbf1@mail.gmail.com> <79990c6b0604231547i1c279e8ayf6ad07215132c64b@mail.gmail.com> <444C3B79.3020405@gmail.com> <79990c6b0604240103j13611f6rcc9670f741da4ac3@mail.gmail.com> <444C9142.5050409@gmail.com> <5.1.1.6.0.20060424101158.01e5d7c8@mail.telecommunity.com> <79990c6b0604240939i1d90f4b8x9063178575c94843@mail.gmail.com> <5.1.1.6.0.20060424145528.01f8c900@mail.telecommunity.com> <5.1.1.6.0.20060424153715.036de008@mail.telecommunity.com> <20060424194901.GA3763@panix.com> <ca471dc20604251623x65e305e3ocbb0c365ad3ffbf1@mail.gmail.com> Message-ID: <5.1.1.6.0.20060425201932.01e83c30@mail.telecommunity.com> At 04:31 PM 4/25/2006 -0700, Aahz wrote: >Right -- I've already been chastised for that. Unless someone has a >better idea, I'm going to call it a "wrapper". Better idea: just delete the parenthetical about a namespace and leave the rest of your text alone, at least until the dust settles. I thought your original text was perfect except for the namespace thing. From guido at python.org Wed Apr 26 02:20:17 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 17:20:17 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> Message-ID: <ca471dc20604251720p6d30feapfb2828cb6c582ec7@mail.gmail.com> On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 04:18 PM 4/25/2006 -0700, Guido van Rossum wrote: > >But the question remains, > >under what circumstances is it convenient to call __context__() > >explicit, and pass the result to a with-statement? > > Oh. I don't know of any; I previously asked the same question myself. 
I > just eventually answered myself with "I don't care; we need to require > self-returning __context__ on execution context objects so that the > documentation can appear vaguely sane." :) > > So, I don't know of any non-self-returning use cases for __context__ on an > object that has __enter__ and __exit__. In fact, I suspect that we could > strengthen the requirements to say that: > > 1. If you have __enter__ and __exit__, you MUST have a self-returning > __context__ > > 2. If you don't have __enter__ and __exit__, you MUST NOT have a > self-returning __context__ > > #2 is obvious since you can't use the object with "with" otherwise. #1 > reflects the fact that it doesn't make any sense to take the context of a > context. I would augment #1 to clarify that if you have __enter__ and __exit__ you may not have __context__ at all; if you have all three, __context__ must return self. #2 is unnecessary (already covered elsewhere). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Wed Apr 26 02:35:38 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 20:35:38 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <ca471dc20604251720p6d30feapfb2828cb6c582ec7@mail.gmail.com > References: <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> At 05:20 PM 4/25/2006 -0700, Guido van Rossum wrote: >I would augment #1 to clarify that if you have __enter__ and __exit__ >you may not have __context__ at all; if you have all three, >__context__ must return self. Well, requiring the __context__ allows us to ditch the otherwise complex problem of why @contextfactory functions' return value is usable by "with", without having to explain it separately. So, I don't think there's any reason to provide an option; there should be Only One Way To Do It. Well, actually, two. That is, you can have one method or all three. Two is right out. :) (ObMontyPython: Wait, I'll come in again...) From pje at telecommunity.com Wed Apr 26 02:47:49 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 20:47:49 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <20060425234104.35361.qmail@web81210.mail.mud.yahoo.com> References: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060425202336.01e850d8@mail.telecommunity.com> At 04:41 PM 4/25/2006 -0700, Brent Fulgham wrote: >"Included are summary descriptions of the formats of all of the standard >metadata produced by setuptools, along with pointers to the existing >manuals that describe the syntax used for representing requirements, entry >points, etc. as text. The .egg, .egg-info, and .egg-link formats and >layouts are also specified...." > >I also follow the chicken Scheme mailing list, and initially though this was >a mistaken reference to http://www.call-with-current-continuation.org/eggs/. > >Is there any concern that the use of 'egg' might cause some confusion? Not for the software, anyway. As long as nobody asks EasyInstall to install something from that page, and as long as they're not in the habit of installing Scheme extensions to their Python directories, everything will be fine. 
:) Just for the heck of it, I tried asking easy_install to install some of the stuff on that page, and it griped about most of the eggs listed on that page not having version numbers, and then it barfed with a ZipImportError after downloading the .egg and observing that it was not a valid zip file. (It hadn't actually installed the egg file yet, so no changes were made to the Python installation.) So, also just for the heck of it, I'm tempted to add some code to easy_install to notice when it encounters a tarball .egg (which is what the Scheme/Chicken eggs are), and maybe have it explain that Scheme eggs aren't Python eggs, perhaps in humorous fashion. If I did add such code, you might even call it an "easter egg", I suppose... ;) From tjreedy at udel.edu Wed Apr 26 02:50:30 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Apr 2006 20:50:30 -0400 Subject: [Python-Dev] what do you like about other trackers and what doyouhate about SF? References: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> <e2matq$kr$1@sea.gmane.org> Message-ID: <e2mg4l$cbb$1@sea.gmane.org> "Terry Reedy" <tjreedy at udel.edu> wrote in message news:e2matq$kr$1 at sea.gmane.org... > The most useful thing I can think of is to be able to tag each item by > the doc section (lang ref, lib ref, tut, pep) most applicable to the item > and then be able to sort tracker items (on any of the lists, possibly) by > such tags or be able to extract all tracker items, or all open tracker > items, for a particular section. In thinking some more, I realized that the *basic problem* is that the SF bug tracker is a mostly-closed, fixed-design, shared DB application with data access restricted to the canned interface. What I therefore think we need is an extensible application (preferably open-source) written on top of an rdbms (ditto). We should be able to add fields, tables, reports, and scripts. We should be able to write the latter in Python using the standard DB API. If necessary, we should be able to add the interface if not already existent. I think being able to access *our* data and do whatever we want with it is, in the long run, more important than any specific list of canned features. My first post, quoted above, illustrates the sort of Python-specific feature that I would not expect to find exactly built in to any tracker but which we could add with the flexibility that is my real wish. Terry Jan Reedy From brett at python.org Wed Apr 26 04:06:14 2006 From: brett at python.org (Brett Cannon) Date: Tue, 25 Apr 2006 20:06:14 -0600 Subject: [Python-Dev] draft of externally maintained packages PEP Message-ID: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> here is the rough draft of the PEP for packages maintained externally from Python itself. There is some missing, though, that I would like help filling in. I don't know who to list as the contact person (i.e., the Python developer in charge of the code) for Expat, Optik or pysqlite. I know Greg Ward wrote Optik, but I don't know if he is still active at all (I saw AMK's email on Greg being busy). I also thought Gerhard Haering was in charge of pysqlite, but he has never responded to any emails about this topic. Maybe the responsibility should go to Anthony since I know he worked to get the package in and probably cares about keeping it updated? As for Expat (the parser), is that Fred? I also don't know what version of ctypes is in 2.5 . Thomas, can you tell me? Otherwise, I have cc:ed the people I do know who are in charge of some packages. 
Can you please look over your package information and let me know if you want anything changed? ------------------------------------------------------- PEP: XXX Title: Externally Maintained Packages Version: $Revision: 43251 $ Last-Modified: $Date: 2006-03-23 06:28:55 -0800 (Thu, 23 Mar 2006) $ Author: Brett Cannon <brett at python.org> Status: Active Type: Informational Content-Type: text/x-rst Created: XX-XXX-2006 Abstract ======== There are many great pieces of Python software developed outside of the Python standard library. Sometimes it makes sense to incorporate these externally maintained packages into the standard library in order to fill a gap in the tools provided by Python. But by having the packages maintained externally it means Python's developers do not have direct control over the packages. Some package developers prefer to have bug reports and patches go through them first instead of being directly applied by Python developers. This PEP is meant to record details of packages in the standard library. Specifically, it is meant to keep track of any specific maintenance needs for each package. It also is meant to allow people to know which version of a package is in which version of Python. Externally Maintained Packages ============================== Below is a list of modules/packages within Python that are externally maintained. Any special notes in terms of maintenance of the code within the Python code repository are mentioned. The section title is the name of the package as known outside of the Python standard library. The "standard library name" is what the package is named within Python. The "contact person" is the Python developer in charge of maintaining the package. The "synchronisation history" lists what external version of the package was included in each version of Python (if different from the previous version). ctypes ------ - Web page http://starship.python.net/crew/theller/ctypes/ - Standard library name ctypes - Contact person Thomas Heller - Synchronisation history * XXX (2.5) Bugs can be reported to either the Python tracker [#python-tracker]_ or the ctypes tracker [#ctypes-tracker]_ and assigned to Thomas Heller. ElementTree ----------- - Web page http://effbot.org/zone/element-index.htm - Standard library name xml.etree - Contact person Fredrik Lundh - Synchronisation history * 1.2.6 (2.5) Patches should not be directly applied to Python HEAD, but instead reported to the Python tracker [#python-tracker]_ (critical bug fixes are the exception). Bugs should also be reported to the Python tracker. Both bugs and patches should be assigned to Fredrik Lundh. Expat XML parser ---------------- - Web page http://www.libexpat.org/ - Standard library name N/A (this refers to the parser itself, and not the Python bindings) - Contact person XXX - Synchronisation history * 1.95.8 (2.4) * 1.95.7 (2.3) Optik ----- - Web site http://optik.sourceforge.net/ - Standard library name optparse - Contact person XXX - Synchronisation history * 1.5.1 (2.5) * 1.5a1 (2.4) * 1.4 (2.3) pysqlite -------- - Web site http://www.sqlite.org/ - Standard library name sqlite3 - Contact person XXX - Synchronisation history * 2.1.3 (2.5) References ========== .. [#python-tracker] Python tracker (http://sourceforge.net/tracker/?group_id=5470) .. [#ctypes-tracker] ctypes tracker (http://sourceforge.net/tracker/?group_id=71702) Copyright ========= This document has been placed in the public domain. .. 
Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From brett at python.org Wed Apr 26 04:12:33 2006 From: brett at python.org (Brett Cannon) Date: Tue, 25 Apr 2006 20:12:33 -0600 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <442F6F50.2030002@v.loewis.de> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> <442F6F50.2030002@v.loewis.de> Message-ID: <bbaeab100604251912n5f7a06dcgd2e195283c761206@mail.gmail.com> On 4/2/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > Tim Peters wrote: > > For gcc we _could_ solve it in the obvious way, which I guess Martin > > was hoping to avoid: change Unixish config to detect whether the > > platform C supports the "z" format modifier (I believe gcc does), and > > if so arrange to stick > > > > #define PY_FORMAT_SIZE_T "z" > > It's not gcc to support "z" (except for the compile-time check); it's > the C library (on Unix, the C library is part of the system, not part > of the compiler). > > But yes: if we could detect in configure that the C library supports > %zd, then we should use that. I created patch 1474907 with a fix for it. Checks if %zd works for size_t and if so sets PY_FORMAT_SIZE_T to "z", otherwise just doesn't set the macro def. Assigned to Martin to make sure I didn't foul it up, but pretty much anyone could probably double-check it. -Brett From guido at python.org Wed Apr 26 05:09:34 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 20:09:34 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> Message-ID: <ca471dc20604252009s5656db3mfcfa62e270351da2@mail.gmail.com> On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 05:20 PM 4/25/2006 -0700, Guido van Rossum wrote: > >I would augment #1 to clarify that if you have __enter__ and __exit__ > >you may not have __context__ at all; if you have all three, > >__context__ must return self. > > Well, requiring the __context__ allows us to ditch the otherwise complex > problem of why @contextfactory functions' return value is usable by "with", > without having to explain it separately. Really? I thought that that was due to the magic in the decorator (and in the class it uses). In this case the use of magic is fine by me; I know I could reconstruct it from scratch if I had to (with only one or two bugs :-) but it's convenient to have it in the library. The primary use case is this: @contextfactory def foo(): ... with foo(): ... but a secondary one is this: class C: ... @contextfactory def __context__(self): ... with C(): .... Because of these two different uses it makes sense for @contextfactory to return an object that has both __context__ and __enter__/__exit__ methods. But that *still* doesn't explain why we are recommending that everything providing __enter__/__exit__ must also provide __context__! I still think we should take it out. 
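Spelled out slightly, a sketch of those two usages (the import renames contextlib's generator decorator to the proposed contextfactory name; locked and Tracker are made-up examples, and the second form relies on the alpha-era __context__ protocol):

    from contextlib import contextmanager as contextfactory  # proposed name

    # Primary use case: a standalone factory used directly in a with
    # statement, e.g. "with locked(mylock): ...".
    @contextfactory
    def locked(lock):
        lock.acquire()
        try:
            yield lock
        finally:
            lock.release()

    # Secondary use case: defining a class's __context__ method, so that
    # "with Tracker(): ..." works while the with statement still calls
    # __context__() on its expression.
    class Tracker(object):
        def __init__(self):
            self.depth = 0

        @contextfactory
        def __context__(self):
            self.depth += 1
            try:
                yield self
            finally:
                self.depth -= 1

(In 2.5 the with statement itself still needs "from __future__ import with_statement".)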
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Wed Apr 26 05:29:46 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 23:29:46 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <ca471dc20604252009s5656db3mfcfa62e270351da2@mail.gmail.com > References: <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> At 08:09 PM 4/25/2006 -0700, Guido van Rossum wrote: >On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 05:20 PM 4/25/2006 -0700, Guido van Rossum wrote: > > >I would augment #1 to clarify that if you have __enter__ and __exit__ > > >you may not have __context__ at all; if you have all three, > > >__context__ must return self. > > > > Well, requiring the __context__ allows us to ditch the otherwise complex > > problem of why @contextfactory functions' return value is usable by "with", > > without having to explain it separately. > >Really? I thought that that was due to the magic in the decorator (and >in the class it uses). Actually, I got that explanation backwards above. What I meant is that the hard thing to explain is why you can use @contextfactory to define a __context__ method. All other examples of @contextfactory are perfectly fine, it's only the fact that you can use it to define a __context__ method. See, if @contextfactory functions return a thing *with* a __context__ method, how is that usable with "with"? It isn't, unless the thing also happens to have __enter__/__exit__ methods. This was the hole in the documentation that caused Nick to seek to revisit the decorator name in the first place. >But that *still* doesn't explain why we are recommending that >everything providing __enter__/__exit__ must also provide __context__! Because it means that users never have to worry about which kind of object they have. Either you pass a 1-method object to "with", or a 3-method object to "with". If you have a 2-method object, you can never pass it to "with". Here's the thing: you're going to have 1-method objects and you're going to have 3-method objects, we know that. But the only time a 2-method object is useful is if it's a tag-along to a 1-method object. It's easier from a documentation perspective to just say "you can have one method or three", and not get into this whole "well, you can also have two, but only if you use it with a one". And if you rule out the existence of the two-method variant, you don't have to veer into any complex questions of when you should use three methods instead of two, because the answer is simply "always use three". This isn't a technical problem, in other words. I had exactly the same POV as you on this until I read enough of Nick's rants on the subject to finally see it from the education perspective; it's easier to explain two things, one of which is a subtype of the other, versus explaining two orthogonal things that sometimes go together and sometimes don't, and the reasons that you might or might not want to put them together are tricky to explain. 
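For concreteness, a sketch of the two kinds of objects being contrasted (the class names are made up; this follows the alpha-era protocol under discussion):

    # A "3-method" object: __enter__/__exit__ plus a self-returning
    # __context__, so it can be passed straight to a with statement.
    class ThreeMethod(object):
        def __context__(self):
            return self
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_val, exc_tb):
            return False   # do not swallow exceptions

    # A "1-method" object: only __context__, which hands back a separate
    # 3-method object to do the actual setup and teardown (the pattern
    # described earlier in the thread for decimal.Context).
    class OneMethod(object):
        def __context__(self):
            return ThreeMethod()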
Of course, this approach opens a new hole, which is how to deal with people asking "why does it have to have a __context__ method if it's never called". So far, Nick's answer is "because we said so" (aka "deliberate design decision"), which isn't great, but at least it's honest. :) From pje at telecommunity.com Wed Apr 26 05:39:01 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Tue, 25 Apr 2006 23:39:01 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> References: <ca471dc20604252009s5656db3mfcfa62e270351da2@mail.gmail.com > <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> At 11:29 PM 4/25/2006 -0400, Phillip J. Eby wrote: >See, if @contextfactory functions return a thing *with* a __context__ >method, how is that usable with "with"? It isn't, unless the thing also >happens to have __enter__/__exit__ methods. This was the hole in the >documentation that caused Nick to seek to revisit the decorator name in the >first place. Argh. I seem to be tongue-tied this evening. What I mean is, if @contextfactory functions' return value is usable as a "with" expression, that means it must have a __context__ method. But, if you are using @contextfactory to *define* a __context__ method, the return value should clearly have __enter__ and __exit__ methods. What this means is that if we describe the one method and the two methods as independent things, there is no *single* name we can use to describe the return value of a @contextfactory function. It's a wave and a particle, so we either have to start talking about "wavicles" or have some detailed explanation of why @contextfactory function return values are both waves and particles at the same time. However, if we say that particles are a kind of wave, and have all the same features as waves but just add a few others, then we can simply say @contextfactory functions return particles, and the waviness is implied, and all is right in the world. At least, until AMK comes along and asks why you can't separate the particleness from the waveness, which was what started this whole thing in the first place... :) From fdrake at acm.org Wed Apr 26 05:59:26 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Tue, 25 Apr 2006 23:59:26 -0400 Subject: [Python-Dev] GNU info version of documentation Message-ID: <200604252359.27051.fdrake@acm.org> Something's gone south in the GNU info conversion of the documentation on the trunk. If there's anyone here with time to look at it, I'd appreciate it. 
Unfortunately, the GNU info conversion is one of the bits I understand the least in the doc tool chain, and there's little information in the error message: $ make info cd info && make EMACS=emacs WHATSNEW=whatsnew25 make[1]: Entering directory `/home/fdrake/projects/python/trunk/Doc/info' Using emacs to build the info docs EMACS=emacs ../tools/mkinfo ../lib/lib.tex python-lib.texi python-lib.info emacs -batch -q --no-site-file -l /home/fdrake/projects/python/trunk/Doc/tools/py2texi.el --eval (setq py2texi-dirs '("/home/fdrake/projects/python/trunk/Doc/lib" "/home/fdrake/projects/python/trunk/Doc/commontex" "../texinputs")) --eval (setq py2texi-texi-file-name "python-lib.texi") --eval (setq py2texi-info-file-name "python-lib.info") --eval (py2texi "/home/fdrake/projects/python/trunk/Doc/lib/lib.tex") -f kill-emacs Mark set Unknown environment: quote Unknown environment: quote Unknown environment: quote Unknown environment: quote Unknown environment: longtable Wrong type argument: char-or-string-p, nil make[1]: *** [python-lib.info] Error 255 make[1]: Leaving directory `/home/fdrake/projects/python/trunk/Doc/info' make: *** [info] Error 2 -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From guido at python.org Wed Apr 26 06:23:36 2006 From: guido at python.org (Guido van Rossum) Date: Tue, 25 Apr 2006 21:23:36 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> Message-ID: <ca471dc20604252123y64b8cc66jc0413f6b027c0664@mail.gmail.com> I'm not convinced. On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 11:29 PM 4/25/2006 -0400, Phillip J. Eby wrote: > >See, if @contextfactory functions return a thing *with* a __context__ > >method, how is that usable with "with"? It isn't, unless the thing also > >happens to have __enter__/__exit__ methods. This was the hole in the > >documentation that caused Nick to seek to revisit the decorator name in the > >first place. > > Argh. I seem to be tongue-tied this evening. What I mean is, if > @contextfactory functions' return value is usable as a "with" expression, > that means it must have a __context__ method. But, if you are using > @contextfactory to *define* a __context__ method, the return value should > clearly have __enter__ and __exit__ methods. > > What this means is that if we describe the one method and the two methods > as independent things, there is no *single* name we can use to describe the > return value of a @contextfactory function. It's a wave and a particle, so > we either have to start talking about "wavicles" or have some detailed > explanation of why @contextfactory function return values are both waves > and particles at the same time. > > However, if we say that particles are a kind of wave, and have all the same > features as waves but just add a few others, then we can simply say > @contextfactory functions return particles, and the waviness is implied, > and all is right in the world. 
At least, until AMK comes along and asks > why you can't separate the particleness from the waveness, which was what > started this whole thing in the first place... :) > > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From theller at python.net Wed Apr 26 07:55:46 2006 From: theller at python.net (Thomas Heller) Date: Wed, 26 Apr 2006 07:55:46 +0200 Subject: [Python-Dev] draft of externally maintained packages PEP In-Reply-To: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> References: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> Message-ID: <444F0B62.7010207@python.net> Brett Cannon wrote: > here is the rough draft of the PEP for packages maintained externally > from Python itself. There is some missing, though, that I would like > help filling in. > > I don't know who to list as the contact person (i.e., the Python > developer in charge of the code) for Expat, Optik or pysqlite. I know > Greg Ward wrote Optik, but I don't know if he is still active at all > (I saw AMK's email on Greg being busy). I also thought Gerhard > Haering was in charge of pysqlite, but he has never responded to any > emails about this topic. Maybe the responsibility should go to > Anthony since I know he worked to get the package in and probably > cares about keeping it updated? As for Expat (the parser), is that > Fred? > > I also don't know what version of ctypes is in 2.5 . Thomas, can you tell me? > ctypes > ------ > - Web page > http://starship.python.net/crew/theller/ctypes/ > - Standard library name > ctypes > - Contact person > Thomas Heller > - Synchronisation history * 0.9.9.4 (2.5a1) * 0.9.9.6 (2.5a2) From vys at renet.ru Wed Apr 26 09:15:13 2006 From: vys at renet.ru (Vladimir 'Yu' Stepanov) Date: Wed, 26 Apr 2006 11:15:13 +0400 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <20060424114024.66D0.JCARLSON@uci.edu> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> <444C9FBE.2060709@renet.ru> <20060424114024.66D0.JCARLSON@uci.edu> Message-ID: <444F1E01.904@renet.ru> Josiah Carlson wrote: > There exists various C and Python implementations of both AVL and > Red-Black trees. For users of Python who want to use AVL and/or > Red-Black trees, I would urge them to use the Python implementations. > In the case of *needing* the speed of a C extension, there already > exists a CPython extension module for AVL trees: > http://www.python.org/pypi/pyavl/1.1 > > I would suggest you look through some suggested SoC projects in the > wiki: > http://wiki.python.org/moin/SummerOfCode > > - Josiah > > Thanks for the answer! I already saw pyavl-1.1. But for this reason I would like to see the module in a standard package python. Functionality for pyavl and dict to compare difficultly. Functionality of my module will differ from functionality dict in the best party. I have lead some tests on for work with different types both for a package pyavl-1.1, and for the prototype of own module. The script of check is resulted in attached a file avl-timeit.py In files timeit-result-*-*.txt results of this test. The first in the name of a file means quantity of the added elements, the second - argument of a method timeit. 
There it is visible, that in spite of the fact that the module xtree is more combined in comparison with pyavl the module (for everyone again inserted pair [the key, value], is created two elements: python object - pair, and an internal element of a tree), even in this case it works a little bit more quickly. Besides the module pyavl is unstable for work in multithread appendices (look the attached file module-avl-is-thread-unsafe.py). I think, that creation of this type (together with type of pair), will make programming more convenient since sorting by function sort will be required less often. I can probably borrow in this module beyond the framework of the project google. The demand of such type of data is interesting. Because of necessity of processing `gcmodule.c' and `obmalloc.c' this module cannot be realized as the external module. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: avl-timeit.py Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/50c773c4/attachment-0001.pot -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: install-pyavl.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/50c773c4/attachment-0001.txt From vys at renet.ru Wed Apr 26 09:16:06 2006 From: vys at renet.ru (Vladimir 'Yu' Stepanov) Date: Wed, 26 Apr 2006 11:16:06 +0400 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <20060424114024.66D0.JCARLSON@uci.edu> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> <444C9FBE.2060709@renet.ru> <20060424114024.66D0.JCARLSON@uci.edu> Message-ID: <444F1E36.30302@renet.ru> Josiah Carlson wrote: > There exists various C and Python implementations of both AVL and > Red-Black trees. For users of Python who want to use AVL and/or > Red-Black trees, I would urge them to use the Python implementations. > In the case of *needing* the speed of a C extension, there already > exists a CPython extension module for AVL trees: > http://www.python.org/pypi/pyavl/1.1 > > I would suggest you look through some suggested SoC projects in the > wiki: > http://wiki.python.org/moin/SummerOfCode > > - Josiah > > Thanks for the answer! I already saw pyavl-1.1. But for this reason I would like to see the module in a standard package python. Functionality for pyavl and dict to compare difficultly. Functionality of my module will differ from functionality dict in the best party. I have lead some tests on for work with different types both for a package pyavl-1.1, and for the prototype of own module. The script of check is resulted in attached a file avl-timeit.py In files timeit-result-*-*.txt results of this test. The first in the name of a file means quantity of the added elements, the second - argument of a method timeit. There it is visible, that in spite of the fact that the module xtree is more combined in comparison with pyavl the module (for everyone again inserted pair [the key, value], is created two elements: python object - pair, and an internal element of a tree), even in this case it works a little bit more quickly. Besides the module pyavl is unstable for work in multithread appendices (look the attached file module-avl-is-thread-unsafe.py). 
I think, that creation of this type (together with type of pair), will make programming more convenient since sorting by function sort will be required less often. I can probably borrow in this module beyond the framework of the project google. The demand of such type of data is interesting. Because of necessity of processing `gcmodule.c' and `obmalloc.c' this module cannot be realized as the external module. -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: avl-timeit.py Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment.pot -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: install-pyavl.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: module-avl-is-thread-unsafe.py Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0001.pot -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-30-1000000.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0001.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-100-100000.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0002.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-1000-10000.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0003.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-10000-1000.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0004.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-100000-100.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0005.txt -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: timeit-result-1000000-10.txt Url: http://mail.python.org/pipermail/python-dev/attachments/20060426/3df7dab5/attachment-0006.txt From gh at ghaering.de Wed Apr 26 10:18:43 2006 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Wed, 26 Apr 2006 10:18:43 +0200 Subject: [Python-Dev] draft of externally maintained packages PEP In-Reply-To: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> References: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> Message-ID: <444F2CE3.2070707@ghaering.de> Brett Cannon wrote: > here is the rough draft of the PEP for packages maintained externally > from Python itself. There is some missing, though, that I would like > help filling in. > > I don't know who to list as the contact person (i.e., the Python > developer in charge of the code) for Expat, Optik or pysqlite. [...] > I also thought Gerhard Haering was in charge of pysqlite, but he has > never responded to any emails about this topic. Sorry for not answering any sooner. Please list me as contact person for the SQLite module. > Maybe the responsibility should go to Anthony since I know he worked > to get the package in and probably cares about keeping it updated? 
> As for Expat (the parser), is that Fred? [...] > > > pysqlite > -------- > - Web site > http://www.sqlite.org/ > - Standard library name > sqlite3 > - Contact person > XXX > - Synchronisation history > * 2.1.3 (2.5) You can add * 2.2.0 * 2.2.2 here. -- Gerhard From gh at ghaering.de Wed Apr 26 10:30:28 2006 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Wed, 26 Apr 2006 10:30:28 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> Message-ID: <444F2FA4.7080406@ghaering.de> Brett Cannon wrote: > OK, I am going to write the PEP I proposed a week or so ago, listing > all modules and packages within the stdlib that are maintained > externally so we have a central place to go for contact info or where > to report bugs on issues. This should only apply to modules that want > bugs reported outside of the Python tracker and have a separate dev > track. People who just use the Python repository as their mainline > version can just be left out. [...] I prefer to have bugs on the sqlite module reported in the pysqlite tracker at http://pysqlite.org/ aka http://initd.org/tracker/pysqlite For bug fixes I have the same position as Fredrik: security fixes, bugs that block the build and warnings from automatic checkers should be done through the Python repository and I will port them to the external version. For any changes that reach "deeper" I'd like to have them handed over to me. As it's unrealistic that all bugs are reported through the pysqlite tracker, can it please be arranged that if somebody enters a SQLite related bug through the Sourceforge tracker, that I get notified? Perhaps by defining a category SQLite here and adding me as default responsible, if that's possible on SF. Currently I'm not subscribed to python-checkins and didn't see a need to. Is there a need to for Python core developers? I think there's no better way except subscribing and defining a filter for SQLite-related commits to be notified if other people commit changes to the SQLite module in Python? It's not that I'm too lazy, but I'd like to keep the number of things I need to monitor low. -- Gerhard From p.f.moore at gmail.com Wed Apr 26 10:34:37 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Apr 2006 09:34:37 +0100 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] In-Reply-To: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> References: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> Message-ID: <79990c6b0604260134o37db72e0yfb3f81374d3405d0@mail.gmail.com> On 4/25/06, Jim Jewett <jimjjewett at gmail.com> wrote: > Perhaps; part of the problem is with the SF workflow. Yes. Brett should probably add that to the list of what's wanted from a new tracker (good alerting of new items, and maybe some specific "Request commit" functionality, tied to a listing of committers) Paul. From p.f.moore at gmail.com Wed Apr 26 10:56:10 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 26 Apr 2006 09:56:10 +0100 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? 
In-Reply-To: <ca471dc20604252009s5656db3mfcfa62e270351da2@mail.gmail.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <ca471dc20604252009s5656db3mfcfa62e270351da2@mail.gmail.com> Message-ID: <79990c6b0604260156t4fe140d3jcbc25f2bf2c3d26b@mail.gmail.com> On 4/26/06, Guido van Rossum <guido at python.org> wrote: > Really? I thought that that was due to the magic in the decorator (and > in the class it uses). In this case the use of magic is fine by me; I > know I could reconstruct it from scratch if I had to (with only one or > two bugs :-) but it's convenient to have it in the library. The > primary use case is this: > > @contextfactory > def foo(): ... > > with foo(): > ... > > but a secondary one is this: > > class C: > ... > @contextfactory > def __context__(self): > ... > > with C(): > .... > > Because of these two different uses it makes sense for @contextfactory > to return an object that has both __context__ and __enter__/__exit__ > methods. > > But that *still* doesn't explain why we are recommending that > everything providing __enter__/__exit__ must also provide __context__! > I still think we should take it out. You've just hit the problem that sucked me into all this.... You either need two decorator names for the 2 different uses of contextfactory, or you need to require everything providing __enter__/__exit__ to also provide __context__, or you need to accept that @contextfactory has a bit of implementation-level magic, that people shouldn't really care about. Changing the name to contextfactory made me happy with the "implementation magic" approach, but Nick still felt the need for the stronger method requirement (IIRC, he cited the parallel with iterators, but I don't want to misrepresent his position, so I'll leave him to speak for himself). Phillip and I conceded this, as (in my case) it didn't feel like a big thing, and we were close to a workable compromise. Beyond this bit of history, I don't really have a strong opinion either way. Paul. From ncoghlan at gmail.com Wed Apr 26 11:23:07 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 19:23:07 +1000 Subject: [Python-Dev] Updated context management documentation In-Reply-To: <5.1.1.6.0.20060425114129.01eefbb8@mail.telecommunity.com> References: <5.1.1.6.0.20060425114129.01eefbb8@mail.telecommunity.com> Message-ID: <444F3BFB.7000201@gmail.com> Phillip J. Eby wrote: > At 12:08 AM 4/26/2006 +1000, Nick Coghlan wrote: >> Secondly, the documentation now shows an example >> of a class with a close() method using contextlib.closing directly as >> its own >> __context__() method. > > Sadly, that would only work if closing() were a function. Classes don't > get turned into methods, so you'll need to change that example to use: > > def __context__(self): > return closing(self) > > instead. D'oh! I'll fix the example :) Cheers, Nick. 
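For reference, here is a sketch of what the corrected example presumably looks like. This is an editor's illustration, not the actual documentation patch; it assumes the alpha-era API discussed in this thread, where __context__ must return an object with __enter__/__exit__ and contextlib.closing is a class that wraps any object with a close() method. The Resource class is made up.

from __future__ import with_statement
from contextlib import closing

class Resource(object):
    def close(self):
        print "resource closed"
    def __context__(self):
        # closing is a class, not a function, so it can't be used directly
        # as the __context__ method; wrap self explicitly instead.
        return closing(self)

with Resource():
    pass        # close() is called when the block exits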
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From thomas at python.org Wed Apr 26 11:44:01 2006 From: thomas at python.org (Thomas Wouters) Date: Wed, 26 Apr 2006 11:44:01 +0200 Subject: [Python-Dev] big-memory tests Message-ID: <9e804ac0604260244m842924fl38756758ad1cc47b@mail.gmail.com> Neal and I wrote a few tests that exercise the Py_ssize_t code on 64bit hardware: http://python.org/sf/1471578 Now that it's configurable and integrated with test_support and all, we think it's time to include it in the normal testsuite. I'd really like it to be in one of the next alphas, so people with an interest in insane-sized objects (and insane memory in their hardware) can run the tests, and (hopefully) add more. The tests that are there passed on AMD64 with 16Gb of RAM (although the tests that took 48Gb, and thus a ton of swap, took literally days to run,) but there are still many potential tests to add. I still want to add some 'find memory requirements for bigmem tests' and maybe 'run only bigmem tests' code to regrtest, and the bigmemtest decorator should be moved to test_support so we can intersperse bigmem tests with regular tests (say, in test_mmap.) But other than that, anyone object to committing this to the trunk (not sandbox)? The tests *are* run by default, but only with very small limits (so they'll use less memory than most of the tests), to make sure the tests do function. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060426/14df4f43/attachment-0001.html From ncoghlan at gmail.com Wed Apr 26 11:53:38 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 19:53:38 +1000 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <ca471dc20604252123y64b8cc66jc0413f6b027c0664@mail.gmail.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> <ca471dc20604252123y64b8cc66jc0413f6b027c0664@mail.gmail.com> Message-ID: <444F4322.8020009@gmail.com> Guido van Rossum wrote: > I'm not convinced. Phillip managed to capture most of my reasons for documenting it that way. However, trying to find a better term than the "with statement context" phrase used in the latest docs is causing me to have doubts. The term I'm currently favouring is "managed context" because of the obvious relationship with "context manager". But with the current documentation, it's potentially unclear which object is doing the managing, as all managed contexts are required to serve as context managers as well. So the term ends up potentially a little confusing (is the manager of a managed context the original context manager, or the managed context itself?). I'd have to actually try it out to see if the terminology could be kept clear. OTOH, if the two protocols are made orthogonal, then it's clear that the manager is always the original object with the __context__ method. 
Then the three cases are: - a pure context manager (only provides __context__) - a pure managed context (only provides __enter__/__exit__) - a managed context which can be its own context manager (provides all three) In the standard library, decimal.Context and threading.Condition would be examples of the first, decimal.ManagedContext would be an example of the second, and contextlib.GeneratorContext, contextlib.closing, files and the various threading module lock objects would be examples of the third. The final thing that occurred to me is that if we do find a use case for mixing and matching context managers and managed contexts, it would be easy enough to add a function to contextlib to assist in doing so: class SimpleContextManager(object): def __init__(ctx): self.context = ctx def __context__(self): return self.context def contextmanager(obj): if hasattr(obj, "__context__"): return obj if hasattr(obj, "__enter__") and hasattr(obj, "__exit__"): return SimpleContextManager(obj) raise TypeError("Require a context manager or a managed context") The ability to add that function if we eventually decide we need it means that we don't have to worry about inadvertently ruling out use cases we simply haven't thought of yet (which was one of my concerns). Since I can supply reasonable answers to all of my earlier reservations, I now believe we could drop this requirement and still end up with comprehensible documentation. Whether it's a good idea to do so. . . I'm not sure. I still quite like the idea of having to describe only two kinds of object instead of three, but making the protocols orthogonal has its merits, too. Cheers, Nick. > On 4/25/06, Phillip J. Eby <pje at telecommunity.com> wrote: >> At 11:29 PM 4/25/2006 -0400, Phillip J. Eby wrote: >>> See, if @contextfactory functions return a thing *with* a __context__ >>> method, how is that usable with "with"? It isn't, unless the thing also >>> happens to have __enter__/__exit__ methods. This was the hole in the >>> documentation that caused Nick to seek to revisit the decorator name in the >>> first place. >> Argh. I seem to be tongue-tied this evening. What I mean is, if >> @contextfactory functions' return value is usable as a "with" expression, >> that means it must have a __context__ method. But, if you are using >> @contextfactory to *define* a __context__ method, the return value should >> clearly have __enter__ and __exit__ methods. >> >> What this means is that if we describe the one method and the two methods >> as independent things, there is no *single* name we can use to describe the >> return value of a @contextfactory function. It's a wave and a particle, so >> we either have to start talking about "wavicles" or have some detailed >> explanation of why @contextfactory function return values are both waves >> and particles at the same time. >> >> However, if we say that particles are a kind of wave, and have all the same >> features as waves but just add a few others, then we can simply say >> @contextfactory functions return particles, and the waviness is implied, >> and all is right in the world. At least, until AMK comes along and asks >> why you can't separate the particleness from the waveness, which was what >> started this whole thing in the first place... 
:) >> >> >> > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From mwh at python.net Wed Apr 26 12:35:55 2006 From: mwh at python.net (Michael Hudson) Date: Wed, 26 Apr 2006 11:35:55 +0100 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <444F2FA4.7080406@ghaering.de> ( =?iso-8859-1?q?Gerhard_H=E4ring's_message_of?= "Wed, 26 Apr 2006 10:30:28 +0200") References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <444F2FA4.7080406@ghaering.de> Message-ID: <2maca8hbec.fsf@starship.python.net> Gerhard H?ring <gh at ghaering.de> writes: > Currently I'm not subscribed to python-checkins and didn't see a need > to. Is there a need to for Python core developers? I would say it's "encouraged". > I think there's no better way except subscribing and defining a > filter for SQLite-related commits to be notified if other people > commit changes to the SQLite module in Python? That sounds like the least effort yes. mailman has 'topic filters' so one could define a sqlite topic for python-checkins and you could just subscribe to that, but that requires Barry doing something :-) (I think). Cheers, mwh -- Two things I learned for sure during a particularly intense acid trip in my own lost youth: (1) everything is a trivial special case of something else; and, (2) death is a bunch of blue spheres. -- Tim Peters, 1 May 1998 From mwh at python.net Wed Apr 26 12:37:03 2006 From: mwh at python.net (Michael Hudson) Date: Wed, 26 Apr 2006 11:37:03 +0100 Subject: [Python-Dev] big-memory tests In-Reply-To: <9e804ac0604260244m842924fl38756758ad1cc47b@mail.gmail.com> (Thomas Wouters's message of "Wed, 26 Apr 2006 11:44:01 +0200") References: <9e804ac0604260244m842924fl38756758ad1cc47b@mail.gmail.com> Message-ID: <2m64kwhbcg.fsf@starship.python.net> "Thomas Wouters" <thomas at python.org> writes: > Neal and I wrote a few tests that exercise the Py_ssize_t code on 64bit > hardware: > > http://python.org/sf/1471578 > > Now that it's configurable and integrated with test_support and all, we > think it's time to include it in the normal testsuite. I'd really like it > to be in one of the next alphas, so people with an interest in > insane-sized objects (and insane memory in their hardware) can run the > tests, and (hopefully) add more. The tests that are there passed on AMD64 > with 16Gb of RAM (although the tests that took 48Gb, and thus a ton of > swap, took literally days to run,) but there are still many potential > tests to add. > > I still want to add some 'find memory requirements for bigmem tests' and > maybe 'run only bigmem tests' code to regrtest, and the bigmemtest > decorator should be moved to test_support so we can intersperse bigmem > tests with regular tests (say, in test_mmap.) But other than that, anyone > object to committing this to the trunk (not sandbox)? Not at all, get it in pronto, I say! Cheers, mwh -- ... but I guess there are some things that are so gross you just have to forget, or it'll destroy something within you. perl is the first such thing I have known. 
-- Erik Naggum, comp.lang.lisp From theller at python.net Wed Apr 26 13:06:52 2006 From: theller at python.net (Thomas Heller) Date: Wed, 26 Apr 2006 13:06:52 +0200 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <2maca8hbec.fsf@starship.python.net> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <444F2FA4.7080406@ghaering.de> <2maca8hbec.fsf@starship.python.net> Message-ID: <e2nk8d$oka$1@sea.gmane.org> Michael Hudson wrote: > Gerhard H?ring <gh at ghaering.de> writes: > >> Currently I'm not subscribed to python-checkins and didn't see a need >> to. Is there a need to for Python core developers? > > I would say it's "encouraged". > >> I think there's no better way except subscribing and defining a >> filter for SQLite-related commits to be notified if other people >> commit changes to the SQLite module in Python? > > That sounds like the least effort yes. mailman has 'topic filters' so > one could define a sqlite topic for python-checkins and you could just > subscribe to that, but that requires Barry doing something :-) (I > think). One could as well read python-checkins via gmane, and set up client-side filtering in the nntp reader. Thomas From anthony at interlink.com.au Wed Apr 26 13:15:35 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 26 Apr 2006 21:15:35 +1000 Subject: [Python-Dev] TRUNK FREEZE from 00:00 UTC, 27th April 2006 for 2.5a2 Message-ID: <200604262115.39655.anthony@interlink.com.au> I'm going to be cutting 2.5a2 tomorrow. The trunk should be considered FROZEN from 00:00 UTC on the 27th of April (in about 12-13 hours time). Unless you're one of the release team, please don't make checkins to the trunk until the release is done. I'll send another message when it's done. Thanks, Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From ncoghlan at gmail.com Wed Apr 26 13:19:19 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 21:19:19 +1000 Subject: [Python-Dev] Reviewed patches [was: SoC proposal: "fix some old, old bugs in sourceforge"] In-Reply-To: <20060425213704.GA19842@rogue.amk.ca> References: <fb6fbf560604251310j24272440s1be87ea0201dc7d2@mail.gmail.com> <20060425213704.GA19842@rogue.amk.ca> Message-ID: <444F5737.1070100@gmail.com> A.M. Kuchling wrote: > On Tue, Apr 25, 2006 at 04:10:02PM -0400, Jim Jewett wrote: >> I don't see a good way to say "It looks good to me". I don't see any >> way to say "There were issues, but I think they're resolved now". So >> either way, I and the author are both sort of waiting for a committer >> to randomly happen back over old patches. > > If there are too many patches waiting for a committer to assess them, > that probably points up the need for more committers. Simply adding more committers in general isn't necessarily the answer though, since it takes a while to feel entitled to check code in for different areas. I know I'm pretty reluctant to commit anything not directly related to the AST compiler, PEP 338 or PEP 343. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Wed Apr 26 13:28:52 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 21:28:52 +1000 Subject: [Python-Dev] what do you like about other trackers and what do you hate about SF? 
In-Reply-To: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> References: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> Message-ID: <444F5974.6010406@gmail.com> Brett Cannon wrote: > I am starting to hash out what the Call for Trackers is going to say > on the Infrastructure mailing list. Laura Creighton suggested we have > a list of features that we would like to see and what we all hate > about SF so as to provide some guidelines in terms of how to set up > the test trackers that people try to sell us on. > > So, if you could, please reply to this message with ONE thing you have > found in a tracker other than SF that you have liked (especially > compared to SF) and ONE thing you dislike/hate about SF's tracker. I > will use the replies as a quick-and-dirty features list of stuff that > we would like to see demonstrated in the test trackers. A feature I can't recall seeing anywhere, but one I'd like: the ability to define a filter, and have the tracker email me when a new bug is submitted that matches the filter (being able to create an RSS feed based on a filter would be just as good) Something that I dislike about SF: the fact that I can't use email to participate in a bug discussion, but have to use SF's somewhat horrible web interface. OK, I guess I slipped in another feature request with that second one ;) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From ncoghlan at gmail.com Wed Apr 26 14:11:39 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 26 Apr 2006 22:11:39 +1000 Subject: [Python-Dev] [Python-checkins] r45721 - python/trunk/Lib/test/test_with.py In-Reply-To: <20060426011553.D37D51E400C@bag.python.org> References: <20060426011553.D37D51E400C@bag.python.org> Message-ID: <444F637B.6080402@gmail.com> tim.peters wrote: > Author: tim.peters > Date: Wed Apr 26 03:15:53 2006 > New Revision: 45721 > > Modified: > python/trunk/Lib/test/test_with.py > Log: > Rev 45706 renamed stuff in contextlib.py, but didn't rename > uses of it in test_with.py. As a result, test_with has been skipped > (due to failing imports) on all buildbot boxes since. Alas, that's > not a test failure -- you have to pay attention to the > > 1 skip unexpected on PLATFORM: > test_with > > kinds of output at the ends of test runs to notice that this got > broken. That would be my fault - I've got about four unexpected skips I actually expect because I don't have the relevant external modules built. I must have missed this new one joining the list. > It's likely that more renaming in test_with.py would be desirable. Yep. I can see why you deferred fixing it, though :) I'm also curious as to why that test uses its own version of Nested rather than the one in contextlib. (granted, you can't inherit from the contextlib version, but containment should work fine) Anyway, I've created SF bug #1476845 to keep track of the things that need to be done to finish the PEP 343 terminology cleanup. (What we have now should be fine for alpha 2) Cheers, Nick. 
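A hedged sketch of the containment idea mentioned above: MockNested is a hypothetical helper name, and the sketch assumes (as discussed elsewhere in this thread) that the object returned by contextlib.nested exposes __enter__ and __exit__ directly.

import contextlib

class MockNested(object):
    # Wraps contextlib.nested by containment (it can't be inherited from,
    # since it's a function) and records that the protocol methods ran.
    def __init__(self, *managers):
        self.context_called = False
        self.enter_called = False
        self.exit_called = False
        self.wrapped = contextlib.nested(*managers)

    def __context__(self):
        self.context_called = True
        return self

    def __enter__(self):
        self.enter_called = True
        return self.wrapped.__enter__()

    def __exit__(self, *exc_info):
        self.exit_called = True
        return self.wrapped.__exit__(*exc_info)

Whether this delegation matches what test_with.py actually needs is another question; it is only meant to illustrate containment as opposed to inheritance.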
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From vys at renet.ru Wed Apr 26 15:53:56 2006 From: vys at renet.ru (Vladimir 'Yu' Stepanov) Date: Wed, 26 Apr 2006 17:53:56 +0400 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <4f0b69dc0604250724r6ae229b7v6a28b9df4eedd383@mail.gmail.com> References: <4039D552ADAB094BB1EA670F3E96214E0252B3FA@df-foxhound-msg.exchange.corp.microsoft.com> <444C9FBE.2060709@renet.ru> <20060424114024.66D0.JCARLSON@uci.edu> <4f0b69dc0604242006g1b2bd50fx4665b36fa02e2893@mail.gmail.com> <444E3033.2070503@renet.ru> <4f0b69dc0604250724r6ae229b7v6a28b9df4eedd383@mail.gmail.com> Message-ID: <444F7B74.7070102@renet.ru> Hye-Shik Chang wrote: > On 4/25/06, Vladimir 'Yu' Stepanov <vys at renet.ru> wrote: > >> Thanks for the answer, but after application of a patch on python-2.4.3 >> I could not compile it. A conclusion of a stage of compilation in the >> attached file. >> >> > > Aah. The patch is for Python in Subversion trunk. I think you can > add the following line somewhere in collectionsmodule.c to avoid > to error. > > #define Py_ssize_t int > > Thank you! > > Hye-Shik > > Thanks! I have compared realizations. The object collections.rbtree works very well. Why it still extends in the form of a patch? Speed of work can be compared to speed of work of the standard dictionary. Here comparative results of speed of work. My module concedes in speed of work of that realization without the index on a parental element and allocation is used goes two objects on each element. There is an offer on improvement of its work. To add new exceptions for indication of the fact of blocking, for example: Exception |__ StandartError |__ LockError |__ FreezeError |__ RDLockError |__ WRLockError That speed of work was is high enough for check on each kind of blocking it is necessary to check only one identifier. For example these macro: --------------------------------------------------------------------- void _lock_except(int lockcnt) { if (lockcnt > 0) { if ((lockcnt&1) == 0) PyErr_SetNone(PyExc_RDLockError); else PyErr_SetNone(PyExc_FreezeLockError); } else PyErr_SetNone(PyExc_WRLockError); } #define wrlock_SET(x,r) if ((x)->ob_lockcnt != 0) { \ _lock_except((x)->ob_lockcnt); \ return r; \ } \ (x)->ob_lockcnt = -1 #define wrlock_UNSET(x) (x)->ob_lockcnt = 0 #define rdlock_SET(x,r) if ((x)->ob_lockcnt < 0) { \ _lock_except((x)->ob_lockcnt); \ return r; \ } \ (x)->ob_lockcnt += 2 #define rdlock_UNSET(x) (x)->ob_lockcnt -= 2 #define freezelock_SET(x, r) if ((x)->ob_lockcnt < 0) { \ _lock_except((x)->ob_lockcnt); \ return r; \ } \ (x)->ob_lockcnt |= 1; --------------------------------------------------------------------- In object the counter in the form of an integer with a sign should be certain - ob_lockcnt: --------------------------------------------------------------------- struct newobject { PyObject_HEAD // or PyObject_HEAD_VAR ... int ob_lockcnt; ... }; --------------------------------------------------------------------- And by any call depending on if the method only reads data of object, the sequence rdlock_* should be used. Example: --------------------------------------------------------------------- PyObject * method_for_read_something(PyObject *ob) { rdlock_SET(ob, NULL); ... do something ... if (failure) goto failed; ... do some else ... 
rdlock_UNSET(ob); Py_INCREF(Py_None); return Py_None; failed: rdlock_UNSET(ob); return NULL; } --------------------------------------------------------------------- If the given method can make any critical changes that the sequence wrlock_* should be used. Example: --------------------------------------------------------------------- PyObject * method_for_some_change(PyObject *ob) { wrlock_SET(ob, NULL); ... do something ... if (failure) goto failed; ... do some else ... wrlock_UNSET(ob); Py_INCREF(Py_None); return Py_None; failed: wrlock_UNSET(ob); return NULL; } --------------------------------------------------------------------- If the object needs to be frozen, it is equivalent to installation of constant blocking on reading. Thus it is possible to establish blocking on already readable object. For the open iterators it is necessary to carry out opening of blocking on reading. Closing can be carried out or on end of work of iterator, or on its closing: --------------------------------------------------------------------- PyObject * map_object_keys(PyObject *ob) { rdlock_SET(ob, NULL); ... allocate iterator and initialize ... } map_object_keys_iter_destroy(PyObject *ob) { ... destroy data ... rdlock_UNSET(ob); } --------------------------------------------------------------------- From amk at amk.ca Wed Apr 26 17:23:05 2006 From: amk at amk.ca (A.M. Kuchling) Date: Wed, 26 Apr 2006 11:23:05 -0400 Subject: [Python-Dev] Reviewed patches In-Reply-To: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> References: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> Message-ID: <20060426152305.GA20535@rogue.amk.ca> On Tue, Apr 25, 2006 at 06:04:12PM -0400, Jim Jewett wrote: > So how are the committers supposed to even know that it is waiting for > assessment? The solutions that I've seen work are Could we mark the bug/patch as status 'pending'? This status exists in the SF bug tracker but no bugs or patches are assigned this status, so we're probably not using it. --amk From tomerfiliba at gmail.com Mon Apr 24 22:56:37 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Mon, 24 Apr 2006 22:56:37 +0200 Subject: [Python-Dev] suggestion: except in list comprehension Message-ID: <1d85506f0604241356g11ebf29emdb15c789089afb25@mail.gmail.com> a friend of mine suggested this, and i thought i should share it with the mailing list. many times when you would want to use list/generator comprehensions, you have to fall back to the old for/append way, because of exceptions. so it may be a good idea to allow an exception handling mechanism in these language constructs. since list comprehensions are expressions, an exceptions thrown inside one means the entire list is discarded. you may want to provide some, at least fundamental, error handling, like "if this item raises an exception, just ignore it", or "terminate the loop in that case and return whatever you got so far". 
the syntax is quite simple: "[" <expr> for <expr> in <expr> [if <cond>] [except <exception-class-or-tuple>: <action>] "]" where <action> is one of "continue" or "break": * continue would mean "ignore this item" * break would mean "return what you got so far" for example: a = ["i", "fart", "in", "your", "general", 5, "direction", "you", "silly", "english", "kniggits"] give me every word that starts with "y", ignoring all errors b = [x for x in a if x.startswith("y") except: continue] # returns ["your", "you"] or only AttributeError to be more specific b = [x for x in a if x.startswith("y") except AttributeError: continue] # returns ["your", "you"] and with break b = [x for x in a if x.startswith("y") except AttributeError: break] # returns only ["your"] -- we didn't get past the 5 in order to do something like this today, you have to resort to the old way, b = [] for x in a: try: if x.startswith("y"): b.append(x) except ...: pass which really breaks the idea behind list comprehension. so it's true that this example i provided can be done with a more complex if condition (first doing hasattr), but it's not always possible, and how would you do it if the error occurs at the first part of the expression? >>> y = [4, 3, 2, 1, 0, -1, -2, -3] >>> [1.0 / x for x in y except ZeroDivisionError: break] [0.25, 0.333, 0.5, 1.0] >>> [1.0 / x for x in y except ZeroDivisionError: continue] [0.25, 0.333, 0.5, 1.0, -1.0, -0.5, -0.333] again, in this example you can add "if x != 0", but it's not always possible to tell which element will fail. for example: filelist = ["a", "b", "c", "<\\/invalid file name:?*>", "e"] openfiles = [open(filename) for filename in filelist except IOError: continue] the example is hypothetical, but in this case you can't tell *prior to the exception* that the operation is invalid. the same goes for generator expressions, of course. -tomer -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060424/c3635932/attachment.htm From Ben.Young at risk.sungard.com Tue Apr 25 14:30:28 2006 From: Ben.Young at risk.sungard.com (Ben.Young at risk.sungard.com) Date: Tue, 25 Apr 2006 13:30:28 +0100 Subject: [Python-Dev] Visual studio 2005 express now free In-Reply-To: <2mr73lhmkc.fsf@starship.python.net> Message-ID: <OFB2E84A8E.4FE0CC3D-ON8025715B.0044A9C1-8025715B.0044B50E@risk.sungard.com> python-dev-bounces+python=theyoungfamily.co.uk at python.org wrote on 25/04/2006 13:22:27: > "Neil Hodgson" <nyamatongwe at gmail.com> writes: > > > Martin v. Löwis: > > > >> Apparently, the status of this changed right now: it seems that > >> the 2003 compiler is not available anymore; the page now says > >> that it was replaced with the 2005 compiler. > >> > >> Should we reconsider? > > > > I expect Microsoft means that Visual Studio Express will be > > available free forever, not that you will always be able to download > > Visual Studio 2005 Express. > > I don't think that's what Herb Sutter said in his ACCU keynote, which > is where I'm pretty sure Guido got his information at the start of > this thread (he was there too and the email appeared soon after). If > I remember right, he said that 2005 was free, forever, and they'd > think about later versions. I may be misremembering, and I certainly > haven't read any official stuff from Microsoft... > Hi Michael, I was there and that's how I remember it too.
Cheers, Ben > Cheers, > mwh > > -- > I also feel it essential to note, [...], that Description Logics, > non-Monotonic Logics, Default Logics and Circumscription Logics > can all collectively go suck a cow. Thank you. > -- http://advogato.org/person/Johnath/diary.html?start=4 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python- > dev/python%40theyoungfamily.co.uk > From mhearne808 at yahoo.com Tue Apr 25 21:30:56 2006 From: mhearne808 at yahoo.com (Michael Hearne) Date: Tue, 25 Apr 2006 12:30:56 -0700 (PDT) Subject: [Python-Dev] Accessing DLL objects from other DLLs Message-ID: <20060425193057.64602.qmail@web36807.mail.mud.yahoo.com> Hi - I'm looking at implementing Python as a scripting language for an existing C++ code base. However, there are many interdependencies between the C++ modules that I already have, and I'm not sure how to deal with this in a Python context. I thought this would be the appropriate place to pose my question. As background, I'm familiar with the basics of creating Python DLLs from scratch and using SWIG (my preferred approach). For example: Let's say I have a Foo class, which has it's own independent set of functionality, and so is worthy of making into it's own Python module. Let us also say that I have a Bar class, which has some independent functionality and so could be made into it's own module, but has a method which can take an object of type Foo. If I want to keep these classes as distinct modules, but retain this kind of module interdependency, how can I architect this with Python? The current architecture has a sort of COM-like messaging middleman instantiated by "Main" that allows DLLs to call functionality in classes contained in other DLLs. Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060425/3501e995/attachment.html From nnorwitz at gmail.com Wed Apr 26 08:55:27 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Tue, 25 Apr 2006 23:55:27 -0700 Subject: [Python-Dev] Addressing Outstanding PEPs Message-ID: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> The following PEPs are open according to PEP 0 and haven't seen much activity recently IIRC. I'd like everyone to take a cut and bring these up to date. For all the PEPs that aren't going anywhere, can we close them? Please update your PEP if appropriate. PEP 237 mentions changing an OverflowWarning in 2.5. Did we do that? Are any of the PEPs below in a similar boat? 302? If you want to discuss any of these, please start a new thread. 
n -- S 209 Adding Multidimensional Arrays Barrett, Oliphant S 228 Reworking Python's Numeric Model Zadka, GvR S 243 Module Repository Upload Mechanism Reifschneider S 256 Docstring Processing System Framework Goodger S 258 Docutils Design Specification Goodger S 266 Optimizing Global Variable/Attribute Access Montanaro S 267 Optimized Access to Module Namespaces Hylton S 268 Extended HTTP functionality and WebDAV Stein S 275 Switching on Multiple Values Lemburg S 280 Optimizing access to globals GvR S 286 Enhanced Argument Tuples von Loewis I 287 reStructuredText Docstring Format Goodger S 297 Support for System Upgrades Lemburg S 298 The Locked Buffer Interface Heller S 302 New Import Hooks JvR, Moore S 304 Controlling Generation of Bytecode Files Montanaro S 314 Metadata for Python Software Packages v1.1 Kuchling, Jones S 321 Date/Time Parsing and Formatting Kuchling S 323 Copyable Iterators Martelli S 331 Locale-Independent Float/String Conversions Reis S 334 Simple Coroutines via SuspendIteration Evans S 335 Overloadable Boolean Operators Ewing S 337 Logging Usage in the Standard Library Dubner S 344 Exception Chaining and Embedded Tracebacks Yee S 345 Metadata for Python Software Packages 1.2 Jones I 350 Codetags Elliott S 354 Enumerations in Python Finney From skip at pobox.com Wed Apr 26 16:18:24 2006 From: skip at pobox.com (skip at pobox.com) Date: Wed, 26 Apr 2006 09:18:24 -0500 Subject: [Python-Dev] Addressing Outstanding PEPs In-Reply-To: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> References: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> Message-ID: <17487.33072.710366.3328@montanaro.dyndns.org> Neal> S 266 Optimizing Global Variable/Attribute Access Montanaro Should be closed/rejected. I assume this concept will not be address in 2.x but resurface in a completely different form in 3.x. Neal> S 304 Controlling Generation of Bytecode Files Montanaro Probably ditto. There were some problems reported with the concept on Windows (which unfortunately I lost). I have no particular interest in this for the environments in which I work. I've asked for a couple times on c.l.py for someone who is interested in this to take it over, but nobody's ever bitten. Skip From alan.mcintyre at gmail.com Wed Apr 26 17:49:46 2006 From: alan.mcintyre at gmail.com (Alan McIntyre) Date: Wed, 26 Apr 2006 11:49:46 -0400 Subject: [Python-Dev] Reviewed patches In-Reply-To: <20060426152305.GA20535@rogue.amk.ca> References: <fb6fbf560604251504j6002ba93q7b42c9c43f784f7c@mail.gmail.com> <20060426152305.GA20535@rogue.amk.ca> Message-ID: <444F969A.4060305@gmail.com> A.M. Kuchling wrote: > On Tue, Apr 25, 2006 at 06:04:12PM -0400, Jim Jewett wrote: > >> So how are the committers supposed to even know that it is waiting for >> assessment? The solutions that I've seen work are >> > > Could we mark the bug/patch as status 'pending'? This status exists > in the SF bug tracker but no bugs or patches are assigned this status, > so we're probably not using it. > If the SF tracker help is to be believed, pending status won't stick for very long (unless somebody has changed the default timeout): "You can set the status to 'Pending' if you are waiting for a response from the tracker item author. When the author responds the status is automatically reset to that of 'Open'. Otherwise, if the author doesn't respond with an admin-defined amount of time (default is 14 days) then the item is given a status of 'Deleted'." 
From martin at v.loewis.de Wed Apr 26 18:16:43 2006 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 26 Apr 2006 18:16:43 +0200 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <bbaeab100604251912n5f7a06dcgd2e195283c761206@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> <442F6F50.2030002@v.loewis.de> <bbaeab100604251912n5f7a06dcgd2e195283c761206@mail.gmail.com> Message-ID: <444F9CEB.9030303@v.loewis.de> Brett Cannon wrote: > I created patch 1474907 with a fix for it. Checks if %zd works for > size_t and if so sets PY_FORMAT_SIZE_T to "z", otherwise just doesn't > set the macro def. > > Assigned to Martin to make sure I didn't foul it up, but pretty much > anyone could probably double-check it. Unfortunately, SF stopped sending emails when you get assigned an issue, so I didn't receive any message when you created that patch. I now reviewed it, and think the test might not fail properly on systems where %zd isn't supported. Regards, Martin From martin at v.loewis.de Wed Apr 26 18:27:35 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 26 Apr 2006 18:27:35 +0200 Subject: [Python-Dev] PEP 304 (Was: Addressing Outstanding PEPs) In-Reply-To: <17487.33072.710366.3328@montanaro.dyndns.org> References: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> <17487.33072.710366.3328@montanaro.dyndns.org> Message-ID: <444F9F77.4070901@v.loewis.de> skip at pobox.com wrote: [...] > Should be closed/rejected. [...] > > Neal> S 304 Controlling Generation of Bytecode Files Montanaro > > Probably ditto. Rejected would be a wrong description, IMO; "withdrawn" describes it better. It's not that the feature is undesirable or the specific approach at solving the problem - just nobody is interested to work on it. So future contributors shouldn't get the impression that this was discussed and rejected, but that it was discussed and abandoned. Regards, Martin From dh at triple-media.com Wed Apr 26 18:28:36 2006 From: dh at triple-media.com (Dennis Heuer) Date: Wed, 26 Apr 2006 18:28:36 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <mailman.2570.1146065722.27773.python-dev@python.org> References: <mailman.2570.1146065722.27773.python-dev@python.org> Message-ID: <20060426182836.400f2917.dh@triple-media.com> Have never seen such an answer before. Please excuse if I read it wrong. The method would just return the new values, for shure. That is what it shall do. The interpreter calls that method for receiving the new (to be used) values to pass them on to the real target (the called attribute). The method is a kind of filter to be used intermediately. Dennis On Wed, 26 Apr 2006 17:35:22 +0200 python-dev-bounces at python.org wrote: > The results of your email command are provided below. Attached is your > original message. > > > - Unprocessed: > quite now: > The new-style classes support the inheritance of basic types, like int, > long, etc. Inheriting them is nice but not always efficient because one > can not *control* the inherited methods but only overwrite them. For > example, I want to implement a bitarray type. 
Because the long type is > unlimited and implemented as an integer array (as far as I know), it > seems to be a good choice for basing a bitarray type on it. Boolean > arithmetics on long integers should work quite fast, I expect. However, > the bitarray class should accept different values than the long type > accepts, like "101010". This is not only true for initialization but > also for comparisons, etc.: > if x < "101010": print True. > Here now the trouble appears. There seems to be no way to catch an > input value before it reaches the attribute, to parse and convert it > into the appropriate integer, and to send this integer to the attribute > instead. One actually has to overwrite all the inherited methods to > call the validation method from inside each of them individually. This > renders the inheritance of the long type useless. One could just write > a new class instead. This is not optimal. > Please think about implementing a method that catches all input values. > > - Ignored: > This method would just return the new values. > > Thanks, > Dennis > > - Done. > > From pje at telecommunity.com Wed Apr 26 18:49:13 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 12:49:13 -0400 Subject: [Python-Dev] Addressing Outstanding PEPs In-Reply-To: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> Message-ID: <5.1.1.6.0.20060426124138.01f12318@mail.telecommunity.com> At 11:55 PM 4/25/2006 -0700, Neal Norwitz wrote: > S 243 Module Repository Upload Mechanism Reifschneider This one needs to be withdrawn or updated - it totally doesn't match the implementation in Python 2.5. > S 302 New Import Hooks JvR, Moore Unless somebody loans me the time machine, I won't be able to finish all of PEP 302's remaining items before the next alpha. Some, like the idea of moving the built-in logic to sys.meta_path, can't be done until Py3K without having adverse impact on deployed software. > S 334 Simple Coroutines via SuspendIteration Evans IIRC, this one's use cases can be met by using a coroutine trampoline as described in PEP 342. > S 345 Metadata for Python Software Packages 1.2 Jones This one should not be accepted in its current state; too much specification of syntax, too little of semantics. And the specifications that *are* there, are often wrong. For example, the strict version syntax wouldn't even support *Python's own* version numbering scheme, let alone the many package versioning schemes in actual use. From martin at v.loewis.de Wed Apr 26 18:50:22 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 26 Apr 2006 18:50:22 +0200 Subject: [Python-Dev] Accessing DLL objects from other DLLs In-Reply-To: <20060425193057.64602.qmail@web36807.mail.mud.yahoo.com> References: <20060425193057.64602.qmail@web36807.mail.mud.yahoo.com> Message-ID: <444FA4CE.2040305@v.loewis.de> Michael Hearne wrote: > If I want to keep these classes as distinct modules, but retain this > kind of module interdependency, how can I architect this with Python? Please understand that python-dev is for discussions about the development *of* Python, not for the development *with* Python. Use news:comp.lang.python (mailto:python-list at python.org) for the latter. 
Regards, Martin From theller at python.net Wed Apr 26 19:09:08 2006 From: theller at python.net (Thomas Heller) Date: Wed, 26 Apr 2006 19:09:08 +0200 Subject: [Python-Dev] Addressing Outstanding PEPs In-Reply-To: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> References: <ee2a432c0604252355hc7bf87bne38e0177f91a16f@mail.gmail.com> Message-ID: <e2o9fk$fdg$1@sea.gmane.org> Neal Norwitz wrote: > The following PEPs are open according to PEP 0 and haven't seen much > activity recently IIRC. I'd like everyone to take a cut and bring > these up to date. For all the PEPs that aren't going anywhere, can we > close them? Please update your PEP if appropriate. > > S 298 The Locked Buffer Interface Heller I withdraw this. Thomas From guido at python.org Wed Apr 26 19:16:59 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 10:16:59 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages Message-ID: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> (Context: There's a large crowd with pitchforks and other sharp pointy farm implements just outside the door of my office at Google. They are making an unbelievable racket. It appears they are Google engineers who have been bitten by a misfeature of Python, and they won't let me go home before I have posted this message.) The requirement that a directory must contain an __init__.py file before it is considered a valid package has always been controversial. It's designed to prevent the existence of a directory with a common name like "time" or "string" from preventing fundamental imports from working. But the feature often trips newbie Python programmers (of which there are a lot at Google, at our current growth rate we're probably training more new Python programmers than any other organization worldwide :-). One particularly egregious problem is that *subpackages* are subject to the same rule. It so happens that there is essentially only one top-level package in the Google code base, and it already has an __init__.py file. But developers create new subpackages at a frightening rate, and forgetting to do "touch __init__.py" has caused many hours of lost work, not to mention injuries due to heads banging against walls. So I have a very simple proposal: keep the __init__.py requirement for top-level packages, but drop it for subpackages. This should be a small change. I'm hesitant to propose *anything* new for Python 2.5, so I'm proposing it for 2.6; if Neal and Anthony think this would be okay to add to 2.5, they can do so. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From dh at triple-media.com Wed Apr 26 19:27:26 2006 From: dh at triple-media.com (Dennis Heuer) Date: Wed, 26 Apr 2006 19:27:26 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060426182836.400f2917.dh@triple-media.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> Message-ID: <20060426192726.b4b654c1.dh@triple-media.com> To state the real problem more clearly: up to now, inheriting classes is all about processing (the output channel) but not about retrieving (the input channel). However, while a new type can build on an existing type if it just needs to provide a few more methods, it cannot build on an existing type if it needs to accept a few other input formats--even if they all convert to the inherited type easily. The input channel is completely forgotten.
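To make the request concrete, here is roughly what has to be written today (an editor's illustration, not Dennis's code): the construction path can be intercepted in __new__, but every inherited operation that should accept the extra input format, such as comparison, has to be wrapped by hand.

class bitarray(long):
    # accept binary strings such as "101010" wherever a value is expected
    def __new__(cls, value):
        if isinstance(value, basestring):
            value = long(value, 2)
        return long.__new__(cls, value)

    def __cmp__(self, other):
        # each comparison has to be wrapped individually so that
        # bitarray("1111") < "101010" converts the right-hand side first
        return long.__cmp__(self, bitarray(other))

With that in place, bitarray("1111") < "101010" compares 15 against 42 as intended, but every further inherited method needs the same treatment, which is exactly the duplication being complained about.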
Dennis From nd at perlig.de Wed Apr 26 19:25:12 2006 From: nd at perlig.de (=?iso-8859-1?q?Andr=E9_Malo?=) Date: Wed, 26 Apr 2006 19:25:12 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <200604261925.12302@news.perlig.de> * Guido van Rossum wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. Not that it would count in any way, but I'd prefer to keep it. How would I mark a subdirectory as "not-a-package" otherwise? echo "raise ImportError" >__init__.py ? nd -- Das, was ich nicht kenne, spielt st?ckzahlm??ig *keine* Rolle. -- Helmut Schellong in dclc From benji at benjiyork.com Wed Apr 26 19:33:24 2006 From: benji at benjiyork.com (Benji York) Date: Wed, 26 Apr 2006 13:33:24 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <444FAEE4.7090306@benjiyork.com> Guido van Rossum wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. So this would mean that current non-package subdirectories in a package (that contain things like data files or configuration info) would become packages with no modules in them? sharpening-my-farm-implements-ly y'rs, -- Benji York From guido at python.org Wed Apr 26 19:44:22 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 10:44:22 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <200604261925.12302@news.perlig.de> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <200604261925.12302@news.perlig.de> Message-ID: <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> On 4/26/06, Andr? Malo <nd at perlig.de> wrote: > * Guido van Rossum wrote: > > > So I have a very simple proposal: keep the __init__.py requirement for > > top-level pacakages, but drop it for subpackages. This should be a > > small change. I'm hesitant to propose *anything* new for Python 2.5, > > so I'm proposing it for 2.6; if Neal and Anthony think this would be > > okay to add to 2.5, they can do so. > > Not that it would count in any way, but I'd prefer to keep it. How would I > mark a subdirectory as "not-a-package" otherwise? What's the use case for that? Have you run into this requirement? And even if you did, was there a requirement that the subdirectory's name be the same as a standard library module? If the subdirectory's name is not constrained, the easiest way to mark it as a non-package is to put a hyphen or dot in its name; if you can't do that, at least name it something that you don't need to import. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Wed Apr 26 19:46:21 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 10:46:21 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <444FAEE4.7090306@benjiyork.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <444FAEE4.7090306@benjiyork.com> Message-ID: <ca471dc20604261046m6b9760d3jb2a00831ee8a6875@mail.gmail.com> On 4/26/06, Benji York <benji at benjiyork.com> wrote: > Guido van Rossum wrote: > > So I have a very simple proposal: keep the __init__.py requirement for > > top-level pacakages, but drop it for subpackages. > > So this would mean that current non-package subdirectories in a package > (that contain things like data files or configuration info) would become > packages with no modules in them? Yup. Of course unless you try to import from them that wouldn't particularly hurt, except if the subdir name happens to be the same as a module name. Note that absolute import (which will be turned on for all in 2.6) will solve the ambiguity; the only ambiguity left would be if you had a module foo.py and also a non-package subdirectory foo. But that's just asking for trouble. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Wed Apr 26 19:47:55 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 10:47:55 -0700 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060426192726.b4b654c1.dh@triple-media.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> Message-ID: <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> I doubt you'll get many answers. I have no idea what you're talking about. How about an example or two? On 4/26/06, Dennis Heuer <dh at triple-media.com> wrote: > To bring the real problem more upfront. Up to now, inheriting classes > is all about processing (the output channel) but not about retrieving > (the input channel). However, though a new type can advance from an > existing type if it just needs to provide some few more methods, it can > not advance from an existing type if it needs to support some few other > input formats--even if they all convert to the inherited type easily. > The input channel is completely forgotten. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From jcarlson at uci.edu Wed Apr 26 19:51:33 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 26 Apr 2006 10:51:33 -0700 Subject: [Python-Dev] suggestion: except in list comprehension In-Reply-To: <1d85506f0604241356g11ebf29emdb15c789089afb25@mail.gmail.com> References: <1d85506f0604241356g11ebf29emdb15c789089afb25@mail.gmail.com> Message-ID: <20060426103804.66F1.JCARLSON@uci.edu> "tomer filiba" <tomerfiliba at gmail.com> wrote: > "[" <expr> for <expr> in <expr> [if <cond>] [except > <exception-class-or-tuple>: <action>] "]" Note that of the continue cases you offer, all of them are merely simple if condition (though the file example could use a better test than os.path.isfile). 
[x for x in a if x.startswith("y") except AttributeError: continue] [x for x in a if hasattr(x, 'startswith') and x.startswith("y")] [1.0 / x for x in y except ZeroDivisionError: continue] [1.0 / x for x in y if x != 0] [open(filename) for filename in filelist except IOError: continue] [open(filename) for filename in filelist if os.path.isfile(filename)] The break case can be implemented with particular kind of instance object, though doesn't have the short-circuiting behavior... class StopWhenFalse: def __init__(self): self.t = 1 def __call__(self, t): if not t: self.t = 0 return 0 return self.t z = StopWhenFalse() Assuming you create a new instance z of StopWhenFalse before doing the list comprehensions... [x for x in a if z(hasattr(x, 'startswith') and x.startswith("y"))] [1.0 / x for x in y if z(x != 0)] [open(filename) for filename in filelist if z(os.path.isfile(filename))] If you couldn't guess; -1, you can get equivalent behavior without complicating the generator expression/list comprension syntax. - Josiah From tim.peters at gmail.com Wed Apr 26 19:49:39 2006 From: tim.peters at gmail.com (Tim Peters) Date: Wed, 26 Apr 2006 13:49:39 -0400 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <444F9CEB.9030303@v.loewis.de> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> <442F6F50.2030002@v.loewis.de> <bbaeab100604251912n5f7a06dcgd2e195283c761206@mail.gmail.com> <444F9CEB.9030303@v.loewis.de> Message-ID: <1f7befae0604261049s240b9d02te72061f4386c0410@mail.gmail.com> [Brett Cannon] >> I created patch 1474907 with a fix for it. Checks if %zd works for >> size_t and if so sets PY_FORMAT_SIZE_T to "z", otherwise just doesn't >> set the macro def. >> >> Assigned to Martin to make sure I didn't foul it up, but pretty much >> anyone could probably double-check it. [Martin v. L?wis] > Unfortunately, SF stopped sending emails when you get assigned an issue, > so I didn't receive any message when you created that patch. > > I now reviewed it, and think the test might not fail properly on systems > where %zd isn't supported. I agree with Martin's patch comments. The C standards don't require that printf fail if an unrecognized format gimmick is used (behavior is explicitly "undefined" then). For example, """ #include <stdio.h> #include <stdlib.h> int main() { int i; i = printf("%zd\n", (size_t)0); printf("%d\n", i); return 0; } """ prints "zd\n3\n" under all of MSVC 6.0, MSVC 7.1, and gcc (Cygwin) on WinXP. Using sprintf instead and checking the string produced seems much more likely to work. After the trunk freeze melts, I suggest just _trying_ stuff. The buildbot system covers a bunch of platforms now, and when trying to out-hack ill-defined C stuff "try it and see" is easier than thinking <0.5 wink>. 
From barry at python.org Wed Apr 26 19:50:20 2006 From: barry at python.org (Barry Warsaw) Date: Wed, 26 Apr 2006 13:50:20 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <1146073820.10733.70.camel@resist.wooz.org> On Wed, 2006-04-26 at 10:16 -0700, Guido van Rossum wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. But if it's there, then nothing changes, right? IOW, if we want to expose names in the subpackage's namespace, we can still do it in the subpackage's __init__.py. It's just that otherwise empty subpackage __init__.py files won't be required. What would the following print? import toplevel.sub1.sub2 print toplevel.sub1.sub2.__file__ If it's "<path>/sub1/sub2" then that kind of breaks a common idiom of using os.path.dirname() on a module's __file__ to find co-located resources. Or at least, you have to know whether to dirname its __file__ or not (which might not be too bad, since you'd probably know how that package dir is organized anyway). I dunno. Occasionally it trips me up too, but it's such an obvious and easy fix that it's never bothered me enough to care. I can't think of an example, but I suppose it's still possible that lifting this requirement could cause some in-package located data directories to be mistakenly importable. I'd be somewhat more worried about frameworks that dynamically import things having to be more cautious about cleansing their __import__() arguments now. I'd be -1 but the remote possibility of you being burned at the stake by your fellow Googlers makes me -0 :). -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060426/f8ce6e56/attachment.pgp From guido at python.org Wed Apr 26 19:59:03 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 10:59:03 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <1146073820.10733.70.camel@resist.wooz.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <1146073820.10733.70.camel@resist.wooz.org> Message-ID: <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> On 4/26/06, Barry Warsaw <barry at python.org> wrote: > On Wed, 2006-04-26 at 10:16 -0700, Guido van Rossum wrote: > > > So I have a very simple proposal: keep the __init__.py requirement for > > top-level pacakages, but drop it for subpackages. This should be a > > small change. I'm hesitant to propose *anything* new for Python 2.5, > > so I'm proposing it for 2.6; if Neal and Anthony think this would be > > okay to add to 2.5, they can do so. > > But if it's there, then nothing changes, right? IOW, if we want to > expose names in the subpackage's namespace, we can still do it in the > subpackage's __init__.py. It's just that otherwise empty subpackage > __init__.py files won't be required. Correct. This is an important clarification. > What would the following print? 
> > import toplevel.sub1.sub2 > print toplevel.sub1.sub2.__file__ > > If it's "<path>/sub1/sub2" then that kind of breaks a common idiom of > using os.path.dirname() on a module's __file__ to find co-located > resources. Or at least, you have to know whether to dirname its > __file__ or not (which might not be too bad, since you'd probably know > how that package dir is organized anyway). Oh, cool gray area. I propose that if there's no __init__.py it prints '<path>/sub1/sun2/' i.e. with a trailing slash; that causes dirname to just strip the '/'. (It would be a backslash on Windows of course). > I dunno. Occasionally it trips me up too, but it's such an obvious and > easy fix that it's never bothered me enough to care. But you're not a newbie. for a newbie who's just learned about packages, is familiar with Java, and missed one line in the docs, it's an easy mistake to make and a tough one to debug. > I can't think of > an example, but I suppose it's still possible that lifting this > requirement could cause some in-package located data directories to be > mistakenly importable. I'd be somewhat more worried about frameworks > that dynamically import things having to be more cautious about > cleansing their __import__() arguments now. But (assuming 2.6 and absolute import) what would be the danger of importing such a package? Presumably it contains no *.py or *.so files so there's no code there; but even so you'd have to go out of your way to import it (since if the directory exists it can't also be a subpackage or submodule name that's in actual use). > I'd be -1 but the remote possibility of you being burned at the stake by > your fellow Googlers makes me -0 :). I'm not sure I understand what your worry is. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From nd at perlig.de Wed Apr 26 20:04:33 2006 From: nd at perlig.de (=?iso-8859-1?q?Andr=E9_Malo?=) Date: Wed, 26 Apr 2006 20:04:33 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <200604261925.12302@news.perlig.de> <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> Message-ID: <200604262004.34103@news.perlig.de> * Guido van Rossum wrote: > On 4/26/06, Andr? Malo <nd at perlig.de> wrote: > > * Guido van Rossum wrote: > > > So I have a very simple proposal: keep the __init__.py requirement > > > for top-level pacakages, but drop it for subpackages. This should be > > > a small change. I'm hesitant to propose *anything* new for Python > > > 2.5, so I'm proposing it for 2.6; if Neal and Anthony think this > > > would be okay to add to 2.5, they can do so. > > > > Not that it would count in any way, but I'd prefer to keep it. How > > would I mark a subdirectory as "not-a-package" otherwise? > > What's the use case for that? Have you run into this requirement? And > even if you did, was there a requirement that the subdirectory's name > be the same as a standard library module? If the subdirectory's name > is not constrained, the easiest way to mark it as a non-package is to > put a hyphen or dot in its name; if you can't do that, at least name > it something that you don't need to import. Actually I have no problems with the change from inside python, but from the POV of tools, which walk through the directories, collecting/separating python packages and/or supplemental data directories. It's an explicit vs. 
implicit issue, where implicit would mean "kind of heuristics" from now on. IMHO it's going to break existing stuff [1] and should at least not be done in such a rush. nd [1] Well, it does break some of mine ;-) -- Da f?llt mir ein, wieso gibt es eigentlich in Unicode kein "i" mit einem Herzchen als T?pfelchen? Das w?r sooo s??ss! -- Bj?rn H?hrmann in darw From pje at telecommunity.com Wed Apr 26 20:07:30 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 14:07:30 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> At 10:16 AM 4/26/2006 -0700, Guido van Rossum wrote: >So I have a very simple proposal: keep the __init__.py requirement for >top-level pacakages, but drop it for subpackages. Note that many tools exist which have grown to rely on the presence of __init__ modules. Also, although your proposal would allow imports to work reasonably well, tools that are actively looking for packages would need to have some way to distinguish package directories from others. My counter-proposal: to be considered a package, a directory must contain at least one module (which of course can be __init__). This allows the "is it a package?" question to be answered with only one directory read, as is the case now. Think of it also as a nudge in favor of "flat is better than nested". This tweak would also make it usable for top-level directories, since the mere presence of a 'time' directory wouldn't get in the way of anything. The thing more likely to have potential for problems is that many Python projects have a "test" directory that isn't intended to be a package, and thus may interfere with imports from the stdlib 'test' package. Whether this is really a problem or not, I don't know. But, we could treat packages without __init__ as namespace packages. That is, set their __path__ to encompass similarly-named directories already on sys.path, so that the init-less package doesn't interfere with other packages that have the same name. This would require a bit of expansion to PEP 302, but probably not much. Most of the rest is existing technology, and we've already begun migrating stdlib modules away from doing their own hunting for __init__ and other files, towards using the pkgutil API. By the way, one small precedent for packages without __init__: setuptools generates such packages using .pth files when a package is split between different distributions but are being installed by a system packaging tool. In such cases, *both* parts of the package can't include an __init__, because the packaging tool (e.g. RPM) is going to complain that the shared file is a conflict. So setuptools generates a .pth file that creates a module object with the right name and initializes its __path__ to point to the __init__-less directory. >This should be a small change. Famous last words. :) There's a bunch of tools that it's not going to work properly with, and not just in today's stdlib. (Think documentation tools, distutils extensions, IDEs...) Are you sure you wouldn't rather just write a GoogleImporter class to fix this problem? Append it to sys.path_hooks, clear sys.path_importer_cache, and you're all set. For that matter, if you have only one top-level package, put the class and the installation code in that top-level __init__, and you're set to go. 
And that approach will work with Python back to version 2.3; no waiting for an upgrade (unless Google is still using 2.2, of course). Let's see, the code would look something like: class GoogleImporter: def __init__(self, path): if not os.path.isdir(path): raise ImportError("Not for me") self.path = os.path.realpath(path) def find_module(self, fullname, path=None): # Note: we ignore 'path' argument since it is only used via meta_path subname = fullname.split(".")[-1] if os.path.isdir(os.path.join(self.path, subname)): return self path = [self.path] try: file, filename, etc = imp.find_module(subname, path) except ImportError: return None return ImpLoader(fullname, file, filename, etc) def load_module(self, fullname): import sys, new subname = fullname.split(".")[-1] path = os.path.join(self.path, subname) module = sys.modules.setdefault(fullname, new.module(fullname)) module.__dict__.setdefault('__path__',[]).append(path) return module class ImpLoader: def __init__(self, fullname, file, filename, etc): self.file = file self.filename = filename self.fullname = fullname self.etc = etc def load_module(self, fullname): try: mod = imp.load_module(fullname, self.file, self.filename, self.etc) finally: if self.file: self.file.close() return mod import sys sys.path_hooks.append(GoogleImporter) sys.path_importer_cache.clear() From pje at telecommunity.com Wed Apr 26 20:11:40 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 14:11:40 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060426141003.01e6bfa0@mail.telecommunity.com> At 02:07 PM 4/26/2006 -0400, Phillip J. Eby wrote: > def find_module(self, fullname, path=None): > # Note: we ignore 'path' argument since it is only used via >meta_path > subname = fullname.split(".")[-1] > if os.path.isdir(os.path.join(self.path, subname)): > return self > path = [self.path] > try: > file, filename, etc = imp.find_module(subname, path) > except ImportError: > return None > return ImpLoader(fullname, file, filename, etc) Feh. The above won't properly handle the case where there *is* an __init__ module. Trying again: def find_module(self, fullname, path=None): subname = fullname.split(".")[-1] path = [self.path] try: file, filename, etc = imp.find_module(subname, path) except ImportError: if os.path.isdir(os.path.join(self.path, subname)): return self else: return None return ImpLoader(fullname, file, filename, etc) There, that should only fall back to __init__-less handling if there's no foo.py or foo/__init__.py present. From jcarlson at uci.edu Wed Apr 26 20:18:18 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 26 Apr 2006 11:18:18 -0700 Subject: [Python-Dev] Google Summer of Code proposal: New class for work with binary trees AVL and RB as with the standard dictionary. In-Reply-To: <444F1E36.30302@renet.ru> References: <20060424114024.66D0.JCARLSON@uci.edu> <444F1E36.30302@renet.ru> Message-ID: <20060426094148.66EE.JCARLSON@uci.edu> "Vladimir 'Yu' Stepanov" <vys at renet.ru> wrote: > Josiah Carlson wrote: > > There exists various C and Python implementations of both AVL and > > Red-Black trees. For users of Python who want to use AVL and/or > > Red-Black trees, I would urge them to use the Python implementations. 
> > In the case of *needing* the speed of a C extension, there already > > exists a CPython extension module for AVL trees: > > http://www.python.org/pypi/pyavl/1.1 > > > > I would suggest you look through some suggested SoC projects in the > > wiki: > > http://wiki.python.org/moin/SummerOfCode > > > > - Josiah > > > > > Thanks for the answer! > > I already saw pyavl-1.1. But for this reason I would like to see the module > in a standard package python. Functionality for pyavl and dict to compare > difficultly. Functionality of my module will differ from functionality dict > in the best party. I have lead some tests on for work with different types > both for a package pyavl-1.1, and for the prototype of own module. The > script > of check is resulted in attached a file avl-timeit.py In files > timeit-result-*-*.txt results of this test. The first in the name of a file > means quantity of the added elements, the second - argument of a method > timeit. There it is visible, that in spite of the fact that the module > xtree > is more combined in comparison with pyavl the module (for everyone again > inserted pair [the key, value], is created two elements: python object - > pair, > and an internal element of a tree), even in this case it works a little bit > more quickly. Besides the module pyavl is unstable for work in multithread > appendices (look the attached file module-avl-is-thread-unsafe.py). I'm not concerned about the speed of the external AVL module, and I'm not concerned about the speed of trees in Python; so far people have found that dictionaries are generally sufficient. More specifically, while new lambda syntaxes are presented almost weekly, I haven't heard anyone complain about Python's lack of a tree module in months. As a result, I don't find the possible addition of any tree structure to the collections module as being a generally compelling addition. Again, I believe that the existance of 3rd party extension modules which implement AVL and red-black trees, regardless of their lack of thread safety, slower than your module by a constant, or lack of particular functionality to be basically immaterial to this discussion. In my mind, there are only two relevant items to this discussion: 1. Is having a tree implementation (AVL or Red-Black) important, and if so, is it important enough to include in the collections module? 2. Is a tree implementation important enough for google to pay for its inclusion, given that there exists pyavl and a collectionsmodule.c patch for a red-black tree implementation? Then again, you already have your own implementation of a tree module, and it seems as though you would like to be paid for this already-done work. I don't know how google feels about such things, but you should remember to include this information in your application. > I think, that creation of this type (together with type of pair), will make > programming more convenient since sorting by function sort will be required > less often. How often are you sorting data? I've found that very few of my operations involve sorting data of any kind, and even if I did have need of sorting, Python's sorting algorithm is quite fast, likely faster by nearly a factor of 2 (in the number of comparisons) than the on-line construction of an AVL or Red-Black tree. > I can probably borrow in this module beyond the framework of the project > google. The demand of such type of data is interesting. 
Because of > necessity > of processing `gcmodule.c' and `obmalloc.c' this module cannot be realized > as the external module. It seems to me (after checking collectionsmodule.c and a few others) that proper C modules only seem to import Python.h and maybe structmember.h . If you are manually including gcmodule.c and obmalloc.c, you may be doing something wrong; I'll leave it to others to further comment on this aspect. - Josiah From thomas at python.org Wed Apr 26 20:22:01 2006 From: thomas at python.org (Thomas Wouters) Date: Wed, 26 Apr 2006 20:22:01 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261046m6b9760d3jb2a00831ee8a6875@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <444FAEE4.7090306@benjiyork.com> <ca471dc20604261046m6b9760d3jb2a00831ee8a6875@mail.gmail.com> Message-ID: <9e804ac0604261122ka03b645n60d73d7c92882ad4@mail.gmail.com> On 4/26/06, Guido van Rossum <guido at python.org> wrote: > > On 4/26/06, Benji York <benji at benjiyork.com> wrote: > > Guido van Rossum wrote: > > > So I have a very simple proposal: keep the __init__.py requirement for > > > top-level pacakages, but drop it for subpackages. > I don't particularly like it. You still need __init__.py's for 'import *' to work (not that I like or use 'import *' :). The first question that pops into my mind when I think file-less modules is "where does the package-module come from". That question is a lot easier to answer (not to mention explain) when all packages have an __init__.py. It also adds to Python's consistency (which means people learn something from it that they can apply to other things later; in that case, removing it just hampers their growth.) And besides, it's just not that big a deal. I don't feel strongly enough about it to object, though. However, I would suggest adding a warning for existing, __init__.py-less directories that would-have-been imported in 2.5. (There's an alpha3 scheduled, so it doesn't have to go in alpha2 tonight, and it could probably be last-minuted into beta1 too.) That should fix both Google's problems and that of everyone having existing non-package subdirs :-) Then, if it really matters, we can change the import in 2.6. Note that absolute import (which will be turned on for all in 2.6) 2.7, see the PEP. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060426/9d613135/attachment.htm From goodger at python.org Wed Apr 26 20:30:39 2006 From: goodger at python.org (David Goodger) Date: Wed, 26 Apr 2006 14:30:39 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages Message-ID: <4335d2c40604261130u23572ae7jbb7cb3a7bf86434d@mail.gmail.com> Sounds a bit like the tail wagging the dog. I thought the Google geeks were a smart bunch. ISTM that something like Phillip Eby's code would be the most expedient solution. I would add one extension: if a package directory without an __init__.py file *is* encountered, an empty __init__.py file should automatically be created (and perhaps even "svn add" or equivalent called), and the code should loudly complain "Packages need __init__.py files, noob!" Add sound and light effects for extra credit. 
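Something along these lines, say (a rough, hypothetical sketch of such a helper; the complaint text and the optional "svn add" are left to taste):

import os
import sys

def add_missing_inits(root):
    # Walk a source tree; any directory that contains .py files but
    # no __init__.py gets an empty one created, plus a loud complaint.
    for dirpath, dirnames, filenames in os.walk(root):
        has_py = [name for name in filenames if name.endswith('.py')]
        if has_py and '__init__.py' not in filenames:
            init = os.path.join(dirpath, '__init__.py')
            open(init, 'w').close()
            # optionally "svn add" (or equivalent) the new file here
            print >> sys.stderr, \
                "Packages need __init__.py files, noob! (created %s)" % init

if __name__ == '__main__':
    add_missing_inits(sys.argv[1])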
-- David Goodger <http://python.net/~goodger> From guido at python.org Wed Apr 26 20:36:13 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 11:36:13 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <200604262004.34103@news.perlig.de> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <200604261925.12302@news.perlig.de> <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> <200604262004.34103@news.perlig.de> Message-ID: <ca471dc20604261136udf80590we7fd0cb23bb7437c@mail.gmail.com> On 4/26/06, Andr? Malo <nd at perlig.de> wrote: > * Guido van Rossum wrote: > > > On 4/26/06, Andr? Malo <nd at perlig.de> wrote: > > > * Guido van Rossum wrote: > > > > So I have a very simple proposal: keep the __init__.py requirement > > > > for top-level pacakages, but drop it for subpackages. This should be > > > > a small change. I'm hesitant to propose *anything* new for Python > > > > 2.5, so I'm proposing it for 2.6; if Neal and Anthony think this > > > > would be okay to add to 2.5, they can do so. > > > > > > Not that it would count in any way, but I'd prefer to keep it. How > > > would I mark a subdirectory as "not-a-package" otherwise? > > > > What's the use case for that? Have you run into this requirement? And > > even if you did, was there a requirement that the subdirectory's name > > be the same as a standard library module? If the subdirectory's name > > is not constrained, the easiest way to mark it as a non-package is to > > put a hyphen or dot in its name; if you can't do that, at least name > > it something that you don't need to import. > > Actually I have no problems with the change from inside python, but from the > POV of tools, which walk through the directories, collecting/separating > python packages and/or supplemental data directories. It's an explicit vs. > implicit issue, where implicit would mean "kind of heuristics" from now on. > IMHO it's going to break existing stuff [1] and should at least not be done > in such a rush. > > nd > > [1] Well, it does break some of mine ;-) Can you elaborate? You could always keep the __init__.py files, you know... -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Wed Apr 26 20:50:15 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 11:50:15 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> Message-ID: <ca471dc20604261150p4a331bday8ee72ca7cde801c7@mail.gmail.com> On 4/26/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 10:16 AM 4/26/2006 -0700, Guido van Rossum wrote: > >So I have a very simple proposal: keep the __init__.py requirement for > >top-level pacakages, but drop it for subpackages. > > Note that many tools exist which have grown to rely on the presence of > __init__ modules. Also, although your proposal would allow imports to work > reasonably well, tools that are actively looking for packages would need to > have some way to distinguish package directories from others. > > My counter-proposal: to be considered a package, a directory must contain > at least one module (which of course can be __init__). This allows the "is > it a package?" question to be answered with only one directory read, as is > the case now. Think of it also as a nudge in favor of "flat is better than > nested". 
I'm not sure what you mean by "one directory read". You'd have to list the entire directory, which may require reading more than one block if the directory is large. But I'd be happy to define it like this from the POV of tools that want to know about sub-packages; my users complain because they have put .py files in a directory that they consider a sub-package so it would work fine for them. Python itself might attempt to consider the directory as a package and raise ImportError because the requested sub-module isn't found; the creation of a dummy entry in sys.modules in that case doesn't bother me. > This tweak would also make it usable for top-level directories, since the > mere presence of a 'time' directory wouldn't get in the way of anything. Actually, no; the case I remember was a directory full of Python code (all experiments by the user related to a particular topic -- I believe it was "string"). > The thing more likely to have potential for problems is that many Python > projects have a "test" directory that isn't intended to be a package, and > thus may interfere with imports from the stdlib 'test' package. Whether > this is really a problem or not, I don't know. "test" is a top-level package. I'm not proposing to change the rules for toplevel packages. Now you have the reason why. (And the new "absolute import" feature in 2.6 will prevent aliasing problems between subdirectories and top-level modules.) > But, we could treat packages without __init__ as namespace packages. That > is, set their __path__ to encompass similarly-named directories already on > sys.path, so that the init-less package doesn't interfere with other > packages that have the same name. Let's stick to the one feature I'm actually proposing please. > This would require a bit of expansion to PEP 302, but probably not > much. Most of the rest is existing technology, and we've already begun > migrating stdlib modules away from doing their own hunting for __init__ and > other files, towards using the pkgutil API. > > By the way, one small precedent for packages without __init__: setuptools > generates such packages using .pth files when a package is split between > different distributions but are being installed by a system packaging > tool. In such cases, *both* parts of the package can't include an > __init__, because the packaging tool (e.g. RPM) is going to complain that > the shared file is a conflict. So setuptools generates a .pth file that > creates a module object with the right name and initializes its __path__ to > point to the __init__-less directory. > > > >This should be a small change. > > Famous last words. :) There's a bunch of tools that it's not going to > work properly with, and not just in today's stdlib. (Think documentation > tools, distutils extensions, IDEs...) Are you worried about the tools not finding directories that are now subpackages? Then fix the tools. Or are you worried about flagging subdirectories as (empty) packages since they exist, have a valid name (no hyphens, dots etc.) and contain no modules? I'm not sure I would call that failing. I can't see how a tool would crash or produce incorrect results with this change, *unless* you consider it incorrect to list a data directory as an empty package. To me, that's an advantage. > Are you sure you wouldn't rather just write a GoogleImporter class to fix > this problem? No, because that would require more setup code with a requirement to properly enable it, etc., etc., more failure modes, etc., etc. 
> Append it to sys.path_hooks, clear sys.path_importer_cache, > and you're all set. For that matter, if you have only one top-level > package, put the class and the installation code in that top-level > __init__, and you're set to go. I wish it were that easy. If there was such an easy solution, there wouldn't be pitchforks involved. I can't go into the details, but that just wouldn't work; and the problem happens most frequently to people who are already overloaded with learning new stuff. This is just one more bit of insanity they have to deal with. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mal at egenix.com Wed Apr 26 20:59:07 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 26 Apr 2006 20:59:07 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <1146073820.10733.70.camel@resist.wooz.org> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> Message-ID: <444FC2FB.909@egenix.com> Guido van Rossum wrote: > On 4/26/06, Barry Warsaw <barry at python.org> wrote: >> On Wed, 2006-04-26 at 10:16 -0700, Guido van Rossum wrote: >> >>> So I have a very simple proposal: keep the __init__.py requirement for >>> top-level pacakages, but drop it for subpackages. This should be a >>> small change. I'm hesitant to propose *anything* new for Python 2.5, >>> so I'm proposing it for 2.6; if Neal and Anthony think this would be >>> okay to add to 2.5, they can do so. I'm not really sure what this would buy us. Newbies would still forget the __init__.py in top-level packages (not all newbies work for Google). Oldies would have trouble recognizing a directory as being a Python package, rather than just a collection of modules - you wouldn't go hunting up the path to find the top-level __init__.py file which identifies the directory as being a sub-package of some top-level package (not all oldies work for Google either where you only have a single top-level package). -1. It doesn't appear to make things easier and breaks symmetry. >> But if it's there, then nothing changes, right? IOW, if we want to >> expose names in the subpackage's namespace, we can still do it in the >> subpackage's __init__.py. It's just that otherwise empty subpackage >> __init__.py files won't be required. > > Correct. This is an important clarification. > >> What would the following print? >> >> import toplevel.sub1.sub2 >> print toplevel.sub1.sub2.__file__ >> >> If it's "<path>/sub1/sub2" then that kind of breaks a common idiom of >> using os.path.dirname() on a module's __file__ to find co-located >> resources. Or at least, you have to know whether to dirname its >> __file__ or not (which might not be too bad, since you'd probably know >> how that package dir is organized anyway). > > Oh, cool gray area. I propose that if there's no __init__.py it prints > '<path>/sub1/sun2/' i.e. with a trailing slash; that causes dirname to > just strip the '/'. (It would be a backslash on Windows of course). > >> I dunno. Occasionally it trips me up too, but it's such an obvious and >> easy fix that it's never bothered me enough to care. > > But you're not a newbie. for a newbie who's just learned about > packages, is familiar with Java, and missed one line in the docs, it's > an easy mistake to make and a tough one to debug. Why not make the ImportError's text a little smarter instead, e.g. 
let the import mechanism check for this particular gotcha ? This would solve the newbie problem without any changes to the import scheme. >> I can't think of >> an example, but I suppose it's still possible that lifting this >> requirement could cause some in-package located data directories to be >> mistakenly importable. I'd be somewhat more worried about frameworks >> that dynamically import things having to be more cautious about >> cleansing their __import__() arguments now. > > But (assuming 2.6 and absolute import) what would be the danger of > importing such a package? Presumably it contains no *.py or *.so files > so there's no code there; but even so you'd have to go out of your way > to import it (since if the directory exists it can't also be a > subpackage or submodule name that's in actual use). Wasn't there agreement on postponing the absolute imports to Py3k due to the common use-case of turning e.g. 3rd party top-level packages into sub-packages of an application ? Without absolute imports your proposed scheme is going to break, since can easily mask top-level packages or modules. >> I'd be -1 but the remote possibility of you being burned at the stake by >> your fellow Googlers makes me -0 :). > > I'm not sure I understand what your worry is. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 26 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From nd at perlig.de Wed Apr 26 21:05:35 2006 From: nd at perlig.de (=?iso-8859-1?q?Andr=E9_Malo?=) Date: Wed, 26 Apr 2006 21:05:35 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261136udf80590we7fd0cb23bb7437c@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <200604262004.34103@news.perlig.de> <ca471dc20604261136udf80590we7fd0cb23bb7437c@mail.gmail.com> Message-ID: <200604262105.35921@news.perlig.de> * Guido van Rossum wrote: [me] > > Actually I have no problems with the change from inside python, but > > from the POV of tools, which walk through the directories, > > collecting/separating python packages and/or supplemental data > > directories. It's an explicit vs. implicit issue, where implicit would > > mean "kind of heuristics" from now on. IMHO it's going to break > > existing stuff [1] and should at least not be done in such a rush. > > > > nd > > > > [1] Well, it does break some of mine ;-) [Guido] > Can you elaborate? You could always keep the __init__.py files, you > know... Okay, here's an example. It's about a non-existant __init__.py, though ;-) I have a test system which collects the test suites from one or more packages automatically by walking through the tree. Now there are subdirectories which explicitly are not packages (no __init__.py), but do contain some python files (helper scripts, spawned for particular tests). The test collector doesn't consider now these subdirectories at all, but in future it would need to (because it should search like python itself). Another point is that one can even hide supplementary packages within such a subdirectory. 
It's only visible to scripts inside the dir (I admit, that the latter is not a real usecase, just a thought that came up while writing this up). Anyway: sure, one could tweak the naming - just not for existing a.k.a. already released stuff. It's not very nice to force that, too ;-) nd -- If God intended people to be naked, they would be born that way. -- Oscar Wilde From mal at egenix.com Wed Apr 26 21:10:10 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 26 Apr 2006 21:10:10 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261150p4a331bday8ee72ca7cde801c7@mail.gmail.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <ca471dc20604261150p4a331bday8ee72ca7cde801c7@mail.gmail.com> Message-ID: <444FC592.6030806@egenix.com> Guido van Rossum wrote: > On 4/26/06, Phillip J. Eby <pje at telecommunity.com> wrote: >> At 10:16 AM 4/26/2006 -0700, Guido van Rossum wrote: >>> So I have a very simple proposal: keep the __init__.py requirement for >>> top-level pacakages, but drop it for subpackages. >> Note that many tools exist which have grown to rely on the presence of >> __init__ modules. Also, although your proposal would allow imports to work >> reasonably well, tools that are actively looking for packages would need to >> have some way to distinguish package directories from others. >> >> My counter-proposal: to be considered a package, a directory must contain >> at least one module (which of course can be __init__). This allows the "is >> it a package?" question to be answered with only one directory read, as is >> the case now. Think of it also as a nudge in favor of "flat is better than >> nested". > > I'm not sure what you mean by "one directory read". You'd have to list > the entire directory, which may require reading more than one block if > the directory is large. Sounds like a lot of filesystem activity to me :-) Currently, __init__.py is used as landmark by the import code to easily determine whether a directory is a package using two simple I/O operations (fstat on __init__.py, __init__.py[co]). -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 26 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From fdrake at acm.org Wed Apr 26 21:26:05 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 26 Apr 2006 15:26:05 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <200604262105.35921@news.perlig.de> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <ca471dc20604261136udf80590we7fd0cb23bb7437c@mail.gmail.com> <200604262105.35921@news.perlig.de> Message-ID: <200604261526.05671.fdrake@acm.org> On Wednesday 26 April 2006 15:05, Andr? Malo wrote: > Another point is that one can even hide supplementary packages within such > a subdirectory. It's only visible to scripts inside the dir (I admit, that > the latter is not a real usecase, just a thought that came up while > writing this up). I have tests that do this. This is a very real use case. -Fred -- Fred L. Drake, Jr. 
<fdrake at acm.org> From jcarlson at uci.edu Wed Apr 26 21:29:31 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 26 Apr 2006 12:29:31 -0700 Subject: [Python-Dev] suggestion: except in list comprehension In-Reply-To: <1d85506f0604261149q79dceeb1q6548e74556e6ccf0@mail.gmail.com> References: <20060426103804.66F1.JCARLSON@uci.edu> <1d85506f0604261149q79dceeb1q6548e74556e6ccf0@mail.gmail.com> Message-ID: <20060426120631.6700.JCARLSON@uci.edu> "tomer filiba" <tomerfiliba at gmail.com> wrote: > first, i posted it two days ago, so it's funny it got posted only now... > the moderators are sleeping on the job :) I don't believe python-dev has moderators...or at least my posts have never been delayed like that. > > Note that of the continue cases you offer, all of them are merely simple > > if condition > > yes, i said that explicitly, did you *read* my mail? After reading your syntax I read over the samples you provided. Generally speaking, I get far too much email to read the ful content of every "I think Python should have feature/syntax/whatever X", especially when those posts begin with "my friend had this idea". Let your friend post if if (s)he thinks it important, or at least post the idea on python-list first. Python-dev shouldn't be the starting point of these syntax discussions, it should be near the ending point. > but i also said it's not always possible. you *can't* always tell prior to > doing something. Of course you can't. But you know what? In all the times I've used list comprehensions over the last few years, I've never wanted, nor needed, the functionality you describe. Would it be convenient? Sure. But there are many other additions that I would find more convenient, and would likely find more use, which have been rejected. > anyway, i guess my friend may have better show-cases, as he's the one who > found the need for it. i just thought i should bring this up here. if you > think better show cases would convince you, i can ask him. I think that the syntax is unnecessary, and no amount of use-cases that your friend can supply will likely convince me otherwise (unless your friend happens to be a core Python developer - in which case he would have posted him/herself). > > If you couldn't guess; -1, you can get equivalent behavior without > > complicating the generator expression/list comprension syntax. > > so how come list comprehensions aren't just a "complication to the syntax"? > you can always do them the old way, > > x = [] > for ....: > x.append(...) List comprehensions were designed with this one particular simple use-case in mind. Look through the standard library at the number of times that simple for loops with appends are used. Go ahead. Take your time. Now go ahead and count the cases where the only difference is a try/except clause. Compare that count. Note how the simple case is likely close to 10 times as common as the try/except case? THAT's why the simple version was included, and that's why I don't believe that except is necessary in list comprehensions. > don't worry, i'm not going to argue it past this. Good, I think it's a nonstarter. 
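For completeness: the exception-skipping cases can also be spelled today with a tiny helper generator instead of new syntax (a minimal sketch, nothing more):

def skip_errors(func, iterable, exceptions):
    # yield func(item) for each item, silently dropping the items
    # whose call raises one of the given exception types
    for item in iterable:
        try:
            yield func(item)
        except exceptions:
            continue

# reusing the names from the earlier examples:
#   [1.0 / x for x in y except ZeroDivisionError: continue]
#       -> list(skip_errors(lambda x: 1.0 / x, y, ZeroDivisionError))
#   [open(fn) for fn in filelist except IOError: continue]
#       -> list(skip_errors(open, filelist, IOError))

Same behavior, no change to the list comprehension grammar.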
- Josiah From abo at minkirri.apana.org.au Wed Apr 26 21:41:13 2006 From: abo at minkirri.apana.org.au (Donovan Baarda) Date: Wed, 26 Apr 2006 20:41:13 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <1146073820.10733.70.camel@resist.wooz.org> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> Message-ID: <444FCCD9.8050006@minkirri.apana.org.au> Guido van Rossum wrote: > On 4/26/06, Barry Warsaw <barry at python.org> wrote: > >>On Wed, 2006-04-26 at 10:16 -0700, Guido van Rossum wrote: >> >> >>>So I have a very simple proposal: keep the __init__.py requirement for >>>top-level pacakages, but drop it for subpackages. This should be a >>>small change. I'm hesitant to propose *anything* new for Python 2.5, >>>so I'm proposing it for 2.6; if Neal and Anthony think this would be >>>okay to add to 2.5, they can do so. [...] >>>>I'd be -1 but the remote possibility of you being burned at the stake by >>your fellow Googlers makes me -0 :). > > > I'm not sure I understand what your worry is. I happen to be a Googler too, but I was a Pythonista first... I'm -1 for minor mainly subjective reasons; 1) explicit is better than implicit. I prefer to be explicit about what is and isn't a module. I have plenty of "doc" and "test" and other directories inside python module source tree's that I don't want to be python modules. 2) It feels more consistant to always require it. /foo/ is a python package because it contains an __init__.py... so package /foo/bar/ should have one one too. 3) It changes things for what feels like very little gain. I've never had problems with it, and don't find the import exception hard to diagnose. Note that I think the vast majority of "newbie missing __init__.py" problems within google occur because people are missing __init__.py at the root of package import tree. This change would not not solve that problem. It wouldn't surprise me if this change would introduce a slew of newbies complaining that "I have /foo on my PYTHONPATH, why can't I import foo/bar/" because they're forgotten the (now) rarely required __init__.py -- Donovan Baarda From pje at telecommunity.com Wed Apr 26 21:50:15 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 15:50:15 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261150p4a331bday8ee72ca7cde801c7@mail.gmail.co m> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060426152621.01e56528@mail.telecommunity.com> At 11:50 AM 4/26/2006 -0700, Guido van Rossum wrote: >I'm not sure what you mean by "one directory read". You'd have to list >the entire directory, which may require reading more than one block if >the directory is large. You have to do this to find an __init__.py too, don't you? Technically, there's going to be a search for a .pyc or .pyo first, anyway. I'm just saying you can stop as soon as you hit an extension that's in imp.get_suffixes(). > > Are you sure you wouldn't rather just write a GoogleImporter class to fix > > this problem? > >No, because that would require more setup code with a requirement to >properly enable it, etc., etc., more failure modes, etc., etc. I don't understand. I thought you said you had only *one* top-level package. 
Fix *that* package, by putting the code in its __init__.py. Job done. > > Append it to sys.path_hooks, clear sys.path_importer_cache, > > and you're all set. For that matter, if you have only one top-level > > package, put the class and the installation code in that top-level > > __init__, and you're set to go. > >I wish it were that easy. Well, if there's more than one top-level package, put the code in a module called google_imports and "import google_import" in each top-level package's __init__.py. I'm not sure I understand why a solution that works with released versions of Python that allows you to do exactly what you want, is inferior to a hypothetical solution for an unreleased version of Python that forces everybody else to update their tools. Unless of course the problem you're trying to solve is a political one rather than a technical one, that is. Or perhaps it wasn't clear from my explanation that my proposal will work the way you need it to, or I misunderstand what you're trying to do. Anyway, I'm not opposed to the idea of supporting this in future Pythons, but I definitely think it falls under the "but sometimes never is better than RIGHT now" rule where 2.5 is concerned. :) In particular, I'm worried that you're shrugging off the extent of the collateral damage here, and I'd be happiest if we waited until 3.0 before changing this particular rule -- and if we changed it in favor of namespace packages, which will more closely match naive user expectations. However, the "fix the tools" argument is weak, IMO. Zipfile imports have been a fairly half-assed feature for 2.3 and 2.4 because nobody took the time to make the *rest* of the stdlib work with zip imports. It's not a good idea to change machinery like this without considering at least what's going to have to be fixed in the stdlib. At a minimum, pydoc and distutils have embedded assumptions regarding __init__ modules, and I wouldn't be surprised if ihooks, imputil, and others do as well. If we can't keep the stdlib up to date with changes in the language, how can we expect anybody else to keep their code up to date? Finally, as others have pointed out, requiring __init__ at the top level just means that this isn't going to help anybody but Google. ISTM that in most cases outside Google, Python newbies are more likely to be creating top-level packages *first*, so the implicit __init__ doesn't help them. So, to summarize: 1. It only really helps Google 2. It inconveniences others who have to update their tools in order to support people who end up using it (even if by accident) 3. It's not a small change, unless you leave the rest of the stdlib unreviewed for impacts 4. It could be fixed at Google by adding a very small amount of code to the top of your __init__.py files (although apparently this is prevented for mysterious reasons that can't be shared) What's not to like? ;) From martin at v.loewis.de Wed Apr 26 21:56:07 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 26 Apr 2006 21:56:07 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> Message-ID: <444FD057.50306@v.loewis.de> Phillip J. Eby wrote: > My counter-proposal: to be considered a package, a directory must contain > at least one module (which of course can be __init__). This allows the "is > it a package?" 
question to be answered with only one directory read, as is > the case now. Think of it also as a nudge in favor of "flat is better than > nested". I assume you want import x.y to fail if y is an empty directory (or non-empty, but without .py files). I don't see a value in implementing such a restriction. If there are no .py files in a tree, then there would be no point in importing it, so applications will typically not import an empty directory. Implementing an expensive test that will never give a positive result and causes no problems if skipped should be skipped. I can't see the problem this would cause to tools: they should assume any subdirectory can be a package, with all consequences this causes. If the consequences are undesirable, users should just stop putting non-package subdirectories into a package if they want to use the tool. However, I doubt there are undesirable consequences (although consequences might be surprising at first). Regards, Martin From aahz at pythoncraft.com Wed Apr 26 22:04:07 2006 From: aahz at pythoncraft.com (Aahz) Date: Wed, 26 Apr 2006 13:04:07 -0700 Subject: [Python-Dev] suggestion: except in list comprehension In-Reply-To: <20060426120631.6700.JCARLSON@uci.edu> References: <20060426103804.66F1.JCARLSON@uci.edu> <1d85506f0604261149q79dceeb1q6548e74556e6ccf0@mail.gmail.com> <20060426120631.6700.JCARLSON@uci.edu> Message-ID: <20060426200407.GA21885@panix.com> On Wed, Apr 26, 2006, Josiah Carlson wrote: > "tomer filiba" <tomerfiliba at gmail.com> wrote: >> >> first, i posted it two days ago, so it's funny it got posted only now... >> the moderators are sleeping on the job :) > > I don't believe python-dev has moderators...or at least my posts have > never been delayed like that. I'm not active, but I've seen Tomer's posts getting delayed due to non-subscriber moderation. Tomer, you should make sure that all addresses you use to post are subscribed; if you have multiple addresses, you can set all but one to "no-mail". -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From tds333+pydev at gmail.com Wed Apr 26 21:57:45 2006 From: tds333+pydev at gmail.com (Wolfgang Langner) Date: Wed, 26 Apr 2006 21:57:45 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <4c45c1530604261257m7918f14ey4367b207ad692e53@mail.gmail.com> On 4/26/06, Guido van Rossum <guido at python.org> wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. -1 from me. I had never a problem with __init__.py to mark a package or subpackage. Better add __namespace__.py to state a package dir as namespace package. And support multiple occurrences of "on_python_path/namespace_name/package". -- bye by Wolfgang From pje at telecommunity.com Wed Apr 26 22:11:53 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Wed, 26 Apr 2006 16:11:53 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <444FD057.50306@v.loewis.de> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060426161005.042e6f30@mail.telecommunity.com> At 09:56 PM 4/26/2006 +0200, Martin v. L?wis wrote: >Phillip J. Eby wrote: > > My counter-proposal: to be considered a package, a directory must contain > > at least one module (which of course can be __init__). This allows the > "is > > it a package?" question to be answered with only one directory read, as is > > the case now. Think of it also as a nudge in favor of "flat is better > than > > nested". > >I assume you want > >import x.y > >to fail if y is an empty directory (or non-empty, but without .py >files). I don't see a value in implementing such a restriction. No, I'm saying that tools which are looking for packages and asking, "Is this directory a package?" should decide "no" in the case where it contains no modules. From martin at v.loewis.de Wed Apr 26 22:17:58 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 26 Apr 2006 22:17:58 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426152621.01e56528@mail.telecommunity.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426152621.01e56528@mail.telecommunity.com> Message-ID: <444FD576.7060705@v.loewis.de> Phillip J. Eby wrote: > At 11:50 AM 4/26/2006 -0700, Guido van Rossum wrote: >> I'm not sure what you mean by "one directory read". You'd have to list >> the entire directory, which may require reading more than one block if >> the directory is large. > > You have to do this to find an __init__.py too, don't you? Technically, > there's going to be a search for a .pyc or .pyo first, anyway. No. Python does stat(2) and open(2) to determine whether a file is present in a directory. Whether or not that causes a full directory scan depends on the operating system. On most operating systems, it is *not* a full directory scan: - on network file systems, the directory is read only on the server; a full directory read would also cause a network transmission of the entire directory contents - on an advanced filesystem (such as NTFS), a lookup operation is a search in a balanced tree, rather than a linear search, bypassing many directory blocks for a large directory - on an advanced operating system (such as Linux), a repeated directory lookup for the file will not go to the file system each time, but cache the result of the lookup in an efficient memory structure. In all cases, the directory contents (whether read from disk into memory or not) does not have to be copied into python's address space for stat(2), but does for readdir(3). Regards, Martin From martin at v.loewis.de Wed Apr 26 22:27:19 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 26 Apr 2006 22:27:19 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426161005.042e6f30@mail.telecommunity.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426161005.042e6f30@mail.telecommunity.com> Message-ID: <444FD7A7.7010606@v.loewis.de> Phillip J. 
Eby wrote: >> I assume you want >> >> import x.y >> >> to fail if y is an empty directory (or non-empty, but without .py >> files). I don't see a value in implementing such a restriction. > > No, I'm saying that tools which are looking for packages and asking, "Is > this directory a package?" should decide "no" in the case where it > contains no modules. Ah. Tools are of course free to do that. It would slightly deviate from Python's implementation of import, but the difference wouldn't matter for all practical purposes. So from a language lawyers' point of view, I would specify: "A sub-package is a sub-directory of a package that contains at least one module file. Python implementations MAY accept sub-directories as sub-packages even if they contain no module files as package, instead of raising ImportError on attempts to import that sub-package." (a module file, in that context, would be a file file which matches imp.get_suffixes(), in case that isn't clear) Regards, Martin From unknown_kev_cat at hotmail.com Wed Apr 26 22:33:06 2006 From: unknown_kev_cat at hotmail.com (Joe Smith) Date: Wed, 26 Apr 2006 16:33:06 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <e2ole6$dd$1@sea.gmane.org> "Guido van Rossum" <guido at python.org> wrote in message news:ca471dc20604261016g14854274i970d6f4fc72561c7 at mail.gmail.com... > (Context: There's a large crowd with pitchforks and other sharp pointy > farm implements just outside the door of my office at Google. They are > making an unbelievable racket. It appears they are Google engineers > who have been bitten by a misfeature of Python, and they won't let me > go home before I have posted this message.) > > One particular egregious problem is that *subpackage* are subject to > the same rule. It so happens that there is essentially only one > top-level package in the Google code base, and it already has an > __init__.py file. But developers create new subpackages at a > frightening rate, and forgetting to do "touch __init__.py" has caused > many hours of lost work, not to mention injuries due to heads banging > against walls. > It seems to me that the right way to fix this is to simply make a small change to the error message. On a failed import, have the code check if there is a directory that would have been the requested package if it had contained an __init__ module. If there is then append a message like "You might be missing an __init__.py file". It might also be good to check that the directory actually contained python modules. From guido at python.org Wed Apr 26 22:49:10 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 13:49:10 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <e2ole6$dd$1@sea.gmane.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> Message-ID: <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> OK, forget it. I'll face the pitchforks. I'm disappointed though -- it sounds like we can never change anything about Python any more because it will upset the oldtimers. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Wed Apr 26 22:52:33 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Wed, 26 Apr 2006 16:52:33 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <e2ole6$dd$1@sea.gmane.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <5.1.1.6.0.20060426164925.01e33228@mail.telecommunity.com> At 04:33 PM 4/26/2006 -0400, Joe Smith wrote: >It seems to me that the right way to fix this is to simply make a small >change to the error message. >On a failed import, have the code check if there is a directory that would >have been the requested package if >it had contained an __init__ module. If there is then append a message like >"You might be missing an __init__.py file". > >It might also be good to check that the directory actually contained python >modules. This is a great idea, but might be hard to implement in practice with the current C implementation of import, at least for the general case. But if we're talking about subpackages only, the common case is a one-element __path__, and for that case there might be something we could do. (The number of path items is relevant because the existence of a correctly-named but init-less directory should not stop later path items from being searched, so the actual "error" occurs far from the point where the empty directory would've been detected.) From tcj25 at cam.ac.uk Wed Apr 26 22:28:56 2006 From: tcj25 at cam.ac.uk (Terry Jones) Date: Wed, 26 Apr 2006 22:28:56 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: Your message at 16:11:53 on Wednesday, 26 April 2006 References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426161005.042e6f30@mail.telecommunity.com> Message-ID: <17487.55304.637819.333310@terry.jones.tc> It might be helpful to consider how people would tackle Guido's problem by pretending that a regular Joe (i.e., someone who couldn't consider changing the semantics of Python itself) had asked this question. I would suggest adding a hook to their version control system to automatically create (and preferably also check out) an __init__.py file whenever a new (source code) directory was placed under version control (supposing you can distinguish source code directories from the check in dirname). From one point of view this is a file system issue, so a file system solution might be a good way to solve it. This approach would also allow you to add this behavior/support on a per-user basis. Terry From pje at telecommunity.com Wed Apr 26 23:05:06 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 17:05:06 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com > References: <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> Message-ID: <5.1.1.6.0.20060426170010.04432110@mail.telecommunity.com> At 01:49 PM 4/26/2006 -0700, Guido van Rossum wrote: >OK, forget it. I'll face the pitchforks. > >I'm disappointed though -- it sounds like we can never change anything >about Python any more because it will upset the oldtimers. I know exactly how you feel. :) But there's always Python 3.0, and if we're refactoring the import machinery there, we can do this the right way, not just the "right now" way. ;) IMO, if Py3K does this, it can and should be inclusive of top-level packages and assemble __path__ using all the sys.path entries. If we're going to break it, let's break it all the way. 
:) I'm still really curious why the importer solution (especially if tucked away in a Google-defined sitecustomize) won't work, though. From guido at python.org Wed Apr 26 23:07:01 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 14:07:01 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <17487.55304.637819.333310@terry.jones.tc> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426161005.042e6f30@mail.telecommunity.com> <17487.55304.637819.333310@terry.jones.tc> Message-ID: <ca471dc20604261407u6ecf02a5m10b61c7e03b22df8@mail.gmail.com> On 4/26/06, Terry Jones <tcj25 at cam.ac.uk> wrote: > I would suggest adding a hook to their version control system to > automatically create (and preferably also check out) an __init__.py file > whenever a new (source code) directory was placed under version control > (supposing you can distinguish source code directories from the check in > dirname). This wouldn't work of course -- the newbie would try to test it before checking it in, so the hook would not be run. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From dh at triple-media.com Wed Apr 26 23:08:20 2006 From: dh at triple-media.com (Dennis Heuer) Date: Wed, 26 Apr 2006 23:08:20 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> Message-ID: <20060426230820.028976ea.dh@triple-media.com> OK, let's get back to the bitarray type. To first be clear about the *type* of bitarray: The type I talk about is really a *bit*-array and not a *bytewise*-array, as most developers seem to think of it. The array doesn't provide the boolean manipulation of single bytes in the array, it provides the manipulation of single bits or slices of bits in a bit sequence of unlimited length. Think of it like a data sequence for a Turing machine (just bits). The reason why I'd like to use the long type as the base of my bitarray type is that the long type is already implemented as an array and working efficiently with large amounts of bytes. Also, it already supports a lot of necessary (including boolean) operators. The slicing/manipulation of ranges can be implemented in the bitarray class. That works similarly to the example below:

class bitarray(long):
    ...
    def __setitem__(self, key, value):
        # in this example only decimal values are supported
        if type(value) not in (int, long): raise ValueError
        # slices are supported
        if type(key) not in (int, long, slice): raise KeyError
        # checking if the target is a single bit
        if type(key) in (int, long) and value not in (0, 1): raise ValueError
        # checking if a range of bits is targeted and if the value fits
        if type(key) == slice:
            # let's assume that the slice object
            # provides only positive values and
            # that step is set to default (1)
            if value < 0 or value > (key.stop - key.start): raise ValueError
            key = key.start  # only the offset is needed from now on
        # pushing up value to the offset
        # and adding it to the bit sequence
        long.__add__(2**key * value)  # Should actually overwrite the
                                      # range but I couldn't even get
                                      # this to work. long complains
                                      # about the value being of type
                                      # int. But how can I convert it
                                      # to long if the term long is
                                      # already reserved for the
                                      # superclass? At least I was
                                      # too stupid to find it out.

This small hack shows well that the long type could be a very good base for the bitarray type when it comes to processing (what comes out). However, when it comes to retrieving (what comes in), the situation is quite different. The bitarray type is a type for fiddling around. It shall also provide configurable truthtables and carry rules. To be able to work with the bitarray type more *natively*, it shall support bitstring input like "101010". But if the bitarray type is based on the long type and one tries the following:

x = bitarray()
x += "101010"

the above bitstring is interpreted as the decimal value 101010, which is wrong. There seems to be no other way to change this than overwriting all the inherited methods to let them interpret the bitstring correctly. This is counterproductive. If there was a way to advise the interpreter to *filter* the input values before they are passed to the called attribute, one could keep the original methods of the long type and still provide bitstring input support. A method like __validate__ or similar could provide this filter. The interpreter first calls __validate__ to receive the return values and pass them on to the called method/attribute. Do you now understand the case? Dennis On Wed, 26 Apr 2006 10:47:55 -0700 "Guido van Rossum" <guido at python.org> wrote: > I doubt you'll get many answers. I have no idea what you're talking > about. How about an example or two? > > On 4/26/06, Dennis Heuer <dh at triple-media.com> wrote: > > To bring the real problem more upfront. Up to now, inheriting classes > > is all about processing (the output channel) but not about retrieving > > (the input channel). However, though a new type can advance from an > > existing type if it just needs to provide some few more methods, it can > > not advance from an existing type if it needs to support some few other > > input formats--even if they all convert to the inherited type easily. > > The input channel is completely forgotten. > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > > From guido at python.org Wed Apr 26 23:09:08 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 14:09:08 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426170010.04432110@mail.telecommunity.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <5.1.1.6.0.20060426170010.04432110@mail.telecommunity.com> Message-ID: <ca471dc20604261409q1fb84299r2b3d2be490bad601@mail.gmail.com> On 4/26/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 01:49 PM 4/26/2006 -0700, Guido van Rossum wrote: > >OK, forget it. I'll face the pitchforks. > > > >I'm disappointed though -- it sounds like we can never change anything > >about Python any more because it will upset the oldtimers. > > I know exactly how you feel. :) Hardly -- you're not the BDFL. :) > But there's always Python 3.0, and if we're refactoring the import > machinery there, we can do this the right way, not just the "right now" > way. ;) IMO, if Py3K does this, it can and should be inclusive of > top-level packages and assemble __path__ using all the sys.path > entries. If we're going to break it, let's break it all the way. :) No -- I'm actually quite happy with most of the existing behavior (though not with the APIs).
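To make the "importer solution" that keeps coming up here concrete: a PEP 302 hook that accepts an __init__-less directory as a package could be sketched roughly as below. This is only an illustration -- the class name, the sys.meta_path placement and the matching rule are my assumptions, not the code Phillip or Google actually use.

    import imp, os, sys

    class InitlessPackageFinder(object):
        # PEP 302 meta-path finder: treat a directory without __init__.py
        # as a package. Sketch only; it ignores bytecode-only packages,
        # zip imports, and thread safety (state is kept on self for brevity).

        def find_module(self, fullname, path=None):
            name = fullname.rsplit('.', 1)[-1]
            for entry in (path or sys.path):
                dirpath = os.path.join(entry, name)
                if os.path.isdir(dirpath) and not os.path.isfile(
                        os.path.join(dirpath, '__init__.py')):
                    self._dirpath = dirpath
                    return self
            return None

        def load_module(self, fullname):
            if fullname in sys.modules:
                return sys.modules[fullname]
            mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
            mod.__file__ = self._dirpath      # no real source file; point at the dir
            mod.__path__ = [self._dirpath]    # so submodule imports still work
            mod.__loader__ = self
            return mod

    sys.meta_path.append(InitlessPackageFinder())

Because sys.meta_path is consulted before the normal machinery, a stray data directory whose name matches a real module (the "string" directory mentioned elsewhere in the thread) would shadow it -- which is exactly the collateral damage being argued about.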
> I'm still really curious why the importer solution (especially if tucked > away in a Google-defined sitecustomize) won't work, though. Well, we have a sitecustomize, and it's been decided that it's such a pain to get it to do the right thing that we're trying to get rid of it. So proposing to add more magic there would not work. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From foom at fuhm.net Wed Apr 26 23:12:01 2006 From: foom at fuhm.net (James Y Knight) Date: Wed, 26 Apr 2006 17:12:01 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> Message-ID: <408CF647-0A01-4CB3-8B65-9A0C6843B895@fuhm.net> On Apr 26, 2006, at 4:49 PM, Guido van Rossum wrote: > OK, forget it. I'll face the pitchforks. > > I'm disappointed though -- it sounds like we can never change anything > about Python any more because it will upset the oldtimers. > No, you can not make a change which has a tiny (and arguably negative) advantage but large compatibility risks. Like it or not, Python is a stable, widely deployed program. Making almost arbitrary sideways changes is not something you can do to such a system without a lot of pushback. Breaking things requires (and *should* require) well thought out reasoning as to why the new way is actually better enough to justify the breakage. James From tjreedy at udel.edu Wed Apr 26 23:23:12 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 26 Apr 2006 17:23:12 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <5.1.1.6.0.20060426164925.01e33228@mail.telecommunity.com> Message-ID: <e2ooc0$bqk$1@sea.gmane.org> "Phillip J. Eby" <pje at telecommunity.com> wrote in message news:5.1.1.6.0.20060426164925.01e33228 at mail.telecommunity.com... >>It might also be good to check that the directory actually contained >>python >>modules. > > This is a great idea, but might be hard to implement in practice with the > current C implementation of import, at least for the general case. Perhaps checking and autogeneration of __init__.py should better be part of a Python-aware editor/IDE. A File menu could have a New Package entry to create a directory + empty __init__.py. Terry Jan Reedy From guido at python.org Wed Apr 26 23:28:51 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 14:28:51 -0700 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060426230820.028976ea.dh@triple-media.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> <20060426230820.028976ea.dh@triple-media.com> Message-ID: <ca471dc20604261428n5f47379dx29efa207f1cfba00@mail.gmail.com> I still don't understand what you are asking for. However there's one problem with your code: On 4/26/06, Dennis Heuer <dh at triple-media.com> wrote: > > class bitarray(long): > ... > def __setitem__(self, key, value): > [...] > long.__add__(2**key * value) # Should actually overwrite the [...] What on earth are you trying to do here? 
There are at least two bugs in this line: (a) long.__add__ is an unbound method so requires two arguments; (b) __setitem__ suggests that you're creating a mutable object; but long is immutable and even if your subclass isn't immutable, it still can't modify the immutable underlying long. You probably want to write a class that *has* a long (I agree that it is a good type to implement a bit array) instead of one that *is* a long. Subclassing is overrated! I don't understand at all what you're saying about decimal. You should take this to comp.lang.python; python-dev is about developing Python, not about programming questions. Even if you believe that your problem can only be solved by changes to Python, I gurantee you that it's much better to discuss this on c.l.py; most likely there's a solution to your problem (once you understand it) that does not involve changing the language or its implementation. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Thu Apr 27 00:05:58 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 26 Apr 2006 18:05:58 -0400 Subject: [Python-Dev] Google Summer of Code proposal: New class for workwith binary trees AVL and RB as with the standard dictionary. References: <20060424114024.66D0.JCARLSON@uci.edu> <444F1E36.30302@renet.ru> <20060426094148.66EE.JCARLSON@uci.edu> Message-ID: <e2oqs5$khk$1@sea.gmane.org> "Josiah Carlson" <jcarlson at uci.edu> wrote in message news:20060426094148.66EE.JCARLSON at uci.edu... > Then again, you already have your own implementation of a tree module, > and it seems as though you would like to be paid for this already-done > work. I don't know how google feels about such things, They are explicitly against that in the student FAQ. They are paying for new code writen after the project is approved and before the endpoint. > but you should > remember to include this information in your application. Existing code can legitimately be used to show feasibility of the project and competancy of the student Terry Jan Reedy From ianb at colorstudy.com Thu Apr 27 00:35:20 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Wed, 26 Apr 2006 17:35:20 -0500 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <e2ole6$dd$1@sea.gmane.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> Message-ID: <444FF5A8.6010801@colorstudy.com> Joe Smith wrote: > It seems to me that the right way to fix this is to simply make a small > change to the error message. > On a failed import, have the code check if there is a directory that would > have been the requested package if > it had contained an __init__ module. If there is then append a message like > "You might be missing an __init__.py file". +1. It's not that putting an __init__.py file in is hard, it's that people have a hard time realizing when they've forgotten to do it. 
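Returning to the bitarray exchange above: the class-that-*has*-a-long approach Guido recommends might look something like this minimal sketch (hypothetical code, ignoring the slices, truth tables and carry rules from the original mail):

    class BitArray(object):
        def __init__(self, bits="0"):
            # Accept a "101010"-style bit string and keep the value in an
            # ordinary integer attribute instead of subclassing long.
            self._bits = int(bits, 2)

        def __getitem__(self, index):
            # Bit at position `index`, counting from the least significant end.
            return (self._bits >> index) & 1

        def __setitem__(self, index, value):
            # Mutation is easy because we only rebind the internal integer;
            # a subclass of the immutable long could not change itself in place.
            if value:
                self._bits |= (1 << index)
            else:
                self._bits &= ~(1 << index)

With this, b = BitArray("101010") gives b[1] == 1, and b[0] = 1 simply updates the internal value -- no fighting with long's immutability.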
-- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From thomas at python.org Thu Apr 27 00:40:15 2006 From: thomas at python.org (Thomas Wouters) Date: Thu, 27 Apr 2006 00:40:15 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> Message-ID: <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> On 4/26/06, Guido van Rossum <guido at python.org> wrote: > > OK, forget it. I'll face the pitchforks. Maybe this'll help: http://python.org/sf/1477281 (You can call it 'oldtimer-repellant' if you want to use it to convince people there isn't any *real* backward-compatibility issue.) I'm disappointed though -- it sounds like we can never change anything > about Python any more because it will upset the oldtimers. That's a bit unfair, Guido. There are valid reasons not to change Python's behaviour in this respect, regardless of upset old-timers. Besides, you're the BDFL; if you think the old-timers are wrong, I implore you to put their worries aside (after dutiful contemplation.) I've long since decided that any change what so ever will have activist luddites opposing it. I think most of them would stop when you make a clear decision -- how much whining have you had about the if-else syntax since you made the choice? I've heard lots of people gripe about it in private (at PyCon, of course, I never see Pythonistas anywhere else :-P), but I haven't seen any python-dev rants about it. I certainly hate PEP-308's guts, but if if-else is your decision, if-else is what we'll do. And so it is, I believe, with this case. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/9e0d4591/attachment.htm From guido at python.org Thu Apr 27 00:58:41 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 15:58:41 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> Message-ID: <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> On 4/26/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/26/06, Guido van Rossum <guido at python.org> wrote: > > OK, forget it. I'll face the pitchforks. > > > Maybe this'll help: > > http://python.org/sf/1477281 > > (You can call it 'oldtimer-repellant' if you want to use it to convince > people there isn't any *real* backward-compatibility issue.) I'd worry that it'll cause complaints when the warning is incorrect and a certain directory is being skipped intentionally. E.g. the "string" directory that someone had. Getting a warning like this can be just as upsetting to newbies! > > I'm disappointed though -- it sounds like we can never change anything > > about Python any more because it will upset the oldtimers. > > That's a bit unfair, Guido. 
There are valid reasons not to change Python's > behaviour in this respect, regardless of upset old-timers. Where are the valid reasons? All I see is knee-jerk -1, -1, -1, and "this might cause tools to do the wrong thing". Not a single person attempted to see it from the newbie POV; several people explicitly rejected the newbie POV as invalid. I still don't know the name of any tool that would break due to this *and where the breakage wouldn't be easy to fix by adjusting the tool's behavior*. Yes, fixing tools is a pain. But they have to be fixed for every new Python version anyway -- new keywords, new syntax, new bytecodes, etc. > Besides, you're > the BDFL; if you think the old-timers are wrong, I implore you to put their > worries aside (after dutiful contemplation.) I can only do that so many times before I'm no longer the BDFL. It's one thing to break a tie when there is widespread disagreement amongst developers (like about the perfect decorator syntax). It's another to go against a see of -1's. > I've long since decided that > any change what so ever will have activist luddites opposing it. I think > most of them would stop when you make a clear decision -- how much whining > have you had about the if-else syntax since you made the choice? I've heard > lots of people gripe about it in private (at PyCon, of course, I never see > Pythonistas anywhere else :-P), but I haven't seen any python-dev rants > about it. I certainly hate PEP-308's guts, but if if-else is your decision, > if-else is what we'll do. And so it is, I believe, with this case. OK. Then I implore you, please check in that patch (after adding error checking for PyErr_Warn() -- and of course after a2 hs been shipped), and damn the torpedoes. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From thomas at python.org Thu Apr 27 01:10:44 2006 From: thomas at python.org (Thomas Wouters) Date: Thu, 27 Apr 2006 01:10:44 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> Message-ID: <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> On 4/27/06, Guido van Rossum <guido at python.org> wrote: > > On 4/26/06, Thomas Wouters <thomas at python.org> wrote: > > > > > > On 4/26/06, Guido van Rossum <guido at python.org> wrote: > > > OK, forget it. I'll face the pitchforks. > > > > > > Maybe this'll help: > > > > http://python.org/sf/1477281 > > > > (You can call it 'oldtimer-repellant' if you want to use it to convince > > people there isn't any *real* backward-compatibility issue.) > > I'd worry that it'll cause complaints when the warning is incorrect > and a certain directory is being skipped intentionally. E.g. the > "string" directory that someone had. Getting a warning like this can > be just as upsetting to newbies! I don't think getting a spurious warning is as upsetting as getting no warning but the damned thing just not working. At least you have something to google for. And the warning includes the original line of source that triggered it *and* the directory (or directories) it's complaining about, which is quite a lot of helpful hints. 
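For anyone who does hit the spurious case Thomas mentions, the standard warnings machinery can silence it without renaming anything -- a small sketch, assuming the category ends up being called ImportWarning as in the patch under discussion:

    import warnings
    # Ignore the new warning for directories that are intentionally not
    # packages (e.g. a data directory that happens to share a name with
    # a module being imported).
    warnings.filterwarnings('ignore', category=ImportWarning)

The same effect is available on the command line with python -W ignore::ImportWarning.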
The clashes with directories that aren't intended to be packages *and* a module of the same name is imported, yes, that's a real problem. It's not any worse than if we change package imports the way you originally proposed, though, and I think the actual number of spurious errors is very small (which self-respecting module still does 'import string', eh? :-) I don't think the fix for such a warning is going to be non-trivial. Where are the valid reasons? Of course, I only consider *my* reasons to be valid, and mine weren't knee-jerk or tool-related. I don't think Python should be going "Oh, what you wanted wasn't possible, but I think I know what you wanted, let me do it for you", first of all because it's not very Pythonic, and second of all because it doesn't lower the learning curve, it just delays some upward motion a little (meaning the curve may become steeper, later.) A clear warning, on the other hand, can be a helpful nudge towards the 'a-HA' moment. > Then I implore you, please check in that patch (after adding error > checking for PyErr_Warn() -- and of course after a2 hs been shipped), > and damn the torpedoes. Alrighty then. The list has about 12 hours to convince me (and you) that it's a bad idea to generate that warning. I'll be asleep by the time the trunk un-freezes, and I have a string of early meetings tomorrow. I'll get to it somewhere in the afternoon :) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/dbfc16b1/attachment.html From tdelaney at avaya.com Thu Apr 27 01:12:02 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Thu, 27 Apr 2006 09:12:02 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages Message-ID: <2773CAC687FD5F4689F526998C7E4E5FF1E68D@au3010avexu1.global.avaya.com> Guido van Rossum wrote: >> http://python.org/sf/1477281 >> >> (You can call it 'oldtimer-repellant' if you want to use it to >> convince people there isn't any *real* backward-compatibility issue.) > > I'd worry that it'll cause complaints when the warning is incorrect > and a certain directory is being skipped intentionally. E.g. the > "string" directory that someone had. Getting a warning like this can > be just as upsetting to newbies! I really think it would be more useful having an ImportError containing a suggestion than having a warning. Anyone who knows it's bogus can just ignore it. I'd probably make it that all directories that could have been imports get listed. FWIW I was definitely a kneejerk -1. After reading all the messages in this thread, I think I'm now a non-kneejerk -1. It seems like gratuitous change introducing inconsistency for minimal benefit - esp. if there is a notification that a directory *could* have been a package on import failure. I think it's a useful feature of Python that it's simple to distinguish a package from a non-package. 
Tim Delaney From guido at python.org Thu Apr 27 01:24:19 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 16:24:19 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> Message-ID: <ca471dc20604261624o6317b183v71c0e9ef196a0a01@mail.gmail.com> On 4/26/06, Thomas Wouters <thomas at python.org> wrote: > Of course, I only consider *my* reasons to be valid, and mine weren't > knee-jerk or tool-related. I don't think Python should be going "Oh, what > you wanted wasn't possible, but I think I know what you wanted, let me do it > for you", first of all because it's not very Pythonic, and second of all > because it doesn't lower the learning curve, it just delays some upward > motion a little (meaning the curve may become steeper, later.) A clear > warning, on the other hand, can be a helpful nudge towards the 'a-HA' > moment. That still sounds like old-timer reasoning. Long ago we were very close to defining a package as "a directory" -- with none of this "must contain __init__.py or another *.py file" nonsense. IIRC the decision to make __init__.py mandatory faced opposition too, since people were already doing packages with just directories (which is quite clean and elegant, and that's also how it was in Java), but I added it after seeing a few newbies tear out their hair. I believe that if at that time __init__.py had remained optional, and today I had proposed to require it, the change would have been derided as unpythonic as well. There's nothing particularly unpythonic about optional behavior; e.g. classes may or may not provide an __init__ method. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Thu Apr 27 01:32:20 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 16:32:20 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5FF1E68D@au3010avexu1.global.avaya.com> References: <2773CAC687FD5F4689F526998C7E4E5FF1E68D@au3010avexu1.global.avaya.com> Message-ID: <ca471dc20604261632u7a7c5836ob6aabb3822bfe13e@mail.gmail.com> On 4/26/06, Delaney, Timothy (Tim) <tdelaney at avaya.com> wrote: > I really think it would be more useful having an ImportError containing > a suggestion than having a warning. Anyone who knows it's bogus can just > ignore it. That's effectively what Thomas's patch does though -- if at the end the path the package isn't found, you'll still get an ImportError; but it will be preceded by ImportWarnings showing you the non-package directories found on the way. The difference is that if you find a valid module package later on the path, you'll still get warnings. But occasionally those will be useful too -- when you are trying to create a package that happens to have the same name as a standard library package or module, it's a lot easier to debug the missing __init__.py if you get a warning about it instead of having to figure out that the foo you imported is not the foo you intended. The symptom is usually a mysterious AttributeError that takes a bit of determination to debug. 
Of course print foo.__file__ usually sets you right, but that's not the first thing a newbie would try -- they aren't quite sure about all the rules of import, so they are likely to first try to fiddle with $PYTHONPATH or sys.path, then put print statements in their package, etc. > I'd probably make it that all directories that could have been imports > get listed. Thomas' patch does this automatically -- you get a warning for each directory that is weighed and found too light. > FWIW I was definitely a kneejerk -1. After reading all the messages in > this thread, I think I'm now a non-kneejerk -1. It seems like gratuitous > change introducing inconsistency for minimal benefit - esp. if there is > a notification that a directory *could* have been a package on import > failure. I think it's a useful feature of Python that it's simple to > distinguish a package from a non-package. The change is only gratuitous if you disagree with it. The inconsistency is real but we all know the line about a foolish consistency. The benefit is minimal unless you've wasted an hour debugging this situation. Is it also useful to distinguish a subpackage from a non-subpackage? Anyway, the warning is more compatible and just as helpful so we'll go with that. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tdelaney at avaya.com Thu Apr 27 01:48:55 2006 From: tdelaney at avaya.com (Delaney, Timothy (Tim)) Date: Thu, 27 Apr 2006 09:48:55 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages Message-ID: <2773CAC687FD5F4689F526998C7E4E5F07436F@au3010avexu1.global.avaya.com> Guido van Rossum wrote: > The difference is that if you find a valid module package later on the > path, you'll still get warnings. This is the bit I don't like about it. Currently the warnings are displayed as the packages are found. I'd be quite happy with the warnings if they were suppressed until there is an actual ImportError - at which time they'll be displayed. > But occasionally those will be useful > too -- when you are trying to create a package that happens to have > the same name as a standard library package or module, it's a lot > easier to debug the missing __init__.py if you get a warning about it > instead of having to figure out that the foo you imported is not the > foo you intended. I suspect that more often any warnings when there is a successful import will be spurious. But testing in the field will reveal that. So I say alpha 3 should have Thomas' patch as is, and it can be changed afterwards if necessary. > Thomas' patch does this automatically -- you get a warning for each > directory that is weighed and found too light. Not exactly - you'll get warnings for directories earlier in sys.path than a matching package. Potential packages later in the path won't be warned about. If you're trying to resolve import problems, it's just as likely that the package you really want is later in sys.path than earlier. Obviously in the case that you get an ImportError this goes away. > Is it also useful to distinguish a subpackage from a non-subpackage? Possibly. Perhaps it would be useful to have `is_package(dirname)`, `is_rootpackage(dirname)` and `is_subpackage(dirname)` functions somewhere (pkgutils?). Then the tools objections go away, and whatever mechanism is necessary to determine this (e.g. is_rootpackage checks if the directory is importable via sys.path) can be implemented. > Anyway, the warning is more compatible and just as helpful so we'll go > with that. Fair enough. 
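A minimal version of the first helper Tim proposes, just to make the idea concrete (a hypothetical standalone function, not the pkgutil API; only the __init__.py spelling is checked):

    import os

    def is_package(dirname):
        # Current rule: a directory is a package iff it contains an __init__
        # module; a fuller version would also accept the other suffixes
        # reported by imp.get_suffixes() (.pyc/.pyo, extension modules).
        return os.path.isdir(dirname) and \
               os.path.isfile(os.path.join(dirname, '__init__.py'))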
Tim Delaney From pje at telecommunity.com Thu Apr 27 01:59:30 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 19:59:30 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com > References: <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> Message-ID: <5.1.1.6.0.20060426194813.01e276f0@mail.telecommunity.com> At 01:10 AM 4/27/2006 +0200, Thomas Wouters wrote: >On 4/27/06, Guido van Rossum <<mailto:guido at python.org>guido at python.org> >wrote: >>I'd worry that it'll cause complaints when the warning is incorrect >>and a certain directory is being skipped intentionally. E.g. the >>"string" directory that someone had. Getting a warning like this can >>be just as upsetting to newbies! > >I don't think getting a spurious warning is as upsetting as getting no >warning but the damned thing just not working. At least you have something >to google for. And the warning includes the original line of source that >triggered it *and* the directory (or directories) it's complaining about, >which is quite a lot of helpful hints. +1. If the warning is off-base, you can rename the directory or suppress the warning. As for the newbie situation, ISTM that this warning will generally come close enough in time to an environment change (new path entry, newly created conflicting directory) to be seen as informative. The only time it might be confusing is if you had just added an "import foo" after having a "foo" directory sitting around for a while. But even then, the warning is saying, "hey, it looked like you might have meant *this* foo directory... if so, you're missing an __init__". So, at that point I rename the directory... or maybe add the __init__ and break my code. So then I back it out and put up with the warning and complain to c.l.p, or maybe threaten Guido with a pitchfork if I work at Google. Or maybe just a regular-sized fork, since the warning is just annoying. :) >Alrighty then. The list has about 12 hours to convince me (and you) that >it's a bad idea to generate that warning. I'll be asleep by the time the >trunk un-freezes, and I have a string of early meetings tomorrow. I'll get >to it somewhere in the afternoon :) I like the patch in general, but may I suggest PackageWarning or maybe BrokenPackageWarning instead of ImportWarning as the class name? From guido at python.org Thu Apr 27 01:57:13 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 16:57:13 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <2773CAC687FD5F4689F526998C7E4E5F07436F@au3010avexu1.global.avaya.com> References: <2773CAC687FD5F4689F526998C7E4E5F07436F@au3010avexu1.global.avaya.com> Message-ID: <ca471dc20604261657o84b11cekdfa81f9b0e19ea1e@mail.gmail.com> On 4/26/06, Delaney, Timothy (Tim) <tdelaney at avaya.com> wrote: > Potential packages later in the path won't be > warned about. If you're trying to resolve import problems, it's just as > likely that the package you really want is later in sys.path than > earlier. 
But module hiding is a feature, and anyway, we're not going to continue to search sys.path after we've found a valid match (imagine doing this on *every* successful import!), so you're not going to get warnings about those either way. > Obviously in the case that you get an ImportError this goes > away. Right. > > Is it also useful to distinguish a subpackage from a non-subpackage? > > Possibly. Perhaps it would be useful to have `is_package(dirname)`, > `is_rootpackage(dirname)` and `is_subpackage(dirname)` functions > somewhere (pkgutils?). YAGNI. Also note that not all modules or packages are represented by pathnames -- they could live in zip files, or be accessed via whatever other magic an import handler users. (Thomas's warning won't happen in those cases BTW -- it only affects the default import handler.) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Thu Apr 27 02:05:16 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Wed, 26 Apr 2006 20:05:16 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261657o84b11cekdfa81f9b0e19ea1e@mail.gmail.com > References: <2773CAC687FD5F4689F526998C7E4E5F07436F@au3010avexu1.global.avaya.com> <2773CAC687FD5F4689F526998C7E4E5F07436F@au3010avexu1.global.avaya.com> Message-ID: <5.1.1.6.0.20060426200202.0201b470@mail.telecommunity.com> At 04:57 PM 4/26/2006 -0700, Guido van Rossum wrote: >On 4/26/06, Delaney, Timothy (Tim) <tdelaney at avaya.com> wrote: > > Possibly. Perhaps it would be useful to have `is_package(dirname)`, > > `is_rootpackage(dirname)` and `is_subpackage(dirname)` functions > > somewhere (pkgutils?). > >YAGNI. Also note that not all modules or packages are represented by >pathnames -- they could live in zip files, or be accessed via whatever >other magic an import handler users. FYI, pkgutil in 2.5 has utilities to walk a package tree, starting from sys.path or a package __path__, and it's PEP 302 compliant. pydoc now uses this in place of directory inspection, so that documenting zipped packages works correctly. These functions aren't documented yet, though, and probably won't be until next week at the earliest. From tomerfiliba at gmail.com Wed Apr 26 20:49:57 2006 From: tomerfiliba at gmail.com (tomer filiba) Date: Wed, 26 Apr 2006 20:49:57 +0200 Subject: [Python-Dev] suggestion: except in list comprehension In-Reply-To: <20060426103804.66F1.JCARLSON@uci.edu> References: <1d85506f0604241356g11ebf29emdb15c789089afb25@mail.gmail.com> <20060426103804.66F1.JCARLSON@uci.edu> Message-ID: <1d85506f0604261149q79dceeb1q6548e74556e6ccf0@mail.gmail.com> first, i posted it two days ago, so it's funny it got posted only now... the moderators are sleeping on the job :) anyway. > Note that of the continue cases you offer, all of them are merely simple > if condition yes, i said that explicitly, did you *read* my mail? but i also said it's not always possible. you *can't* always tell prior to doing something that it's bound to fail. you may have os.path.isfile, but not an arbitrary "is_this_going_to_fail(x)" and doing > [1.0 / x for x in y if z(x != 0)] is quite an awkward way to say "break"... if then_break(cond) instead of if cond then break - - - - - anyway, i guess my friend may have better show-cases, as he's the one who found the need for it. i just thought i should bring this up here. if you think better show cases would convince you, i can ask him. 
> If you couldn't guess; -1, you can get equivalent behavior without > complicating the generator expression/list comprension syntax. so how come list comprehensions aren't just a "complication to the syntax"? you can always do them the old way, x = [] for ....: x.append(...) but i since people find {shorter / more to-the-point / convenience} reason enough to have introduced the list-comprehension syntax in the first place, there's also a case for adding exception handling to it. if the above snippet deserves a unique syntax, why not extend it to cover this as well? x = [] for ....: try: x.append(...) except: ... and it's such a big syntactic change. don't worry, i'm not going to argue it past this. -tomer On 4/26/06, Josiah Carlson <jcarlson at uci.edu> wrote: > > > "tomer filiba" <tomerfiliba at gmail.com> wrote: > > "[" <expr> for <expr> in <expr> [if <cond>] [except > > <exception-class-or-tuple>: <action>] "]" > > Note that of the continue cases you offer, all of them are merely simple > if condition (though the file example could use a better test than > os.path.isfile). > > [x for x in a if x.startswith("y") except AttributeError: continue] > [x for x in a if hasattr(x, 'startswith') and x.startswith("y")] > > [1.0 / x for x in y except ZeroDivisionError: continue] > [1.0 / x for x in y if x != 0] > > [open(filename) for filename in filelist except IOError: continue] > [open(filename) for filename in filelist if os.path.isfile(filename)] > > The break case can be implemented with particular kind of instance > object, though doesn't have the short-circuiting behavior... > > class StopWhenFalse: > def __init__(self): > self.t = 1 > def __call__(self, t): > if not t: > self.t = 0 > return 0 > return self.t > > z = StopWhenFalse() > > Assuming you create a new instance z of StopWhenFalse before doing the > list comprehensions... > > [x for x in a if z(hasattr(x, 'startswith') and x.startswith("y"))] > [1.0 / x for x in y if z(x != 0)] > [open(filename) for filename in filelist if z(os.path.isfile > (filename))] > > > If you couldn't guess; -1, you can get equivalent behavior without > complicating the generator expression/list comprension syntax. > > - Josiah > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060426/21b8fdfe/attachment-0001.htm From anthony at interlink.com.au Thu Apr 27 04:18:41 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 27 Apr 2006 12:18:41 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060426152621.01e56528@mail.telecommunity.com> References: <5.1.1.6.0.20060426133510.040ecaf8@mail.telecommunity.com> <5.1.1.6.0.20060426152621.01e56528@mail.telecommunity.com> Message-ID: <200604271218.47385.anthony@interlink.com.au> On Thursday 27 April 2006 05:50, Phillip J. Eby wrote: > Anyway, I'm not opposed to the idea of supporting this in future > Pythons, but I definitely think it falls under the "but sometimes > never is better than RIGHT now" rule where 2.5 is concerned. :) I agree fully. I don't think we should try and shove this into Python 2.5 on short notice, but I could be convinced otherwise. Right now, though, I'm a strong -1 for now for this in 2.5. If it's to go forward, I think it _definitely_ needs a PEP outlining the potential breakages (and I'm not sure we're aware of them all yet). 
> In particular, I'm worried that you're shrugging off the extent of > the collateral damage here, and I'd be happiest if we waited until > 3.0 before changing this particular rule -- and if we changed it in > favor of namespace packages, which will more closely match naive > user expectations. The breakage of tools and the like is my concern, too. Python's import machinery is already a delicate mess of subtle rules. -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From anthony at interlink.com.au Thu Apr 27 04:22:53 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 27 Apr 2006 12:22:53 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> Message-ID: <200604271222.54167.anthony@interlink.com.au> On Thursday 27 April 2006 06:49, Guido van Rossum wrote: > OK, forget it. I'll face the pitchforks. > > I'm disappointed though -- it sounds like we can never change > anything about Python any more because it will upset the oldtimers. I'm not averse to changing this - just not to changing it on short notice for 2.5 and causing me pain from having to cut new releases to fix some breakage in the stdlib caused by this <wink> -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From guido at python.org Thu Apr 27 04:39:34 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 19:39:34 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <200604271222.54167.anthony@interlink.com.au> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <200604271222.54167.anthony@interlink.com.au> Message-ID: <ca471dc20604261939o76b11349gb47eb2004fe06f8d@mail.gmail.com> On 4/26/06, Anthony Baxter <anthony at interlink.com.au> wrote: > On Thursday 27 April 2006 06:49, Guido van Rossum wrote: > > OK, forget it. I'll face the pitchforks. > > > > I'm disappointed though -- it sounds like we can never change > > anything about Python any more because it will upset the oldtimers. > > I'm not averse to changing this - just not to changing it on short > notice for 2.5 and causing me pain from having to cut new releases to > fix some breakage in the stdlib caused by this <wink> I wasn't proposing it for 2.5 (for this very reason) -- AFAICT most responses were again st this for 2.6 or 2.7. Hopefully you're okay with Thomas's patch going into 2.5a3 -- it *warns* about directories without __init__.py that otherwise match the requested name. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From barry at python.org Thu Apr 27 04:42:55 2006 From: barry at python.org (Barry Warsaw) Date: Wed, 26 Apr 2006 22:42:55 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <1146073820.10733.70.camel@resist.wooz.org> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> Message-ID: <1146105776.11947.100.camel@geddy.wooz.org> Boy, threads here sure move fast when there's work to be done :). 
Although largely moot now, I'll follow up for posterity's sake. On Wed, 2006-04-26 at 10:59 -0700, Guido van Rossum wrote: > Oh, cool gray area. I propose that if there's no __init__.py it prints > '<path>/sub1/sun2/' i.e. with a trailing slash; that causes dirname to > just strip the '/'. (It would be a backslash on Windows of course). Yep, that's the right thing to do. Given all the other traffic in this thread, I don't have much more to add except: I probably don't remember things accurately, but ISTR that __init__.py was added as a way to expose names in the package's namespace first. We probably went from that to requiring __init__.py as a way to be explicit about what directories are packages and which aren't. So I suspect you're right when you say that if the rule had already been relaxed and you were now proposing to tighten the rules, we probably get just as many complaints. alternative-universe-ly y'rs, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 309 bytes Desc: This is a digitally signed message part Url : http://mail.python.org/pipermail/python-dev/attachments/20060426/c6eba8e8/attachment.pgp From fdrake at acm.org Thu Apr 27 04:59:34 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Wed, 26 Apr 2006 22:59:34 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <1146105776.11947.100.camel@geddy.wooz.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> <1146105776.11947.100.camel@geddy.wooz.org> Message-ID: <200604262259.34279.fdrake@acm.org> On Wednesday 26 April 2006 22:42, Barry Warsaw wrote: > So I suspect > you're right when you say that if the rule had already been relaxed and > you were now proposing to tighten the rules, we probably get just as > many complaints. Indeed. I think the problem many of us have with the proposal isn't the new behavior, but the change in the behavior. That's certainly it for me. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From guido at python.org Thu Apr 27 05:02:48 2006 From: guido at python.org (Guido van Rossum) Date: Wed, 26 Apr 2006 20:02:48 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <200604262259.34279.fdrake@acm.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> <1146105776.11947.100.camel@geddy.wooz.org> <200604262259.34279.fdrake@acm.org> Message-ID: <ca471dc20604262002j722ccec1sb9043e703afaa9e7@mail.gmail.com> On 4/26/06, Fred L. Drake, Jr. <fdrake at acm.org> wrote: > Indeed. I think the problem many of us have with the proposal isn't the new > behavior, but the change in the behavior. That's certainly it for me. Which is why I said earlier that I felt disappointed that we can't change anything any more. But I'm fine with the warning -- it should be enough to keep Google's newbies from wasting their time. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Thu Apr 27 05:22:31 2006 From: brett at python.org (Brett Cannon) Date: Wed, 26 Apr 2006 21:22:31 -0600 Subject: [Python-Dev] draft of externally maintained packages PEP In-Reply-To: <444F0B62.7010207@python.net> References: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> <444F0B62.7010207@python.net> Message-ID: <bbaeab100604262022q241697eemcbadab940dd1066c@mail.gmail.com> On 4/25/06, Thomas Heller <theller at python.net> wrote: > Brett Cannon wrote: > > here is the rough draft of the PEP for packages maintained externally > > from Python itself. There is some missing, though, that I would like > > help filling in. > > > > I don't know who to list as the contact person (i.e., the Python > > developer in charge of the code) for Expat, Optik or pysqlite. I know > > Greg Ward wrote Optik, but I don't know if he is still active at all > > (I saw AMK's email on Greg being busy). I also thought Gerhard > > Haering was in charge of pysqlite, but he has never responded to any > > emails about this topic. Maybe the responsibility should go to > > Anthony since I know he worked to get the package in and probably > > cares about keeping it updated? As for Expat (the parser), is that > > Fred? > > > > I also don't know what version of ctypes is in 2.5 . Thomas, can you tell me? > > > ctypes > > ------ > > - Web page > > http://starship.python.net/crew/theller/ctypes/ > > - Standard library name > > ctypes > > - Contact person > > Thomas Heller > > - Synchronisation history > > * 0.9.9.4 (2.5a1) > * 0.9.9.6 (2.5a2) I am not going to worry about alphas, so I just set it to 0.9.9.6 for now and can update once 2.5.0 is released. -Brett From brett at python.org Thu Apr 27 05:23:51 2006 From: brett at python.org (Brett Cannon) Date: Wed, 26 Apr 2006 21:23:51 -0600 Subject: [Python-Dev] draft of externally maintained packages PEP In-Reply-To: <444F2CE3.2070707@ghaering.de> References: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> <444F2CE3.2070707@ghaering.de> Message-ID: <bbaeab100604262023q6e5b0b36i73dcb17e9d966e7d@mail.gmail.com> On 4/26/06, Gerhard H?ring <gh at ghaering.de> wrote: > Brett Cannon wrote: > > here is the rough draft of the PEP for packages maintained externally > > from Python itself. There is some missing, though, that I would like > > help filling in. > > > > I don't know who to list as the contact person (i.e., the Python > > developer in charge of the code) for Expat, Optik or pysqlite. [...] > > I also thought Gerhard Haering was in charge of pysqlite, but he has > > never responded to any emails about this topic. > > Sorry for not answering any sooner. Please list me as contact person for > the SQLite module. > OK, great. > > Maybe the responsibility should go to Anthony since I know he worked > > to get the package in and probably cares about keeping it updated? > > As for Expat (the parser), is that Fred? [...] > > > > > > pysqlite > > -------- > > - Web site > > http://www.sqlite.org/ > > - Standard library name > > sqlite3 > > - Contact person > > XXX > > - Synchronisation history > > * 2.1.3 (2.5) > > You can add > > * 2.2.0 > * 2.2.2 I put in 2.2.2 for my copy since I am not going to worry about alphas. 
-Brett From aahz at pythoncraft.com Thu Apr 27 05:48:02 2006 From: aahz at pythoncraft.com (Aahz) Date: Wed, 26 Apr 2006 20:48:02 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604262002j722ccec1sb9043e703afaa9e7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <ca471dc20604261059u13cf9ff4v1760a9ab13d2cdd9@mail.gmail.com> <1146105776.11947.100.camel@geddy.wooz.org> <200604262259.34279.fdrake@acm.org> <ca471dc20604262002j722ccec1sb9043e703afaa9e7@mail.gmail.com> Message-ID: <20060427034802.GA25698@panix.com> On Wed, Apr 26, 2006, Guido van Rossum wrote: > > Which is why I said earlier that I felt disappointed that we can't > change anything any more. I've been here since Python 1.5.1. I don't understand why this issue in particular makes you feel disappointed. I also think your statement is just plain untrue. Looking at the What's New for Python 2.5, we have changes to the import machinery, the with statement, new functionality for generators, conditional expressions, ssize_t, exceptions as new-style classes, deleted modules (regex, regsub, and whrandom), and on and on and on. Some of these changes are going to cause moderate breakage for some people; they cumulatively represent a *lot* of change. Python 2.5 is gonna be a *huge* release. Quite honestly, I think you've let yourself get emotionally attached to this issue for some reason. On the other hand, I think that a lot of the blowback you got comes from you bringing up this issue right before 2.5a2. My suggestion is that you let this sit and make yourself a calendar entry to think about this again in October. If it still seems like a good idea, go ahead and bring it up. (Or just punt to 3.0.) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From brett at python.org Thu Apr 27 06:09:21 2006 From: brett at python.org (Brett Cannon) Date: Wed, 26 Apr 2006 22:09:21 -0600 Subject: [Python-Dev] need info for externally maintained modules PEP In-Reply-To: <444F2FA4.7080406@ghaering.de> References: <bbaeab100604081447x5f368d82qc242945e467dea7c@mail.gmail.com> <444F2FA4.7080406@ghaering.de> Message-ID: <bbaeab100604262109n49dd302clf1131fde063001e7@mail.gmail.com> On 4/26/06, Gerhard H?ring <gh at ghaering.de> wrote: > Brett Cannon wrote: > > OK, I am going to write the PEP I proposed a week or so ago, listing > > all modules and packages within the stdlib that are maintained > > externally so we have a central place to go for contact info or where > > to report bugs on issues. This should only apply to modules that want > > bugs reported outside of the Python tracker and have a separate dev > > track. People who just use the Python repository as their mainline > > version can just be left out. [...] > > I prefer to have bugs on the sqlite module reported in the pysqlite > tracker at http://pysqlite.org/ aka http://initd.org/tracker/pysqlite > > For bug fixes I have the same position as Fredrik: security fixes, bugs > that block the build and warnings from automatic checkers should be done > through the Python repository and I will port them to the external version. > > For any changes that reach "deeper" I'd like to have them handed over to me. > Done in my copy. That just leaves who is to be listed for in charge of the expat parser. 
> As it's unrealistic that all bugs are reported through the pysqlite > tracker, can it please be arranged that if somebody enters a SQLite > related bug through the Sourceforge tracker, that I get notified? > Perhaps by defining a category SQLite here and adding me as default > responsible, if that's possible on SF. > It is, although I can't do it. > Currently I'm not subscribed to python-checkins and didn't see a need > to. Is there a need to for Python core developers? I think there's no > better way except subscribing and defining a filter for SQLite-related > commits to be notified if other people commit changes to the SQLite > module in Python? > > It's not that I'm too lazy, but I'd like to keep the number of things I > need to monitor low. Understandable. I think setting up a filter is probably the best way. Can always get the digest to also cut down on the email as well. -Brett From brett at python.org Thu Apr 27 07:19:46 2006 From: brett at python.org (Brett Cannon) Date: Wed, 26 Apr 2006 23:19:46 -0600 Subject: [Python-Dev] PY_FORMAT_SIZE_T warnings on OS X In-Reply-To: <1f7befae0604261049s240b9d02te72061f4386c0410@mail.gmail.com> References: <bbaeab100604011543x6cfb76fcjc6902b59c9d020d7@mail.gmail.com> <1f7befae0604011559j19b25c20v4415ce15ede88082@mail.gmail.com> <bbaeab100604011640y47dfd95fi784fa21d4a51e034@mail.gmail.com> <1f7befae0604011722h639182a1s25472d0e8961b7a8@mail.gmail.com> <442F6F50.2030002@v.loewis.de> <bbaeab100604251912n5f7a06dcgd2e195283c761206@mail.gmail.com> <444F9CEB.9030303@v.loewis.de> <1f7befae0604261049s240b9d02te72061f4386c0410@mail.gmail.com> Message-ID: <bbaeab100604262219k674399dbv99efb15644124b69@mail.gmail.com> On 4/26/06, Tim Peters <tim.peters at gmail.com> wrote: > [Brett Cannon] > >> I created patch 1474907 with a fix for it. Checks if %zd works for > >> size_t and if so sets PY_FORMAT_SIZE_T to "z", otherwise just doesn't > >> set the macro def. > >> > >> Assigned to Martin to make sure I didn't foul it up, but pretty much > >> anyone could probably double-check it. > > [Martin v. L?wis] > > Unfortunately, SF stopped sending emails when you get assigned an issue, > > so I didn't receive any message when you created that patch. > > > > I now reviewed it, and think the test might not fail properly on systems > > where %zd isn't supported. > > I agree with Martin's patch comments. The C standards don't require > that printf fail if an unrecognized format gimmick is used (behavior > is explicitly "undefined" then). For example, > > """ > #include <stdio.h> > #include <stdlib.h> > > int main() > { > int i; > i = printf("%zd\n", (size_t)0); > printf("%d\n", i); > return 0; > } > """ > > prints "zd\n3\n" under all of MSVC 6.0, MSVC 7.1, and gcc (Cygwin) on > WinXP. Using sprintf instead and checking the string produced seems > much more likely to work. > > After the trunk freeze melts, I suggest just _trying_ stuff. The > buildbot system covers a bunch of platforms now, and when trying to > out-hack ill-defined C stuff "try it and see" is easier than thinking > <0.5 wink>. > I uploaded a new version that uses sprintf() and then does strncmp() on the result to see if it matches what is expected. That should be explicit enough to make sure that only a properly supported z modifier results in a successful test. If anyone has a chance to check it again before the trunk unfreezes, that would be great. 
-Brett From martin at v.loewis.de Thu Apr 27 07:20:47 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 27 Apr 2006 07:20:47 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060426230820.028976ea.dh@triple-media.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> <20060426230820.028976ea.dh@triple-media.com> Message-ID: <445054AF.9050702@v.loewis.de> Dennis Heuer wrote: > The reason why I'd like to use the long type as the base of my bitarray > type is that the long type is already implemented as an array and > working efficiently with large amounts of bytes. This is wrong; you are mixing up the "is-a" and "has-a" relationships. While it might be useful to have long as the *representation* of a bitarray, it's not the case that a bitarray *is* a long. A bitarray is a sequence type and should implement all aspects of the sequence protocol; long is a numeric type and should implement numeric operations. While the operators for these somewhat overlap (for + and *), the semantics of the operators never overlaps. So long and bitarray are completely distinct types, not subtypes of each other. IOW, you should do class bitarray: def __init__(self): self.value = 0L ... Regards, Martin From jcarlson at uci.edu Thu Apr 27 08:20:13 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Wed, 26 Apr 2006 23:20:13 -0700 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060426230820.028976ea.dh@triple-media.com> References: <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> <20060426230820.028976ea.dh@triple-media.com> Message-ID: <20060426231324.6714.JCARLSON@uci.edu> Dennis Heuer <dh at triple-media.com> wrote: > > OK, let's get back to the bitarray type. To first be clear about the > *type* of bitarray: The type I talk about is really a *bit*-array and > not a *bytewise*-array, as most developers seem to think of it. The > array doesn't provide the boolean manipulation of single bytes in the > array, it provides the manipulation of single bits or slices of bits in > a bit sequence of unlimited length. Think of it like of a data sequence > for a touring mashine (just bits). Rather than trying to force bitarrays into longs (which will actually tend to be much slower than necessary), you could use actual arrays (using 8, 16, or 32 bit integers), and just mask the final few bits as necessary. See this post: http://lists.copyleft.no/pipermail/pyrex/2005-October/001502.html for a sample implementation in Python. 
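(A rough sketch of the array-plus-masking approach described above; illustrative only, not taken from the linked post, and all names are invented.)

    from array import array

    class BitArray(object):
        """Fixed-size bit array packed into unsigned bytes."""
        def __init__(self, nbits):
            self.nbits = nbits
            # One byte holds eight bits; the final byte may be only partially used.
            self._bytes = array('B', [0] * ((nbits + 7) // 8))

        def __len__(self):
            return self.nbits

        def __getitem__(self, i):
            if not 0 <= i < self.nbits:
                raise IndexError(i)
            # Select the byte, shift the wanted bit down, keep only that bit.
            return (self._bytes[i >> 3] >> (i & 7)) & 1

        def __setitem__(self, i, bit):
            if not 0 <= i < self.nbits:
                raise IndexError(i)
            if bit:
                self._bytes[i >> 3] |= 1 << (i & 7)
            else:
                # Mask with 0xFF so the stored value stays in the 0-255 byte range.
                self._bytes[i >> 3] &= ~(1 << (i & 7)) & 0xFF

    b = BitArray(10)
    b[3] = 1
    assert b[3] == 1 and b[2] == 0

Only the byte containing a given bit is read or written, and memory stays proportional to the number of bits, which is the point of the masking scheme suggested above.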
- Josiah From p.f.moore at gmail.com Thu Apr 27 09:47:44 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 27 Apr 2006 08:47:44 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261624o6317b183v71c0e9ef196a0a01@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <ca471dc20604261624o6317b183v71c0e9ef196a0a01@mail.gmail.com> Message-ID: <79990c6b0604270047t361d537bj8aed399fc2bd48e2@mail.gmail.com> On 4/27/06, Guido van Rossum <guido at python.org> wrote: > On 4/26/06, Thomas Wouters <thomas at python.org> wrote: > > Of course, I only consider *my* reasons to be valid, and mine weren't > > knee-jerk or tool-related. I don't think Python should be going "Oh, what > > you wanted wasn't possible, but I think I know what you wanted, let me do it > > for you", first of all because it's not very Pythonic, and second of all > > because it doesn't lower the learning curve, it just delays some upward > > motion a little (meaning the curve may become steeper, later.) A clear > > warning, on the other hand, can be a helpful nudge towards the 'a-HA' > > moment. > > That still sounds like old-timer reasoning. Long ago we were very > close to defining a package as "a directory" -- with none of this > "must contain __init__.py or another *.py file" nonsense. IIRC the > decision to make __init__.py mandatory faced opposition too, since > people were already doing packages with just directories (which is > quite clean and elegant, and that's also how it was in Java), but I > added it after seeing a few newbies tear out their hair. OK. After due consideration, I'm happy to accept the change. (Call that +0, it doesn't bother me much personally either way). Although reading the above paragraph, I get the impression that you are saying that __init__.py was originally added to help newbies, and yet you are now saying the exact opposite. I'll leave you to resolve that particular contradiction, though... FWIW, I still have every confidence in your judgement about features. However, I'd have to say that your timing sucks :-) Your initial message read to me as "Quick! I'm about to get lynched here - can I have the OK to shove this change in before a2 goes out?" (OK, 2.5 isn't feature frozen until a3, so maybe you only meant a3, but you clearly wanted a quick response). So it's hard to expect anything other than immediate knee-jerk responses. And those are usually driven by personal experience (implying less consideration of newbie mistakes from this type of audience) and unfocused fear of breakage. Paul. 
From p.f.moore at gmail.com Thu Apr 27 09:55:52 2006 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 27 Apr 2006 08:55:52 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <79990c6b0604270047t361d537bj8aed399fc2bd48e2@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <ca471dc20604261624o6317b183v71c0e9ef196a0a01@mail.gmail.com> <79990c6b0604270047t361d537bj8aed399fc2bd48e2@mail.gmail.com> Message-ID: <79990c6b0604270055p719714d0h820040bc7a5aa423@mail.gmail.com> On 4/27/06, Paul Moore <p.f.moore at gmail.com> wrote: > However, I'd have to say that your timing sucks :-) Your initial > message read to me as "Quick! I'm about to get lynched here - can I > have the OK to shove this change in before a2 goes out?" And this just proves that my response wasn't anywhere near as considered as I thought. You explicitly said 2.6, which I'd forgotten. I have no problem with going with your instinct in 2.6. But I do believe that there may be more breakage than you have considered (PEP 302 experience talking here :-)) so get it in early rather than late! Of course this is all a bit moot now, as the decision seems to have been made. Sigh. Paul. From anthony at interlink.com.au Thu Apr 27 10:06:05 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 27 Apr 2006 18:06:05 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <79990c6b0604270047t361d537bj8aed399fc2bd48e2@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <ca471dc20604261624o6317b183v71c0e9ef196a0a01@mail.gmail.com> <79990c6b0604270047t361d537bj8aed399fc2bd48e2@mail.gmail.com> Message-ID: <200604271806.06116.anthony@interlink.com.au> On Thursday 27 April 2006 17:47, Paul Moore wrote: > FWIW, I still have every confidence in your judgement about > features. However, I'd have to say that your timing sucks :-) Your > initial message read to me as "Quick! I'm about to get lynched here > - can I have the OK to shove this change in before a2 goes out?" > (OK, 2.5 isn't feature frozen until a3, so maybe you only meant a3, > but you clearly wanted a quick response). Feature freeze is the day we release beta1. New features go in until then - after that, not without approval and a general concensus that the changes have a substantial cost/benefit for breaking the feature freeze. Or if Guido gets Google developers parading him in effigy around the office and needs to get them off his back. <wink> Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. From talin at acm.org Thu Apr 27 10:44:18 2006 From: talin at acm.org (Talin) Date: Thu, 27 Apr 2006 08:44:18 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Dropping_=5F=5Finit=5F=5F=2Epy_requirement?= =?utf-8?q?_for_subpackages?= References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <loom.20060427T103044-592@post.gmane.org> Guido van Rossum <guido <at> python.org> writes: > The requirement that a directlry must contain an __init__.py file > before it is considered a valid package has always been controversial. 
> It's designed to prevent the existence of a directory with a common > name like "time" or "string" from preventing fundamental imports to > work. But the feature often trips newbie Python programmers (of which > there are a lot at Google, at our current growth rate we're probably > training more new Python programmers than any other organization > worldwide . Might I suggest an alternative? I too find it cumbersome to have to litter my directory tree with __init__.py iles. However, rather than making modules be implicit ("Explicit is better than implicit"), I would rather see a more powerful syntax for declaring modules. Specifically, what I would like to see is a way for a top-level __init__.py to explicitly list which subdirectories are modules. Rather than spreading that information over a bunch of different __init__.py files, I would much rather have the information be centralized in a single text file for the whole package. Just as we have an __all__ variable that indicates which symbols are to be exported, we could have a '__submodules__' array which explicitly calls out the list of submodule directory names. Or perhaps more simply, just have some code in the top-level __init__.py that creates (but does not load) the module objects for the various sub-modules. The presence of "__init__.py" could perhaps also indicate the root of a *standalone* module tree; submodules that don't have their own __init__.py, but which are declared indirectly via an ancestor are assumed by convention to be 'component' modules which are not intended to operate outside of their parent. (In other words, the presence of the __init__.py signals that that tree is separately exportable.) I'm sure that someone who is familiar with the import machinery could whip up something like this in a matter of minutes. As far as the "compatibility with tools" argument goes, I say, break em :) Those tool programmers need a hobby anyway :) -- Talin From vys at renet.ru Thu Apr 27 11:01:26 2006 From: vys at renet.ru (Vladimir 'Yu' Stepanov) Date: Thu, 27 Apr 2006 13:01:26 +0400 Subject: [Python-Dev] binary trees. Review obmalloc.c In-Reply-To: <20060426094148.66EE.JCARLSON@uci.edu> References: <20060424114024.66D0.JCARLSON@uci.edu> <444F1E36.30302@renet.ru> <20060426094148.66EE.JCARLSON@uci.edu> Message-ID: <44508866.60503@renet.ru> Josiah Carlson wrote: > "Vladimir 'Yu' Stepanov" <vys at renet.ru> wrote: > >> Josiah Carlson wrote: >> >>> There exists various C and Python implementations of both AVL and >>> Red-Black trees. For users of Python who want to use AVL and/or >>> Red-Black trees, I would urge them to use the Python implementations. >>> In the case of *needing* the speed of a C extension, there already >>> exists a CPython extension module for AVL trees: >>> http://www.python.org/pypi/pyavl/1.1 >>> >>> I would suggest you look through some suggested SoC projects in the >>> wiki: >>> http://wiki.python.org/moin/SummerOfCode >>> >>> - Josiah >>> >>> >>> >> Thanks for the answer! >> >> I already saw pyavl-1.1. But for this reason I would like to see the module >> in a standard package python. Functionality for pyavl and dict to compare >> difficultly. Functionality of my module will differ from functionality dict >> in the best party. I have lead some tests on for work with different types >> both for a package pyavl-1.1, and for the prototype of own module. The >> script >> of check is resulted in attached a file avl-timeit.py In files >> timeit-result-*-*.txt results of this test. 
The first in the name of a file >> means quantity of the added elements, the second - argument of a method >> timeit. There it is visible, that in spite of the fact that the module >> xtree >> is more combined in comparison with pyavl the module (for everyone again >> inserted pair [the key, value], is created two elements: python object - >> pair, >> and an internal element of a tree), even in this case it works a little bit >> more quickly. Besides the module pyavl is unstable for work in multithread >> appendices (look the attached file module-avl-is-thread-unsafe.py). >> > > I'm not concerned about the speed of the external AVL module, and I'm > not concerned about the speed of trees in Python; so far people have > found that dictionaries are generally sufficient. More specifically, > while new lambda syntaxes are presented almost weekly, I haven't heard > anyone complain about Python's lack of a tree module in months. As a > result, I don't find the possible addition of any tree structure to the > collections module as being a generally compelling addition. > The object dict is irreplaceable in most cases, it so. I do not assume, that binary trees can sometime replace standard `dict' object. Sorting of the list also is and will be irreplaceable function. But there is a number of problems which can be rather easily solved by means of binary trees. Most likely for them there will be an alternative. But, as a rule, it is less obvious to the programmer. And most likely realization will be not so fast. Probably that nobody mentioned necessity of binary trees. In my opinion it is not a reason in an estimation of necessity of the given type. To take for example PHP. The array and the dictionary there is realized as the list. There is no even a hint on hash object. Those users to whom was necessary hash already have chosen perl/python/ruby/etc. The similar situation can arise and concerning binary trees. > Again, I believe that the existance of 3rd party extension modules which > implement AVL and red-black trees, regardless of their lack of thread > safety, slower than your module by a constant, or lack of particular > functionality to be basically immaterial to this discussion. In my mind, > there are only two relevant items to this discussion: > > 1. Is having a tree implementation (AVL or Red-Black) important, and if > so, is it important enough to include in the collections module? > 2. Is a tree implementation important enough for google to pay for its > inclusion, given that there exists pyavl and a collectionsmodule.c patch > for a red-black tree implementation? > > Then again, you already have your own implementation of a tree module, > and it seems as though you would like to be paid for this already-done > work. I don't know how google feels about such things, but you should > remember to include this information in your application. > > I have looked a patch for collectionsmodule.c. In my opinion it is fine realization of binary trees. If there is this class that to me there is no need to create one more copy. My persistence speaks a demand of this type of data. As to my message on the beginning of the project for SoC I ask to consider that this theme has not been lifted :) For me it was a minor question. >> I think, that creation of this type (together with type of pair), will make >> programming more convenient since sorting by function sort will be required >> less often. >> > > How often are you sorting data? 
I've found that very few of my > operations involve sorting data of any kind, and even if I did have need > of sorting, Python's sorting algorithm is quite fast, likely faster by > nearly a factor of 2 (in the number of comparisons) than the on-line > construction of an AVL or Red-Black tree. > Really it is not frequent. Only when there is one of problems which is difficult for solving by means of already available means, whether means it that I should look aside C, C ++? It not seems to me that presence of structures of self-balanced binary trees are distinctive feature of any language. Comparison of functions of sorting and binary trees not absolutely correctly. I think that function sort will lose considerably on greater lists. Especially after an insert or removal of all one element. >> I can probably borrow in this module beyond the framework of the project >> google. The demand of such type of data is interesting. Because of >> necessity >> of processing `gcmodule.c' and `obmalloc.c' this module cannot be realized >> as the external module. >> > > It seems to me (after checking collectionsmodule.c and a few others) > that proper C modules only seem to import Python.h and maybe > structmember.h . If you are manually including gcmodule.c and obmalloc.c, > you may be doing something wrong; I'll leave it to others to further > comment on this aspect. > Thanks that at once has not shot me :) I was not going to work directly neither with that, nor with another. There were difficulties with translation :) When I discussed obmalloc.c, I meant its internal code. Ways of its improvement: * To adapt allocation of blocks of memory with other alignment. Now alignment is rigidly set on 8 bytes. As a variant, it is possible to use alignment on 4 bytes. And this value can be set at start of the interpreter through arguments/variable environments/etc. At testing with alignment on 4 or 8 bytes difference in speed of work was not appreciable. * To expand the maximal size which can be allocated by means of the given module. Now the maximal size is 256 bytes. * At the size of the allocated memory close to maximal, the allocation of blocks becomes inefficient. For example, for the sizes of blocks 248 and 256 (blocksize), values of quantity of elements (PAGESIZE/blocksize) it is equal 16. I.e. it would be possible to use for the sizes of the block 248 same page, as for the size of the block 256. From g.brandl at gmx.net Thu Apr 27 11:16:16 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 27 Apr 2006 11:16:16 +0200 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <44473F32.3020700@v.loewis.de> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> <44473F32.3020700@v.loewis.de> Message-ID: <e2q250$57n$1@sea.gmane.org> Martin v. L?wis wrote: > Fredrik Lundh wrote: >> you do wonderful stuff, and then you post the announcement as a >> followup to a really old message, to make sure that people using a >> threaded reader only stumbles upon this by accident... this should >> be on the python.org frontpage! > > I also wonder what the actions should be for the Windows release. > > Are these "contributed" to Python? With work of art, I'm particular > cautious to include them without a specific permission of the artist, > and licensing terms under which to use them. > > And then, technically: I assume The non-vista versions should be > included the subversion repository, and the vista versions ignored? > Or how else should that work? I would say yes. 
Vista isn't out there yet, and the icons can always be updated in 2.6. Georg From simon.dahlbacka at gmail.com Thu Apr 27 11:22:22 2006 From: simon.dahlbacka at gmail.com (Simon Dahlbacka) Date: Thu, 27 Apr 2006 12:22:22 +0300 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <e2q250$57n$1@sea.gmane.org> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> <44473F32.3020700@v.loewis.de> <e2q250$57n$1@sea.gmane.org> Message-ID: <57124720604270222o13f29d56m8d15901f070bdd17@mail.gmail.com> On 4/27/06, Georg Brandl <g.brandl at gmx.net> wrote: > > Martin v. L?wis wrote: > > Fredrik Lundh wrote: > >> you do wonderful stuff, and then you post the announcement as a > >> followup to a really old message, to make sure that people using a > >> threaded reader only stumbles upon this by accident... this should > >> be on the python.org frontpage! > > > > I also wonder what the actions should be for the Windows release. > > > > Are these "contributed" to Python? With work of art, I'm particular > > cautious to include them without a specific permission of the artist, > > and licensing terms under which to use them. > > > > And then, technically: I assume The non-vista versions should be > > included the subversion repository, and the vista versions ignored? > > Or how else should that work? > > I would say yes. Vista isn't out there yet, and the icons can always be > updated in 2.6. > <delurk> OTOH, the ETA for Vista is "just after" 2.5 release (end of 2006 for OEM:s, beginning of 2007 for customers), long before 2.6 That said, I don't have any strong preferences either way. (..but I do have a x64 Vista machine running ATM) ..just my ?0.02 </delurk> -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/7dda7e8c/attachment.html From fredrik at pythonware.com Thu Apr 27 11:30:49 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Thu, 27 Apr 2006 11:30:49 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <e2q308$89t$1@sea.gmane.org> Guido van Rossum wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. I'm going to skip the discussion thread (or is it a flame war? 
cannot tell from the thread pattern), but here are my votes: +0 on dropping the requirement for subpackages in 2.X (this will break tools, but probably not break much code, and fixing the tools should be straightforward) (+1 on dropping it in 3.X) -1 on dropping the requirement for packages in 2.X (-0 on dropping it in 3.X) +0 on making this change in 2.5 </F> From dh at triple-media.com Thu Apr 27 12:14:26 2006 From: dh at triple-media.com (Dennis Heuer) Date: Thu, 27 Apr 2006 12:14:26 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <445054AF.9050702@v.loewis.de> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> <20060426230820.028976ea.dh@triple-media.com> <445054AF.9050702@v.loewis.de> Message-ID: <20060427121426.cd1f7bd7.dh@triple-media.com> The real misunderstanding lies somewhere else. I thought that the bitarray's instance would have more control over the long type's instance, like with the mutable types. I thought that the long type's superclass would provide methods similar to __setitem__ that would allow the bitarray instance to even *refresh* (or substitute) the long instance in place. The result would be a freshly created long instance substituting the old one. But the immuntable types do not offer such a feature because one cannot substitute the long instance without breaking the bitarray instance too. On Thu, 27 Apr 2006 07:20:47 +0200 "Martin v. L?wis" <martin at v.loewis.de> wrote: > Dennis Heuer wrote: > > The reason why I'd like to use the long type as the base of my bitarray > > type is that the long type is already implemented as an array and > > working efficiently with large amounts of bytes. > > This is wrong; you are mixing up the "is-a" and "has-a" relationships. > > While it might be useful to have long as the *representation* of a > bitarray, it's not the case that a bitarray *is* a long. A bitarray > is a sequence type and should implement all aspects of the sequence > protocol; long is a numeric type and should implement numeric > operations. While the operators for these somewhat overlap (for + and > *), the semantics of the operators never overlaps. So long and > bitarray are completely distinct types, not subtypes of each other. > > IOW, you should do > > class bitarray: > def __init__(self): > self.value = 0L > ... > > Regards, > Martin > > From ncoghlan at gmail.com Thu Apr 27 12:43:51 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 27 Apr 2006 20:43:51 +1000 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <4450A067.20200@gmail.com> Guido van Rossum wrote: > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. I haven't scanned this whole thread yet, but my first thought was to just try to find a way to give a better error message if we find a candidate package directory, but there's no __init__.py file. i.e. 
something like: ImportError: __init__.py not found in directory '<whatever>/foo' for package 'foo' I like the fact that __init__.py documents, right there in the file system directory listing, that the current directory is a Python package. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From anthony at python.org Thu Apr 27 14:50:25 2006 From: anthony at python.org (Anthony Baxter) Date: Thu, 27 Apr 2006 22:50:25 +1000 Subject: [Python-Dev] RELEASED Python 2.5 (alpha 2) Message-ID: <200604272250.37293.anthony@python.org> On behalf of the Python development team and the Python community, I'm happy to announce the second alpha release of Python 2.5. This is an *alpha* release of Python 2.5. As such, it is not suitable for a production environment. It is being released to solicit feedback and hopefully discover bugs, as well as allowing you to determine how changes in 2.5 might impact you. If you find things broken or incorrect, please log a bug on Sourceforge. In particular, note that changes to improve Python's support of 64 bit systems might require authors of C extensions to change their code. More information (as well as source distributions and Windows installers) are available from the 2.5 website: http://www.python.org/2.5/ Since the first alpha, a host of bug fixes and smaller new features have been added. See the release notes (available from the 2.5 webpage) for more. The plan from here is for either one more alpha release, or (more likely) moving to the beta releases, then moving to a 2.5 final release around August. PEP 356 includes the schedule and will be updated as the schedule evolves. The new features in Python 2.5 are described in Andrew Kuchling's What's New In Python 2.5. It's available from the 2.5 web page. Amongst the language features added include conditional expressions, the with statement, the merge of try/except and try/finally into try/except/finally, enhancements to generators to produce a coroutine kind of functionality, and a brand new AST-based compiler implementation. New modules added include hashlib, ElementTree, sqlite3 and ctypes. In addition, a new profiling module cProfile was added. In addition, in the second alpha we have the new 'mailbox' module (a product of last years Google Summer of Code). Enjoy this new release, Anthony Anthony Baxter anthony at python.org Python Release Manager (on behalf of the entire python-dev team) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url : http://mail.python.org/pipermail/python-dev/attachments/20060427/7f603d50/attachment.pgp From g.brandl at gmx.net Thu Apr 27 15:05:20 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 27 Apr 2006 15:05:20 +0200 Subject: [Python-Dev] what do you like about other trackers and what do you hate about SF? In-Reply-To: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> References: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> Message-ID: <e2qfig$i3b$1@sea.gmane.org> Brett Cannon wrote: > I am starting to hash out what the Call for Trackers is going to say > on the Infrastructure mailing list. 
Laura Creighton suggested we have > a list of features that we would like to see and what we all hate > about SF so as to provide some guidelines in terms of how to set up > the test trackers that people try to sell us on. > > So, if you could, please reply to this message with ONE thing you have > found in a tracker other than SF that you have liked (especially > compared to SF) and ONE thing you dislike/hate about SF's tracker. I > will use the replies as a quick-and-dirty features list of stuff that > we would like to see demonstrated in the test trackers. I'd like to have an ability to interconnect bugs (this patch fixes this bug, this bug is related to this other bug, etc.) with automatic closing of dependents etc. In SF, I hate that there is no possibility to close a bug with a checkin. And that SF displays only the full name of an item submitter, but not commenters. Georg From skip at pobox.com Thu Apr 27 15:18:50 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 27 Apr 2006 08:18:50 -0500 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <17488.50362.230241.860848@montanaro.dyndns.org> Guido> One particular egregious problem is that *subpackage* are subject Guido> to the same rule. It so happens that there is essentially only Guido> one top-level package in the Google code base, and it already has Guido> an __init__.py file. But developers create new subpackages at a Guido> frightening rate, and forgetting to do "touch __init__.py" has Guido> caused many hours of lost work, not to mention injuries due to Guido> heads banging against walls. That's why God created make: install: touch __init__.py blah blah blah Skip From skip at pobox.com Thu Apr 27 15:21:09 2006 From: skip at pobox.com (skip at pobox.com) Date: Thu, 27 Apr 2006 08:21:09 -0500 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <200604261925.12302@news.perlig.de> <ca471dc20604261044s577d4bb1i6c34ed2bb67ffc08@mail.gmail.com> Message-ID: <17488.50501.618413.395045@montanaro.dyndns.org> >> Not that it would count in any way, but I'd prefer to keep it. How >> would I mark a subdirectory as "not-a-package" otherwise? Guido> What's the use case for that? Have you run into this requirement? Yes, we run into it. We typically install a package with any resources in a resources subdirectory. Now, those resources subdirectories generally don't contain Python files (Glade files are the most frequent occupants), but there's no reason they couldn't contain Python files. Skip From anthony at interlink.com.au Thu Apr 27 15:21:05 2006 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu, 27 Apr 2006 23:21:05 +1000 Subject: [Python-Dev] trunk is UNFROZEN Message-ID: <200604272321.07144.anthony@interlink.com.au> The release is done. The trunk is now unfrozen. Thanks, Anthony -- Anthony Baxter <anthony at interlink.com.au> It's never too late to have a happy childhood. 
From gjcarneiro at gmail.com Thu Apr 27 15:42:03 2006 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Thu, 27 Apr 2006 14:42:03 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> Message-ID: <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> On 4/26/06, Guido van Rossum <guido at python.org> wrote: [...] > So I have a very simple proposal: keep the __init__.py requirement for > top-level pacakages, but drop it for subpackages. This should be a > small change. I'm hesitant to propose *anything* new for Python 2.5, > so I'm proposing it for 2.6; if Neal and Anthony think this would be > okay to add to 2.5, they can do so. Damn these threads are so quick they are born and die off in 24 hours and don't give enough time for people to comment :( I'm a bit late in the thread, but I'm +1 (for 2.5) for the following reason. There are mainly two use cases for having a couple of modules foo.bar, and foo.zbr: 1. foo.bar and foo.zbr are both part of the same _package_, and are distributed together; 2. foo.bar and foo.zbr are two independent modules, distributed separately, but which share a common 'foo' _namespace_, to denote affiliation with a project. The use case #1 is arguably more common, but use case #2 is also very relevant. It happens for a lot of GNOME python bindings, for example, where we used to have gnome, gnome.ui, gnome.vfs, gnome.applet, etc. Now the problem. Suppose you have the source package python-foo-bar, which installs $pythondir/foo/__init__.py and $pythondir/foo/bar.py. This would make a module called "foo.bar" available. Likewise, you can have the source package python-foo-zbr, which installs $pythondir/foo/__init__.py and $pythondir/foo/zbr.py. This would make a module called "foo.zbr" available. The two packages above install the file $pythondir/foo/__init__.py. If one of them adds some content to __init__.py, the other one will overwrite it. Packaging these two packages for e.g. debian would be extremely difficult, because no two .deb packages are allowed to install the same file. One solution is to generate the __init__.py file with post-install hooks and shell scripts. Another solution would be for example to have only python-foo-bar install the __init__.py file, but then python-foo-zbr would have to depend on python-foo-bar, while they're not really related. I hope I made the problem clear enough, and I hope people find this a compelling argument in favour of eliminating the need for __init__.py. Regards, Gustavo Carneiro. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/0dfbd77a/attachment.htm From dh at triple-media.com Thu Apr 27 16:01:43 2006 From: dh at triple-media.com (Dennis Heuer) Date: Thu, 27 Apr 2006 16:01:43 +0200 Subject: [Python-Dev] A better and more basic array type Message-ID: <20060427160143.57e89fd5.dh@triple-media.com> Yes, this was previously "inheriting basic types more efficiently" but now I want something different ;) I looked at the array type and found it quite C-ish. It is also not suited for arithmetics because it's a sequence type like a constrained list and not efficiently (and comfortably) usable like a *sliceable* integer.
I'd rather like to find a basic built-in array type in python, which doesn't even ask for the format but just deals with the technical issues internally. This array type works like the long type but is mutable and supports *slicing* (focusing on a bit-range in the presentation of the number). The advantage of this array is that it is fully supporting arithmetics--even on slices--, that there is no need for boundary-checking, and that this mutable array type is actually usable as a superclass for new types. The other type of array, the constrained list, is inclusive: x = array() x += 255 # x now holds the number 255 or, in other words, the first # 8 bits are set to 1 x += "1010" # x interprets "1010" as a bit-pattern and adds 10 x.step(8) # telling x that, from now on, a *unit* is 8-bit long # (for correct slicing and iteration; this is not # related to the real/internal array layout) x[:] = 0 # clearing all bits or setting x to zero f = open("binary file", "rb") x += f.read() for n in x: # n holds always the next 8 bit (a byte) <do something with n> y = x[4:6] # stores the fifth and the sixth byte in y x.step(1) # telling x that, from now on, a *unit* is 1-bit long z = x[4:6] # stores the fifth and the sixth bit in z x[0:2] = z # overwrites the first two bits in x with the two bits in z Possibly the step attribute can become problematic and it's better to define a fix step at initialization. Smaller bit-packages from otherwise compatible types could be packed up with trailing zeroes. I hope that somebody agrees and is already starting to implement this new array type. My best wishes are with you. From bclum at cs.ucsd.edu Thu Apr 27 15:09:40 2006 From: bclum at cs.ucsd.edu (Brian C. Lum) Date: Thu, 27 Apr 2006 06:09:40 -0700 (PDT) Subject: [Python-Dev] Type-Def-ing Python Message-ID: <Pine.LNX.4.64.0604270607370.30352@csegrad.ucsd.edu> Dear Python Community, I have been trying to research how to type-def python. I want to type-def python so that I can use it as a static analyzer to check for bugs. I have been going over research on this field and I came upon Brett Cannon's thesis in which he tweaks the compiler and shows that type-defing python would not help the compiler achieve a 5% performace increase. Brett Cannon, "Localized Type Inference of Atomic Types in Python": http://www.ocf.berkeley.edu/~bac/thesis.pdf I was wondering if anyone could help me contact him so that I could might ask him for his source code and try to use type-defing as a bug-finder. With thanks, Brian From bh at intevation.de Thu Apr 27 15:48:01 2006 From: bh at intevation.de (Bernhard Herzog) Date: Thu, 27 Apr 2006 15:48:01 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> (Gustavo Carneiro's message of "Thu, 27 Apr 2006 14:42:03 +0100") References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> Message-ID: <s9zejzji0z2.fsf@thetis.intevation.de> "Gustavo Carneiro" <gjcarneiro at gmail.com> writes: > Now the problem. Suppose you have the source package python-foo-bar, > which installs $pythondir/foo/__init__.py and $pythondir/foo/bar.py. This > would make a module called "foo.bar" available. Likewise, you can have the > source package python-foo-zbr, which installs $pythondir/foo/__init__.py and > $pythondir/foo/zbr.py. This would make a module called "foo.zbr" available. 
> > The two packages above install the file $pythondir/foo/__init__.py. If > one of them adds some content to __init__.py, the other one will overwrite > it. Packaging these two packages for e.g. debian would be extremely > difficult, because no two .deb packages are allowed to intall the same file. > > One solution is to generate the __init__.py file with post-install hooks > and shell scripts. Another solution would be for example to have only > python-foo-bar install the __init__.py file, but then python-foo-zbr would > have to depend on python-foo-bar, while they're not really related. Yet another solution would be to put foo/__init__.py into a third package, e.g. python-foo, on which both python-foo-bar and python-foo-zbr depend. Bernhard -- Intevation GmbH http://intevation.de/ Skencil http://skencil.org/ Thuban http://thuban.intevation.org/ From thomas at python.org Thu Apr 27 16:39:15 2006 From: thomas at python.org (Thomas Wouters) Date: Thu, 27 Apr 2006 16:39:15 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> Message-ID: <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> On 4/27/06, Thomas Wouters <thomas at python.org> wrote: > > Alrighty then. The list has about 12 hours to convince me (and you) that > it's a bad idea to generate that warning. I'll be asleep by the time the > trunk un-freezes, and I have a string of early meetings tomorrow. I'll get > to it somewhere in the afternoon :) > I could check it in, except the make-testall I ran overnight showed a small problem: the patch would generate a number of spurious warnings in the trunk: /home/thomas/python/python/trunk/Lib/gzip.py:9: ImportWarning: Not importing directory '/home/thomas/python/python/trunk/Modules/zlib': missing __init__.py /home/thomas/python/python/trunk/Lib/ctypes/__init__.py:8: ImportWarning: Not importing directory '/home/thomas/python/python/trunk/Modules/_ctypes': missing __init__.py (and a few more zlib ones.) The reason for that is that ./Modules is added to the import path, by a non-installed Python. This is because of the pre-distutils Modules/Setup-style build method of modules (which is still sometimes used.) I can't find where Modules is added to sys.path, though, even if I wanted to remove it :) So, do we: a) forget about the warning because of the layout of the svn tree (bad, imho) 2) rename Modules/zlib and Modules/_ctypes to avoid the warning (inconvenient, but I don't know how inconvenient) - fix the build procedure so Modules isn't added to sys.path unless it absolutely has to (which is only very rarely the case, I believe) or lastly, make regrtest.py ignore those specific warnings? -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/81fe22c9/attachment.htm From pje at telecommunity.com Thu Apr 27 16:50:39 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Thu, 27 Apr 2006 10:50:39 -0400 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <s9zejzji0z2.fsf@thetis.intevation.de> References: <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> Message-ID: <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> At 03:48 PM 4/27/2006 +0200, Bernhard Herzog wrote: >"Gustavo Carneiro" <gjcarneiro at gmail.com> writes: > > > Now the problem. Suppose you have the source package python-foo-bar, > > which installs $pythondir/foo/__init__.py and $pythondir/foo/bar.py. This > > would make a module called "foo.bar" available. Likewise, you can have the > > source package python-foo-zbr, which installs > $pythondir/foo/__init__.py and > > $pythondir/foo/zbr.py. This would make a module called "foo.zbr" > available. > > > > The two packages above install the file $pythondir/foo/__init__.py. If > > one of them adds some content to __init__.py, the other one will overwrite > > it. Packaging these two packages for e.g. debian would be extremely > > difficult, because no two .deb packages are allowed to intall the same > file. > > > > One solution is to generate the __init__.py file with post-install hooks > > and shell scripts. Another solution would be for example to have only > > python-foo-bar install the __init__.py file, but then python-foo-zbr would > > have to depend on python-foo-bar, while they're not really related. > >Yet another solution would be to put foo/__init__.py into a third >package, e.g. python-foo, on which both python-foo-bar and >python-foo-zbr depend. Or you can package them with setuptools, and declare foo to be a namespace package. If installing in the mode used for building RPMs and debs, there will be no __init__.py. Instead, each installs a .pth file that ensures a dummy package object is created in sys.modules with an appropriate __path__. This solution is packaging-system agnostic and doesn't require any special support from the packaging tool. (The downside, however, is that neither foo.bar nor foo.zbr's __init__.py will be allowed to have any content, since in some installation scenarios there will be no __init__.py at all.) From gjcarneiro at gmail.com Thu Apr 27 17:14:58 2006 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Thu, 27 Apr 2006 16:14:58 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <s9zejzji0z2.fsf@thetis.intevation.de> <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> Message-ID: <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> On 4/27/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > At 03:48 PM 4/27/2006 +0200, Bernhard Herzog wrote: > >"Gustavo Carneiro" <gjcarneiro at gmail.com> writes: > > > > > Now the problem. Suppose you have the source package > python-foo-bar, > > > which installs $pythondir/foo/__init__.py and > $pythondir/foo/bar.py. This > > > would make a module called "foo.bar" available. Likewise, you can > have the > > > source package python-foo-zbr, which installs > > $pythondir/foo/__init__.py and > > > $pythondir/foo/zbr.py. This would make a module called "foo.zbr" > > available. 
> > > > > > The two packages above install the file > $pythondir/foo/__init__.py. If > > > one of them adds some content to __init__.py, the other one will > overwrite > > > it. Packaging these two packages for e.g. debian would be extremely > > > difficult, because no two .deb packages are allowed to intall the same > > file. > > > > > > One solution is to generate the __init__.py file with post-install > hooks > > > and shell scripts. Another solution would be for example to have only > > > python-foo-bar install the __init__.py file, but then python-foo-zbr > would > > > have to depend on python-foo-bar, while they're not really related. > > > >Yet another solution would be to put foo/__init__.py into a third > >package, e.g. python-foo, on which both python-foo-bar and > >python-foo-zbr depend. You can't be serious. One package just to install a __init__.py file? Or you can package them with setuptools, and declare foo to be a namespace > package. Let's not assume setuptools are always used, shall we? If installing in the mode used for building RPMs and debs, there > will be no __init__.py. Instead, each installs a .pth file that ensures a > dummy package object is created in sys.modules with an appropriate > __path__. This solution is packaging-system agnostic and doesn't require > any special support from the packaging tool. I don't understand this solution. How can a .pth file create a 'dummy package'? Remember that the objective is to have "foo.bar" and "foo.zbr" modules, not just "bar" and "zbr". But in any case, it already sounds like a convoluted solution. No way it can beat the obvious/simple solution: to remove the need to have __init__.py in the first place. (The downside, however, is that neither foo.bar nor foo.zbr's __init__.py > will be allowed to have any content, since in some installation scenarios > there will be no __init__.py at all.) That's ok in the context of this proposal (not having __init__.py at all). -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/bb6f3ff6/attachment.html From thomas at python.org Thu Apr 27 17:31:09 2006 From: thomas at python.org (Thomas Wouters) Date: Thu, 27 Apr 2006 17:31:09 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <s9zejzji0z2.fsf@thetis.intevation.de> <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> Message-ID: <9e804ac0604270831s43da3fe4lbf91db7c710a5dfa@mail.gmail.com> On 4/27/06, Gustavo Carneiro <gjcarneiro at gmail.com> wrote: > On 4/27/06, Phillip J. Eby <pje at telecommunity.com> wrote: > > > > At 03:48 PM 4/27/2006 +0200, Bernhard Herzog wrote: > > >"Gustavo Carneiro" <gjcarneiro at gmail.com> writes: > > > > > > > Now the problem. Suppose you have the source package > > python-foo-bar, > > > > which installs $pythondir/foo/__init__.py and > > $pythondir/foo/bar.py. This > > > > would make a module called "foo.bar" available. Likewise, you can > > have the > > > > source package python-foo-zbr, which installs > > > $pythondir/foo/__init__.py and > > > > $pythondir/foo/zbr.py. This would make a module called "foo.zbr" > > > available. 
> > > > > > > > The two packages above install the file > > $pythondir/foo/__init__.py. If > > > > one of them adds some content to __init__.py, the other one will > > overwrite > > > > it. Packaging these two packages for e.g. debian would be extremely > > > > difficult, because no two .deb packages are allowed to intall the > > same file. > > > > > >Yet another solution would be to put foo/__init__.py into a third > > >package, e.g. python-foo, on which both python-foo-bar and > > >python-foo-zbr depend. > > > You can't be serious. One package just to install a __init__.py file? > Sure. Have you counted the number of 'empty' packages in Debian lately? Besides, Guido's original proposal is not a fix for your problem, either; he only proposes to change the requirement for *sub*packages. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/71af164c/attachment.htm From gjcarneiro at gmail.com Thu Apr 27 17:42:03 2006 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Thu, 27 Apr 2006 16:42:03 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604270831s43da3fe4lbf91db7c710a5dfa@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <s9zejzji0z2.fsf@thetis.intevation.de> <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> <9e804ac0604270831s43da3fe4lbf91db7c710a5dfa@mail.gmail.com> Message-ID: <a467ca4f0604270842g1ee95d8nce3827267567dcc1@mail.gmail.com> On 4/27/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/27/06, Gustavo Carneiro <gjcarneiro at gmail.com> wrote: > > > On 4/27/06, Phillip J. Eby < pje at telecommunity.com> wrote: > > > > > At 03:48 PM 4/27/2006 +0200, Bernhard Herzog wrote: > > >"Gustavo Carneiro" <gjcarneiro at gmail.com > writes: > > > > > > > Now the problem. Suppose you have the source package > > python-foo-bar, > > > > which installs $pythondir/foo/__init__.py and > > $pythondir/foo/bar.py. This > > > > would make a module called "foo.bar" available. Likewise, you can > > have the > > > > source package python-foo-zbr, which installs > > > $pythondir/foo/__init__.py and > > > > $pythondir/foo/zbr.py. This would make a module called "foo.zbr" > > > available. > > > > > > > > The two packages above install the file > > $pythondir/foo/__init__.py. If > > > > one of them adds some content to __init__.py, the other one will > > overwrite > > > > it. Packaging these two packages for e.g. debian would be extremely > > > > difficult, because no two .deb packages are allowed to intall the > > same file. > > > > > >Yet another solution would be to put foo/__init__.py into a third > > >package, e.g. python-foo, on which both python-foo-bar and > > >python-foo-zbr depend. > > > > You can't be serious. One package just to install a __init__.py file? > > Sure. Have you counted the number of 'empty' packages in Debian lately? > Sure. That is already a problem; let's not make it a worse problem for no good reason; I haven't heard a good reason to keep the __init__.py file besides backward compatibility concerns. Besides, Guido's original proposal is not a fix for your problem, either; he > only proposes to change the requirement for *sub*packages. 
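[The distinction being drawn here, with illustrative file names: under the original proposal only the *sub*package markers become optional, so in a layout like

    foo/__init__.py       (still required for 'foo' itself)
    foo/bar/stuff.py      (foo/bar/__init__.py no longer needed)
    foo/zbr/other.py      (foo/zbr/__init__.py no longer needed)

the top-level foo/__init__.py stays.]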
> It *is* a solution for my problem. I don't need the __init__.py file for anything, since I don't need anything defined in the the 'foo' namespace, only the subpackages foo.bar and foo.zbr. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/802a20f6/attachment.html From thomas at python.org Thu Apr 27 17:49:23 2006 From: thomas at python.org (Thomas Wouters) Date: Thu, 27 Apr 2006 17:49:23 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <a467ca4f0604270842g1ee95d8nce3827267567dcc1@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <s9zejzji0z2.fsf@thetis.intevation.de> <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> <9e804ac0604270831s43da3fe4lbf91db7c710a5dfa@mail.gmail.com> <a467ca4f0604270842g1ee95d8nce3827267567dcc1@mail.gmail.com> Message-ID: <9e804ac0604270849r32e3d3fhcc18a04a9c9c0f1f@mail.gmail.com> On 4/27/06, Gustavo Carneiro <gjcarneiro at gmail.com> wrote: > Besides, Guido's original proposal is not a fix for your problem, either; > > he only proposes to change the requirement for *sub*packages. > > > > It *is* a solution for my problem. I don't need the __init__.py file > for anything, since I don't need anything defined in the the 'foo' > namespace, only the subpackages foo.bar and foo.zbr . > ... No. Guido's original proposal is not a fix for your problem, because *it doesn't affect the 'foo' namespace*. Guido's original proposal still requires foo/__init__.py for your namespace to work, it just makes foo/bar/__init__.py and foo/zbr/__init__.py optional. -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/cd4a0488/attachment.htm From janssen at parc.com Thu Apr 27 17:57:03 2006 From: janssen at parc.com (Bill Janssen) Date: Thu, 27 Apr 2006 08:57:03 PDT Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: Your message of "Thu, 27 Apr 2006 02:16:16 PDT." <e2q250$57n$1@sea.gmane.org> Message-ID: <06Apr27.085708pdt."58641"@synergy1.parc.xerox.com> By the way, check out the new Python/Mac iconography that Jacob Rus has put together (with lots of advice from others :-), at http://hcs.harvard.edu/~jrus/python/prettified-py-icons.png. Tim Parkin's new logo sure started something. 
Bill From gjcarneiro at gmail.com Thu Apr 27 18:20:31 2006 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Thu, 27 Apr 2006 17:20:31 +0100 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604270849r32e3d3fhcc18a04a9c9c0f1f@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <a467ca4f0604270642r435e303dy2e2f643f462789fe@mail.gmail.com> <s9zejzji0z2.fsf@thetis.intevation.de> <5.1.1.6.0.20060427104717.01e2fe70@mail.telecommunity.com> <a467ca4f0604270814j6eb8864dma73cdefee946f8c2@mail.gmail.com> <9e804ac0604270831s43da3fe4lbf91db7c710a5dfa@mail.gmail.com> <a467ca4f0604270842g1ee95d8nce3827267567dcc1@mail.gmail.com> <9e804ac0604270849r32e3d3fhcc18a04a9c9c0f1f@mail.gmail.com> Message-ID: <a467ca4f0604270920q5295190y18efd6ce5c4fbd8f@mail.gmail.com> On 4/27/06, Thomas Wouters <thomas at python.org> wrote: > > > > On 4/27/06, Gustavo Carneiro <gjcarneiro at gmail.com> wrote: > > > Besides, Guido's original proposal is not a fix for your problem, > > > either; he only proposes to change the requirement for *sub*packages. > > > > > > > It *is* a solution for my problem. I don't need the __init__.py file > > for anything, since I don't need anything defined in the the 'foo' > > namespace, only the subpackages foo.bar and foo.zbr . > > > > ... No. Guido's original proposal is not a fix for your problem, because > *it doesn't affect the 'foo' namespace*. Guido's original proposal still > requires foo/__init__.py for your namespace to work, it just makes > foo/bar/__init__.py and foo/zbr/__init__.py optional. > Damn, you're right, I confused subpackage with submodule :P In that case, can I counter-propose to stop requiring the __init__.py file in [foo/__init__.py, foo/bar.py] ? ;-) The proposal would mean that a directory 'foo' with a single file bar.pywould make the module ' foo.bar' available if the parent directory of 'foo' is in sys.path. /me faces the pitchforks. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/463e0135/attachment.html From mal at egenix.com Thu Apr 27 18:47:44 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 27 Apr 2006 18:47:44 +0200 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> References: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> Message-ID: <4450F5B0.6060107@egenix.com> Phillip J. Eby wrote: > Please see > http://svn.python.org/projects/sandbox/trunk/setuptools/doc/formats.txt for > source, or http://peak.telecommunity.com/DevCenter/EggFormats for an > HTML-formatted version. > > Included are summary descriptions of the formats of all of the standard > metadata produced by setuptools, along with pointers to the existing > manuals that describe the syntax used for representing requirements, entry > points, etc. as text. The .egg, .egg-info, and .egg-link formats and > layouts are also specified, along with the filename syntax used to embed > project/version/Python version/platform metadata. Last, but not least, > there are detailed explanations of how resources (such as C extensions) are > extracted on-the-fly and cached, how C extensions get imported from > zipfiles, and how EasyInstall works around the limitations of Python's > default sys.path initialization. > > If there's anything else you'd like in there, please let me know. Thanks. 
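[Background for the exchange that follows: a .pth file is a plain text file processed by site.py; each line is either a directory to append to sys.path or, if it starts with "import", a statement that gets executed. A purely illustrative example.pth might contain:

    /opt/extra-libs/python
    import sys; sys.stderr.write("example.pth processed\n")

Which directories are scanned for .pth files at startup is exactly what is argued about below.]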
Just read that you are hijacking site.py for setuptools' "just works" purposes. Please be aware that by allowing .pth files in all PYTHONPATH directories you are opening up a security hole - anyone with write-permission to one of these .pth files could manipulate other user's use of Python. That's the reason why only site-packages .pth files are taken into account, since normally only root has write access to this directory. The added startup time for scanning PYTHONPATH for .pth files and processing them is also not mentioned in the documentation. Every Python invocation would have to pay for this - regardless of whether eggs are used or not. I really wish that we could agree on an installation format for package (meta) data that does *not* rely on ZIP files. All the unnecessary magic that you have in your design would just go away - together with most of the issues people on this list have with it. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 27 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From fdrake at acm.org Thu Apr 27 19:20:23 2006 From: fdrake at acm.org (Fred L. Drake, Jr.) Date: Thu, 27 Apr 2006 13:20:23 -0400 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <06Apr27.085708pdt."58641"@synergy1.parc.xerox.com> References: <06Apr27.085708pdt."58641"@synergy1.parc.xerox.com> Message-ID: <200604271320.24051.fdrake@acm.org> On Thursday 27 April 2006 11:57, Bill Janssen wrote: > By the way, check out the new Python/Mac iconography that Jacob Rus > has put together (with lots of advice from others :-), at > http://hcs.harvard.edu/~jrus/python/prettified-py-icons.png. Very nice! I just might have to start using some of these modern desktops just to get all the pretty pictures. -Fred -- Fred L. Drake, Jr. <fdrake at acm.org> From guido at python.org Thu Apr 27 19:25:33 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Apr 2006 10:25:33 -0700 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> Message-ID: <ca471dc20604271025g5b6ae98av613907d26932687f@mail.gmail.com> On 4/27/06, Thomas Wouters <thomas at python.org> wrote: > I could check it in, except the make-testall I ran overnight showed a small > problem: the patch would generate a number of spurious warnings in the > trunk: > > /home/thomas/python/python/trunk/Lib/gzip.py:9: > ImportWarning: Not importing directory > '/home/thomas/python/python/trunk/Modules/zlib': missing > __init__.py > > /home/thomas/python/python/trunk/Lib/ctypes/__init__.py:8: > ImportWarning: Not importing directory > '/home/thomas/python/python/trunk/Modules/_ctypes': missing > __init__.py > > (and a few more zlib ones.) 
The reason for that is that ./Modules is added > to the import path, by a non-installed Python. This is because of the > pre-distutils Modules/Setup-style build method of modules (which is still > sometimes used.) I can't find where Modules is added to sys.path, though, > even if I wanted to remove it :) > > So, do we: > a) forget about the warning because of the layout of the svn tree (bad, > imho) > 2) rename Modules/zlib and Modules/_ctypes to avoid the warning > (inconvenient, but I don't know how inconvenient) > - fix the build procedure so Modules isn't added to sys.path unless it > absolutely has to (which is only very rarely the case, I believe) > or lastly, make regrtest.py ignore those specific warnings? I'd say the latter. That's how we deal with warnings during the test suite in general don't we? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Thu Apr 27 19:52:02 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 27 Apr 2006 13:52:02 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <4450F5B0.6060107@egenix.com> References: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> At 06:47 PM 4/27/2006 +0200, M.-A. Lemburg wrote: >Just read that you are hijacking site.py for setuptools' >"just works" purposes. "hijacking" isn't the word I'd use; "wrapping" is what it actually does. The standard site.py is executed, there is just some pre- and post-processing of sys.path. >Please be aware that by allowing .pth files in all PYTHONPATH >directories you are opening up a security hole - anyone with >write-permission to one of these .pth files could manipulate >other user's use of Python. FUD. Write access to a PYTHONPATH-listed directory already implies complete control over the user's Python environment. This doesn't introduce any issues that aren't implicit in the very existence of PYTHONPATH. >That's the reason why only site-packages .pth files are >taken into account, since normally only root has write >access to this directory. False. On OS X, Python processes any .pth files that are found in the ~/Library/Python2.x/site-packages directory. (Which means, by the way, that OS X users don't need most of these hacks; they just point their install directory to their personal site-packages, and it already Just Works. Setuptools only introduces PYTHONPATH hacks to make this work on *other* platforms.) >The added startup time for scanning PYTHONPATH for .pth >files and processing them is also not mentioned in the >documentation. Every Python invocation would have to pay >for this - regardless of whether eggs are used or not. FUD again. This happens if and only if: 1. You used easy_install to install a package in such a way that the .pth file is required (i.e., you don't choose multi-version mode or single-version externally-managed mode) 2. You include the affected directory in PYTHONPATH So the idea that "every Python invocation would have to pay for this" is false. People who care about the performance have plenty of options for controlling this. Is there a nice HOWTO that explains what to do if you care more about performance than convenience? No. Feel free to contribute one. >I really wish that we could agree on an installation format >for package (meta) data that does *not* rely on ZIP files. 
There is one already, and it's used if you select single-version externally-managed mode explicitly, or if you install using --root. >All the unnecessary magic that you have in your design would >just go away - together with most of the issues people on this >list have with it. Would you settle for a way to make a one-time ~/.pydistutils.cfg entry that will make setuptools act the way you want it to? That is, a way to make setuptools-based packages default to --single-version-externally-managed mode for installation on a given machine or machine(s)? That way, you'll never have to wonder whether a package uses setuptools or not, you can just "setup.py install" and be happy. From tjreedy at udel.edu Thu Apr 27 20:17:02 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 27 Apr 2006 14:17:02 -0400 Subject: [Python-Dev] Type-Def-ing Python References: <Pine.LNX.4.64.0604270607370.30352@csegrad.ucsd.edu> Message-ID: <e2r1qu$tk2$1@sea.gmane.org> "Brian C. Lum" <bclum at cs.ucsd.edu> wrote in message news:Pine.LNX.4.64.0604270607370.30352 at csegrad.ucsd.edu... > Brett Cannon, "Localized Type Inference of Atomic Types in Python": > http://www.ocf.berkeley.edu/~bac/thesis.pdf > > I was wondering if anyone could help me contact him so that I could might > ask him for his source code and try to use type-defing as a bug-finder. He will probably see this, but if he does not, he posts here regularly so just check the archives for the last week. From guido at python.org Thu Apr 27 20:38:48 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Apr 2006 11:38:48 -0700 Subject: [Python-Dev] traceback.py still broken in 2.5a2 Message-ID: <ca471dc20604271138y11273867h81222566c2119abd@mail.gmail.com> The change below was rolled back because it broke other stuff. But IMO it is actually necessary to fix this, otherwise those few exceptions that don't derive from Exception won't be printed correctly by the traceback module: guido:~/p/osx guido$ ./python.exe Python 2.5a2 (trunk:45765, Apr 27 2006, 11:36:10) [GCC 4.0.0 20041026 (Apple Computer, Inc. build 4061)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> def f(): ... raise KeyboardInterrupt ... >>> f() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 2, in f KeyboardInterrupt >>> import traceback >>> try: ... f() ... except: ... traceback.print_exc() ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "<stdin>", line 2, in f <class 'exceptions.KeyboardInterrupt'> >>> --Guido ---------- Forwarded message ---------- From: phillip.eby <python-checkins at python.org> Date: Mar 24, 2006 3:10 PM Subject: [Python-checkins] r43299 - python/trunk/Lib/traceback.py To: python-checkins at python.org Author: phillip.eby Date: Fri Mar 24 23:10:54 2006 New Revision: 43299 Modified: python/trunk/Lib/traceback.py Log: Revert r42719, because the isinstance() check wasn't redundant; formatting a string exception was causing a TypeError. 
Modified: python/trunk/Lib/traceback.py ============================================================================== --- python/trunk/Lib/traceback.py (original) +++ python/trunk/Lib/traceback.py Fri Mar 24 23:10:54 2006 @@ -158,7 +158,7 @@ """ list = [] if (type(etype) == types.ClassType - or issubclass(etype, Exception)): + or (isinstance(etype, type) and issubclass(etype, Exception))): stype = etype.__name__ else: stype = etype _______________________________________________ Python-checkins mailing list Python-checkins at python.org http://mail.python.org/mailman/listinfo/python-checkins -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Thu Apr 27 20:49:54 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 27 Apr 2006 14:49:54 -0400 Subject: [Python-Dev] traceback.py still broken in 2.5a2 In-Reply-To: <ca471dc20604271138y11273867h81222566c2119abd@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060427144629.03c333b8@mail.telecommunity.com> At 11:38 AM 4/27/2006 -0700, Guido van Rossum wrote: >The change below was rolled back because it broke other stuff. But IMO >it is actually necessary to fix this, Huh? The change you showed wasn't reverted AFAICT; it's still on the trunk. > otherwise those few exceptions >that don't derive from Exception won't be printed correctly by the >traceback module: It looks like the original change (not the change you listed) should've been using "issubclass(etype, BaseException)". (I only reverted the removal of 'isinstance()', which was causing string exceptions to break.) Anyway, looks like a four-letter fix (i.e. add "Base" there), unless there was some other problem I'm missing? From guido at python.org Thu Apr 27 21:01:15 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Apr 2006 12:01:15 -0700 Subject: [Python-Dev] traceback.py still broken in 2.5a2 In-Reply-To: <5.1.1.6.0.20060427144629.03c333b8@mail.telecommunity.com> References: <5.1.1.6.0.20060427144629.03c333b8@mail.telecommunity.com> Message-ID: <ca471dc20604271201s4227663enff08698d16482fe8@mail.gmail.com> On 4/27/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 11:38 AM 4/27/2006 -0700, Guido van Rossum wrote: > >The change below was rolled back because it broke other stuff. But IMO > >it is actually necessary to fix this, > > Huh? The change you showed wasn't reverted AFAICT; it's still on the trunk. Sorry. I meant a different change in the same piece of code. > > otherwise those few exceptions > >that don't derive from Exception won't be printed correctly by the > >traceback module: > > It looks like the original change (not the change you listed) should've > been using "issubclass(etype, BaseException)". (I only reverted the > removal of 'isinstance()', which was causing string exceptions to break.) > > Anyway, looks like a four-letter fix (i.e. add "Base" there), unless there > was some other problem I'm missing? Correct, that's the fix I'm looking for. 
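[Spelled out, that fix would make the quoted check read roughly:

    if (type(etype) == types.ClassType
        or (isinstance(etype, type) and issubclass(etype, BaseException))):
        stype = etype.__name__
    else:
        stype = etype

so exceptions such as KeyboardInterrupt, which derive from BaseException but not from Exception, are again formatted by name, while string exceptions still fall through to the else branch. This is the shape of the fix, not the exact committed patch.]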
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Thu Apr 27 21:44:09 2006 From: martin at v.loewis.de (=?windows-1252?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 27 Apr 2006 21:44:09 +0200 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <57124720604270222o13f29d56m8d15901f070bdd17@mail.gmail.com> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> <44473F32.3020700@v.loewis.de> <e2q250$57n$1@sea.gmane.org> <57124720604270222o13f29d56m8d15901f070bdd17@mail.gmail.com> Message-ID: <44511F09.3080801@v.loewis.de> Simon Dahlbacka wrote: > OTOH, the ETA for Vista is "just after" 2.5 release (end of 2006 for > OEM:s, beginning of 2007 for customers), long before 2.6 > > That said, I don't have any strong preferences either way. (..but I do > have a x64 Vista machine running ATM) Good to know, but unfortunately, not very helpful in the end: Even if I wanted to include these icons, I still had no clue whatsoever on how to do that. What files do I need to deploy into what locations for this to "work", and how do I determine whether or not to use the "Vista approach" (assuming it is different from the "pre-Vista approach")? Regards, Martin From martin at v.loewis.de Thu Apr 27 21:46:18 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 27 Apr 2006 21:46:18 +0200 Subject: [Python-Dev] inheriting basic types more efficiently In-Reply-To: <20060427121426.cd1f7bd7.dh@triple-media.com> References: <mailman.2570.1146065722.27773.python-dev@python.org> <20060426182836.400f2917.dh@triple-media.com> <20060426192726.b4b654c1.dh@triple-media.com> <ca471dc20604261047w47bf2972l3f9433d7def872c6@mail.gmail.com> <20060426230820.028976ea.dh@triple-media.com> <445054AF.9050702@v.loewis.de> <20060427121426.cd1f7bd7.dh@triple-media.com> Message-ID: <44511F8A.7090004@v.loewis.de> Dennis Heuer wrote: > The real misunderstanding lies somewhere else. I thought that the > bitarray's instance would have more control over the long type's > instance, like with the mutable types. I thought that the long type's > superclass would provide methods similar to __setitem__ that would > allow the bitarray instance to even *refresh* (or substitute) the long > instance in place. The result would be a freshly created long instance > substituting the old one. But the immuntable types do not offer such a > feature because one cannot substitute the long instance without > breaking the bitarray instance too. Maybe that's the misunderstanding: but then you are still left with the mis-design. Even if long was mutable, or even if you used a mutable type as the base type (such as array.array), you *still* shouldn't inherit from it - these types are not in an "is-a" relationship. Regards, Martin From martin at v.loewis.de Thu Apr 27 21:48:03 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 27 Apr 2006 21:48:03 +0200 Subject: [Python-Dev] A better and more basic array type In-Reply-To: <20060427160143.57e89fd5.dh@triple-media.com> References: <20060427160143.57e89fd5.dh@triple-media.com> Message-ID: <44511FF3.7030308@v.loewis.de> Dennis Heuer wrote: > I hope that somebody agrees and is already starting to implement this > new array type. My best wishes are with you. This is off-topic for python-dev. Please post to comp.lang.python instead. Regards, Martin From mal at egenix.com Thu Apr 27 21:54:24 2006 From: mal at egenix.com (M.-A. 
Lemburg) Date: Thu, 27 Apr 2006 21:54:24 +0200 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> References: <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> Message-ID: <44512170.70502@egenix.com> Phillip J. Eby wrote: > At 06:47 PM 4/27/2006 +0200, M.-A. Lemburg wrote: >> Just read that you are hijacking site.py for setuptools' >> "just works" purposes. > > "hijacking" isn't the word I'd use; "wrapping" is what it actually > does. The standard site.py is executed, there is just some pre- and > post-processing of sys.path. Whatever you call it: you're fiddling with the standard Python distribution again, without letting your users know about it. Note that sitecustomize.py is intended for site local changes that admin wants to implement. >> Please be aware that by allowing .pth files in all PYTHONPATH >> directories you are opening up a security hole - anyone with >> write-permission to one of these .pth files could manipulate >> other user's use of Python. > > FUD. Write access to a PYTHONPATH-listed directory already implies > complete control over the user's Python environment. This doesn't > introduce any issues that aren't implicit in the very existence of > PYTHONPATH. Really ? How often do you check the contents of a .pth file ? How often do you check sys.path of a running interpreter to see whether some .pth file has added extra entries that you weren't aware of ? Note that I was talking about the .pth file being writable, not the directory. Even if they are not writable by non-admins, the .pth files can point to directories which are. Combine that with the -m option of the Python interpreter and you have the basis for a nice exploit. >> That's the reason why only site-packages .pth files are >> taken into account, since normally only root has write >> access to this directory. > > False. On OS X, Python processes any .pth files that are found in the > ~/Library/Python2.x/site-packages directory. Hmm, that was new to me. OTOH, it's a very special case and falls under the "normally only root" clause :-) - Mac users are often enough their own admins; I suppose that's why Jack added this: if sys.platform == 'darwin': # for framework builds *only* we add the standard Apple # locations. Currently only per-user, but /Library and # /Network/Library could be added too > (Which means, by the way, > that OS X users don't need most of these hacks; they just point their > install directory to their personal site-packages, and it already Just > Works. Setuptools only introduces PYTHONPATH hacks to make this work on > *other* platforms.) > >> The added startup time for scanning PYTHONPATH for .pth >> files and processing them is also not mentioned in the >> documentation. Every Python invocation would have to pay >> for this - regardless of whether eggs are used or not. > > FUD again. This happens if and only if: > > 1. You used easy_install to install a package in such a way that the > .pth file is required (i.e., you don't choose multi-version mode or > single-version externally-managed mode) > > 2. You include the affected directory in PYTHONPATH > > So the idea that "every Python invocation would have to pay for this" is > false. People who care about the performance have plenty of options for > controlling this. 
> > Is there a nice HOWTO that explains what to do if you care more about > performance than convenience? No. Feel free to contribute one. Here's a HOWTO to optimize startup time, without loosing convenience: * build Python with all builtin modules statically linked into the interpreter * setup PYTHONPATH to just include the directories you really care about * avoid putting any ZIP archives on PYTHONPATH, except for python24.zip (there are better alternatives; see below) * avoid using .pth files If you run a lot of scripts, you'll also want to do keep this in mind: * start Python with option -s (avoids importing site.py and among other things, parsing .pth files), if you're running a script More startup time can be saved by using an approach like the one described in mxCGIPython which adds the complete Python stdlib to the interpreter binary, effectively turning the Python installation into a single file: http://www.egenix.com/files/python/mxCGIPython.html >> I really wish that we could agree on an installation format >> for package (meta) data that does *not* rely on ZIP files. > > There is one already, and it's used if you select single-version > externally-managed mode explicitly, or if you install using --root. No, I'm talking about a format which has the same if not more benefits as what you're trying to achieve with the .egg file approach, but without all the magic and hacks. It's not like this wouldn't be possible to achieve. >> All the unnecessary magic that you have in your design would >> just go away - together with most of the issues people on this >> list have with it. > > Would you settle for a way to make a one-time ~/.pydistutils.cfg entry > that will make setuptools act the way you want it to? That is, a way to > make setuptools-based packages default to > --single-version-externally-managed mode for installation on a given > machine or machine(s)? That way, you'll never have to wonder whether a > package uses setuptools or not, you can just "setup.py install" and be > happy. Not really. I would like Python to remain a place that's free of unnecessary magic in the first place, not by having to configure setuptools to disable it's magic. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 27 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From simon.dahlbacka at gmail.com Thu Apr 27 22:17:03 2006 From: simon.dahlbacka at gmail.com (Simon Dahlbacka) Date: Thu, 27 Apr 2006 23:17:03 +0300 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <44511F09.3080801@v.loewis.de> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> <44473F32.3020700@v.loewis.de> <e2q250$57n$1@sea.gmane.org> <57124720604270222o13f29d56m8d15901f070bdd17@mail.gmail.com> <44511F09.3080801@v.loewis.de> Message-ID: <57124720604271317r61c0dd06o182b1e25e2f14c7b@mail.gmail.com> On 4/27/06, "Martin v. L?wis" <martin at v.loewis.de> wrote: > > Simon Dahlbacka wrote: > > OTOH, the ETA for Vista is "just after" 2.5 release (end of 2006 for > > OEM:s, beginning of 2007 for customers), long before 2.6 > > > > That said, I don't have any strong preferences either way. 
(..but I do > > have a x64 Vista machine running ATM) > > Good to know, but unfortunately, not very helpful in the end: Even if > I wanted to include these icons, I still had no clue whatsoever on how > to do that. What files do I need to deploy into what locations for this > to "work", and how do I determine whether or not to use the "Vista > approach" (assuming it is different from the "pre-Vista approach")? [Disclaimer: the following is what google told me] it seems that the big icons 256x256 are in fact (or at least can be) PNG format within the .ico file. However, the support for these kinds of icons seems to be somewhat lacking, in particular, the current(including VS2005) resource compilers barf when given such an icon. Given that, it does not really seem feasible to include them.. Speaking of icons, do the bundled ico files have to be named py.ico and pyc.ico? (These names does not play along with tab-completion, making the user, me that is, tab twice "unnecessarily" to get to python.exe) /Simon A little bit about me: I've almost completed my Computer Engineering(Embedded systems) degree, just need to get my Master's written down. Professionally I've been developing with python for a few years, but currently I mostly work in C# land.. Lurking around on py-dev for a year or so.. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060427/2ff2a125/attachment.htm From martin at v.loewis.de Thu Apr 27 22:33:00 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 27 Apr 2006 22:33:00 +0200 Subject: [Python-Dev] New-style icons, .desktop file In-Reply-To: <57124720604271317r61c0dd06o182b1e25e2f14c7b@mail.gmail.com> References: <443F7B09.50605@doxdesk.com> <e27e2e$uug$1@sea.gmane.org> <44473F32.3020700@v.loewis.de> <e2q250$57n$1@sea.gmane.org> <57124720604270222o13f29d56m8d15901f070bdd17@mail.gmail.com> <44511F09.3080801@v.loewis.de> <57124720604271317r61c0dd06o182b1e25e2f14c7b@mail.gmail.com> Message-ID: <44512A7C.2030304@v.loewis.de> Simon Dahlbacka wrote: > Given that, it does not really seem feasible to include them.. Ok, thanks for the investigation. > Speaking of icons, do the bundled ico files have to be named py.ico and > pyc.ico? No. I think I'll try to drop them altogether, getting the icons from python_icon.exe only. Regards, Martin From trentm at ActiveState.com Thu Apr 27 22:49:41 2006 From: trentm at ActiveState.com (Trent Mick) Date: Thu, 27 Apr 2006 13:49:41 -0700 Subject: [Python-Dev] "mick-windows" buildbot uptime Message-ID: <44512E65.4040901@activestate.com> We've pretty much gotten settled into our new diggs at work here (ActiveState) so my Windows buildbot machine should have better uptime from now on. Cheers, Trent -- Trent Mick trentm at activestate.com From pje at telecommunity.com Fri Apr 28 00:04:48 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Thu, 27 Apr 2006 18:04:48 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <44512170.70502@egenix.com> References: <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> At 09:54 PM 4/27/2006 +0200, M.-A. Lemburg wrote: >Note that I was talking about the .pth file being >writable, not the directory. 
Please stop this vague, handwaving FUD. You have yet to explain how this situation is supposed to arise. Is there some platform on which files with a .pth extension are automatically writable by users *when .py files are not also*? If not, then you are talking out of your... um, hat. If files are writable by other users by default, then .py files are too. Once again: *no new vector*. >Even if they are not >writable by non-admins, the .pth files can point >to directories which are. Uh huh. And how does that happen, exactly? Um, the user puts them there? What is your point, exactly? That people can do things insecurely if they're allowed near a computer? >Here's a HOWTO to optimize startup time, without loosing >convenience: I meant, a HOWTO for setuptools users who care about this, although at the moment I have heard only from one -- and you're not, AFAIK, actually a setuptools user. >No, I'm talking about a format which has the same if not >more benefits as what you're trying to achieve with the >.egg file approach, but without all the magic and hacks. > >It's not like this wouldn't be possible to achieve. That may or may not be true. Perhaps if you had participated in the original call to the distutils-sig for developing such a format (back in December 2004), perhaps the design would've been more to your liking. Oh wait... you did: http://mail.python.org/pipermail/distutils-sig/2004-December/004351.html And if you replace 'syspathtools.use()' in that email, with 'pkg_resources.require()', then it describes *exactly how setuptools works with .egg directories today*. Apparently, you hadn't yet thought of any the objections that you are now raising to *the very scheme that you yourself proposed*, until somebody else took the trouble to actually implement it. And now you want to say that I never listen to or implement your proposals? Please. Your email is the first documentation on record of how this system works! >Not really. Then I won't bother adding it, since nobody else asked for it. But don't ever say that I didn't offer you a real solution to the behavior you complained about. Meanwhile, I will take your choice as prima facie evidence that the things you are griping about have no real impact on you, and you are simply trying to make other people conform to your views of how things "should" be, rather than being interested in solving actual problems. It also makes it clear that your opinion about setuptools default installation behavior isn't relevant to any future Python-Dev discussion about setuptools' inclusion in the standard library, because it's: 1. Obviously not a real problem for you (else you'd accept the offer of a feature that would permanently solve it for you) 2. Not something that anybody else has complained about since I made --root automatically activate distutils compatibility In short, the credibility of your whining about this point and my supposed failure to accomodate you is now thoroughly debunked. I added an option to enable this behavior, I made other options enable the behavior where it could be reasonably assumed valid, and I offered you an option you could use to globally disable it for *any* package using setuptools, so that it would never affect you again. (And all of this... to disable the behavior that implements a scheme that you yourself proposed as the best way to do this sort of thing!) 
From aahz at pythoncraft.com Fri Apr 28 00:28:58 2006 From: aahz at pythoncraft.com (Aahz) Date: Thu, 27 Apr 2006 15:28:58 -0700 Subject: [Python-Dev] what do you like about other trackers and what do you hate about SF? In-Reply-To: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> References: <bbaeab100604251313r1987c13fwd73df678c84e7da2@mail.gmail.com> Message-ID: <20060427222857.GA2067@panix.com> On Tue, Apr 25, 2006, Brett Cannon wrote: > > So, if you could, please reply to this message with ONE thing you have > found in a tracker other than SF that you have liked (especially > compared to SF) and ONE thing you dislike/hate about SF's tracker. I > will use the replies as a quick-and-dirty features list of stuff that > we would like to see demonstrated in the test trackers. My biggest peeve about SF is the cluttered interface. The basic tracker interface should take two or three screens in 25x80 mode with Lynx. Oh, yes, the tracker must work with Lynx. SF technically qualifies as of the last time I checked, but for several years I was unable to use Lynx to submit bugs. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From tim.peters at gmail.com Fri Apr 28 02:13:30 2006 From: tim.peters at gmail.com (Tim Peters) Date: Thu, 27 Apr 2006 20:13:30 -0400 Subject: [Python-Dev] [Python-checkins] r45721 - python/trunk/Lib/test/test_with.py In-Reply-To: <444F637B.6080402@gmail.com> References: <20060426011553.D37D51E400C@bag.python.org> <444F637B.6080402@gmail.com> Message-ID: <1f7befae0604271713m75569286l826d01c2583c4ebd@mail.gmail.com> >> Author: tim.peters >> Date: Wed Apr 26 03:15:53 2006 >> New Revision: 45721 >> >> Modified: >> python/trunk/Lib/test/test_with.py >> Log: >> Rev 45706 renamed stuff in contextlib.py, but didn't rename >> uses of it in test_with.py. As a result, test_with has been skipped >> (due to failing imports) on all buildbot boxes since. Alas, that's >> not a test failure -- you have to pay attention to the >> >> 1 skip unexpected on PLATFORM: >> test_with >> >> kinds of output at the ends of test runs to notice that this got >> broken. [Nick Coghlan] > That would be my fault - I've got about four unexpected skips I actually > expect because I don't have the relevant external modules built. I must have > missed this new one joining the list. Oh, I don't know. It's always been dubious that regrtest.py treats ImportError exactly the same as test_support.TestSkipped. Hmm. Think I'll do something about that ;-) > ... From edloper at gradient.cis.upenn.edu Fri Apr 28 02:16:18 2006 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Thu, 27 Apr 2006 20:16:18 -0400 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? 
In-Reply-To: <444F4322.8020009@gmail.com> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> <ca471dc20604252123y64b8cc66jc0413f6b027c0664@mail.gmail.com> <444F4322.8020009@gmail.com> Message-ID: <44515ED2.1080502@gradient.cis.upenn.edu> Nick Coghlan wrote: > OTOH, if the two protocols are made orthogonal, then it's clear that the > manager is always the original object with the __context__ method. Then the > three cases are: > > - a pure context manager (only provides __context__) > - a pure managed context (only provides __enter__/__exit__) > - a managed context which can be its own context manager (provides all three) +1 on keeping the two protocols orthogonal and using this terminology (context manager/managed context). -Edward From guido at python.org Fri Apr 28 02:20:21 2006 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Apr 2006 17:20:21 -0700 Subject: [Python-Dev] Must objects with __enter__/__exit__ also supply __context__? In-Reply-To: <44515ED2.1080502@gradient.cis.upenn.edu> References: <5.1.1.6.0.20060424172255.01fbe648@mail.telecommunity.com> <444D99B0.6020008@gmail.com> <5.1.1.6.0.20060425145313.04255dc8@mail.telecommunity.com> <5.1.1.6.0.20060425201412.01e44f70@mail.telecommunity.com> <5.1.1.6.0.20060425203233.041f3f58@mail.telecommunity.com> <5.1.1.6.0.20060425231441.01e785c8@mail.telecommunity.com> <5.1.1.6.0.20060425233346.01e7b548@mail.telecommunity.com> <ca471dc20604252123y64b8cc66jc0413f6b027c0664@mail.gmail.com> <444F4322.8020009@gmail.com> <44515ED2.1080502@gradient.cis.upenn.edu> Message-ID: <ca471dc20604271720t71206003x3935ef4740af96b7@mail.gmail.com> +1 from me too. On 4/27/06, Edward Loper <edloper at gradient.cis.upenn.edu> wrote: > Nick Coghlan wrote: > > OTOH, if the two protocols are made orthogonal, then it's clear that the > > manager is always the original object with the __context__ method. Then the > > three cases are: > > > > - a pure context manager (only provides __context__) > > - a pure managed context (only provides __enter__/__exit__) > > - a managed context which can be its own context manager (provides all three) > > +1 on keeping the two protocols orthogonal and using this terminology > (context manager/managed context). > > -Edward > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Fri Apr 28 05:47:09 2006 From: brett at python.org (Brett Cannon) Date: Thu, 27 Apr 2006 21:47:09 -0600 Subject: [Python-Dev] Type-Def-ing Python In-Reply-To: <Pine.LNX.4.64.0604270607370.30352@csegrad.ucsd.edu> References: <Pine.LNX.4.64.0604270607370.30352@csegrad.ucsd.edu> Message-ID: <bbaeab100604272047g5f293a57gf4e59a735b8b7366@mail.gmail.com> On 4/27/06, Brian C. Lum <bclum at cs.ucsd.edu> wrote: > Dear Python Community, > > I have been trying to research how to type-def python. I want to type-def > python so that I can use it as a static analyzer to check for bugs. 
I > have been going over research on this field and I came upon > Brett Cannon's thesis in which he tweaks the compiler and shows that > type-defing python would not help the compiler achieve a 5% performace > increase. > > Brett Cannon, "Localized Type Inference of Atomic Types in Python": > http://www.ocf.berkeley.edu/~bac/thesis.pdf > > I was wondering if anyone could help me contact him so that I could might > ask him for his source code and try to use type-defing as a bug-finder. > Just so people know, I obviously got the email and will take this off-list. -Brett From nnorwitz at gmail.com Fri Apr 28 05:49:26 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 27 Apr 2006 20:49:26 -0700 Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1) In-Reply-To: <1f7befae0604221911x479f7300xb588daba1e566791@mail.gmail.com> References: <20060419090608.GA4346@python.psfb.org> <1f7befae0604221911x479f7300xb588daba1e566791@mail.gmail.com> Message-ID: <ee2a432c0604272049n7a857957tb515b861049dd6c6@mail.gmail.com> On 4/22/06, Tim Peters <tim.peters at gmail.com> wrote: > [19 Apr 2006, Neal Norwitz] > > test_cmd_line leaked [0, 17, -17] references > > test_filecmp leaked [0, 13, 0] references > > test_threading_local leaked [-93, 0, 0] references > > test_urllib2 leaked [-121, 88, 99] references > > Thanks to Thomas digging into test_threading_local, I checked in what > appeared to be a total leak fix for it last week. On my Windows box, > it's steady as a rock now: > > """ > $ python_d -E -tt ../lib/test/regrtest.py -R:50: test_threading_local > test_threading_local > beginning 55 repetitions > 1234567890123456789012345678901234567890123456789012345 > ....................................................... > 1 test OK. > [27145 refs] > """ > > Is it still flaky on other platforms? > > If not, maybe the reported > > test_threading_local leaked [-93, 0, 0] references > > is due to stuff from a _previous_ test getting cleaned up (later than > expected/hoped)? When you first sent this message I was able to reproduce the leaks inconsistently on the box that runs this test. I *think* all that was required was test_cmd_line test_suprocess and test_threading_local (in that order). I suspect that the problem is some process is slow to die. I don't think I can provoke any of these in isolation. I definitely can't provoke them consistently. Today, I wasn't able to provoke them at all. I have disabled the leak warning for: LEAKY_TESTS="test_(cmd_line|ctypes|filecmp|socket|threadedtempfile|threading|urllib2) This is an attempt to reduce the spam. Would people rather me reduce this list so we can try to find the problems? The test runs 2 times per day. Sometimes it gets stuck. But the most you should ever receive is 2 mails a day. n From tim.peters at gmail.com Fri Apr 28 07:21:32 2006 From: tim.peters at gmail.com (Tim Peters) Date: Fri, 28 Apr 2006 01:21:32 -0400 Subject: [Python-Dev] [Python-checkins] Python Regression Test Failures refleak (1) In-Reply-To: <ee2a432c0604272049n7a857957tb515b861049dd6c6@mail.gmail.com> References: <20060419090608.GA4346@python.psfb.org> <1f7befae0604221911x479f7300xb588daba1e566791@mail.gmail.com> <ee2a432c0604272049n7a857957tb515b861049dd6c6@mail.gmail.com> Message-ID: <1f7befae0604272221g61181e93h7e0b153191dd824f@mail.gmail.com> [Neal Norwitz] > ... > I have disabled the leak warning for: > > LEAKY_TESTS="test_(cmd_line|ctypes|filecmp|socket|threadedtempfile|threading|urllib2) > > This is an attempt to reduce the spam. 
Would people rather me reduce > this list so we can try to find the problems? The test runs 2 times > per day. Sometimes it gets stuck. But the most you should ever > receive is 2 mails a day. I see so much email that 100/day more or less from any particular source wouldn't be noticed. It's possible that increasing the repetition count for _some_ of these tests would make them easier to understand. For example, test_cmd_line settles into a very regular pattern on my box when run more often: C:\Code\python\PCbuild>python_d -E -tt ../lib/test/regrtest.py -R:20: test_cmd_line test_cmd_line beginning 25 repetitions 1234567890123456789012345 ......................... test_cmd_line leaked [36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36, 36] references 1 test OK. IOW, after 5 "warmup " runs, each of the 20 following runs leaked 36 refs. That may be unique to Windows (especially since all the leaks here are due to the tests that call CmdLineTest.exit_code()). From nnorwitz at gmail.com Fri Apr 28 07:58:49 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 27 Apr 2006 22:58:49 -0700 Subject: [Python-Dev] 2.5 open issues Message-ID: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> If you are addressed on this message, it means you have open issues that need to be resolved for 2.5. Some of these issues are documentation, others are code issues. This information comes from PEP 356. The best way to get me to stop bugging you is to close out these tasks. :-) Who is the owner for getting new icons installed with the new logo? Code issues: +++++++++++ Fred, Fredrik, Martin: xmlplus/xmlcore situation re: ElementTree Jeremy: AST compiler issues We should really try to get these resolved soon, especially the XML issue since that could result in some sort of API change. The AST issues should just be bug fixes AFAIK. Documentation missing: +++++++++++++++++++ Fredrik: ElementTree Gerhard: pysqlite Martin: msilib Thomas: ctypes Thomas, I know you checked in some ctypes docs. Is that everything or is there more coming? It seemed pretty short given the size of ctypes. If you don't expect to have time to finish these tasks, then it's your job to find someone else who will. n From tzot at mediconsa.com Fri Apr 28 09:33:34 2006 From: tzot at mediconsa.com (Christos Georgiou) Date: Fri, 28 Apr 2006 10:33:34 +0300 Subject: [Python-Dev] Python Grammar Ambiguity References: <444CD3DB.7050704@voidspace.org.uk> Message-ID: <e2sgh8$a5u$1@sea.gmane.org> "Michael Foord" <fuzzyman at voidspace.org.uk> wrote in message news:444CD3DB.7050704 at voidspace.org.uk... <snip> > It worries me that there might be a valid expression allowed here that I > haven't thought of. My current rules allow anything that looks like > ``(a, [b, c, (d, e)], f)`` - any nested identifier list. Would anything > else be allowed ? Anything that goes in the left hand side of an assignment: # example 1 >>> a=[1,2] >>> for a[0] in xrange(10): pass # example 2 >>> class A(object): pass >>> a=A() >>> for a.x in xrange(10): pass From theller at python.net Fri Apr 28 10:27:45 2006 From: theller at python.net (Thomas Heller) Date: Fri, 28 Apr 2006 10:27:45 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> Message-ID: <4451D201.2030107@python.net> Neal Norwitz wrote: > If you are addressed on this message, it means you have open issues > that need to be resolved for 2.5. 
Some of these issues are > documentation, others are code issues. > Documentation missing: > +++++++++++++++++++ > Fredrik: ElementTree > Gerhard: pysqlite > Martin: msilib > Thomas: ctypes > > Thomas, I know you checked in some ctypes docs. Is that everything or > is there more coming? It seemed pretty short given the size of > ctypes. > > If you don't expect to have time to finish these tasks, then it's your > job to find someone else who will. I was fearing it is getting too long. How many pages do you expect or will you tolerate (yes, this is a serious question)? I could imaging three parts of the ctypes docs: 1. The tutorial, which is already checked in. 2. A reference manual, listing all the available functions and types (it will probably duplicate a lot of what is in the tutorial). 3. Some articles/howtos which cover advanced issues. I have the start of a reference manual, which is about 12 pages now. Incomplete, some sections are not yet or no longer correct, but you can take a look at it here: http://starship.python.net/crew/theller/manual/manual.html This could be completed and committed into SVN soon. Thomas From gh at ghaering.de Fri Apr 28 11:19:23 2006 From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=) Date: Fri, 28 Apr 2006 11:19:23 +0200 Subject: [Python-Dev] rest2latex - was: Re: Raising objections In-Reply-To: <4445FADF.90307@python.net> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FADF.90307@python.net> Message-ID: <4451DE1B.2090804@ghaering.de> Thomas Heller wrote: > [...] I'm now happy with the tool that converts the ctypes tutorial from reST to LaTeX, > I will later (today or tomorrow) commit that into Python SVN. Did you commit that already? Alternatively, can you send it to me, please? -- Gerhard From theller at python.net Fri Apr 28 11:56:17 2006 From: theller at python.net (Thomas Heller) Date: Fri, 28 Apr 2006 11:56:17 +0200 Subject: [Python-Dev] rest2latex - was: Re: Raising objections In-Reply-To: <4451DE1B.2090804@ghaering.de> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FADF.90307@python.net> <4451DE1B.2090804@ghaering.de> Message-ID: <4451E6C1.4040506@python.net> Gerhard H?ring wrote: > Thomas Heller wrote: >> [...] I'm now happy with the tool that converts the ctypes tutorial >> from reST to LaTeX, >> I will later (today or tomorrow) commit that into Python SVN. > > Did you commit that already? Alternatively, can you send it to me, please? > > -- Gerhard I meant the ctypes tutorial in latex format, not the tool itself. Anyway, the tool is mkydoc.py, in combination with missing.py: http://svn.python.org/view/external/ctypes-0.9.9.6/docs/manual/ I'm not sure it is ready for public consumption ;-). 
Thomas From engelbert.gruber at ssg.co.at Fri Apr 28 12:16:57 2006 From: engelbert.gruber at ssg.co.at (engelbert.gruber at ssg.co.at) Date: Fri, 28 Apr 2006 12:16:57 +0200 (CEST) Subject: [Python-Dev] rest2latex - was: Re: Raising objections In-Reply-To: <4451E6C1.4040506@python.net> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FADF.90307@python.net> <4451DE1B.2090804@ghaering.de> <4451E6C1.4040506@python.net> Message-ID: <Pine.LNX.4.64.0604281216430.5662@lx3.local> On Fri, 28 Apr 2006, Thomas Heller wrote: > Gerhard H?ring wrote: >> Thomas Heller wrote: >>> [...] I'm now happy with the tool that converts the ctypes tutorial >>> from reST to LaTeX, >>> I will later (today or tomorrow) commit that into Python SVN. >> >> Did you commit that already? Alternatively, can you send it to me, please? >> >> -- Gerhard > > I meant the ctypes tutorial in latex format, not the tool itself. > > Anyway, the tool is mkydoc.py, in combination with missing.py: > > http://svn.python.org/view/external/ctypes-0.9.9.6/docs/manual/ > > I'm not sure it is ready for public consumption ;-). from what i saw it is not, because paths are hardcoded, but aside from this it should be usable and i am willing to take it up and into docutils, but this requires feedback cheers -- From amk at amk.ca Fri Apr 28 14:18:58 2006 From: amk at amk.ca (A.M. Kuchling) Date: Fri, 28 Apr 2006 08:18:58 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <4451D201.2030107@python.net> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <4451D201.2030107@python.net> Message-ID: <20060428121858.GA8336@localhost.localdomain> On Fri, Apr 28, 2006 at 10:27:45AM +0200, Thomas Heller wrote: > I could imagine three parts of the ctypes docs: ... > 3. Some articles/howtos which cover advanced issues. Note that there's now a Doc/howto directory, so you could put articles there. The howtos aren't built as part of the documentation set, but maybe we should work on fixing that. Also note that one existing howto is in reST, the first use of reST in the Python docs. So integrating the howtos will mean supporting reST, and you can therefore write howtos in reST if that's convenient. --amk From amk at amk.ca Fri Apr 28 14:20:45 2006 From: amk at amk.ca (A.M. Kuchling) Date: Fri, 28 Apr 2006 08:20:45 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> Message-ID: <20060428122045.GB8336@localhost.localdomain> On Thu, Apr 27, 2006 at 10:58:49PM -0700, Neal Norwitz wrote: > If you are addressed on this message, it means you have open issues > that need to be resolved for 2.5. Some of these issues are > documentation, others are code issues. This information comes from > PEP 356. There are also these items in the 'possible features' section: ================ Modules under consideration for inclusion: - bdist_deb in distutils package (Owner: ???) http://mail.python.org/pipermail/python-dev/2006-February/060926.html - wsgiref to the standard library (Owner: Phillip Eby) - pure python pgen module (Owner: Guido) - Support for building "fat" Mac binaries (Intel and PPC) (Owner: Ronald Oussoren) ================ wsgiref is the most important one, I think. If there's anything I can do to help, please let me know. 
--amk From skip at pobox.com Fri Apr 28 14:48:07 2006 From: skip at pobox.com (skip at pobox.com) Date: Fri, 28 Apr 2006 07:48:07 -0500 Subject: [Python-Dev] Is this a bad idea: picky floats? Message-ID: <17490.3847.709297.608432@montanaro.dyndns.org> >From a numerical standpoint, floats shouldn't generally be compared using equality. I came across a bug at work yesterday where I had written: if not delta: return 0.0 where delta was a floating point number. After a series of calculations piling up round-off error delta took on a value on the order of 1e-8. Not zero, but it should have been. The fix was easy enough: if abs(delta) < EPSILON: return 0.0 for a suitable value of EPSILON. That got me to thinking... I'm sure I have plenty of other similar mistakes in my code. (Never was much of a numerical analysis guy...) What if there was a picky-float setting that generated warnings if you compared two floats using "==" (or implicitly using "not")? Does that make sense to try for testing purposes? The implementation seemed straightforward enough: http://python.org/sf/1478364 I'm sure at the very least the idea needs more thought than I've given it. It's just a half-baked idea at this point. Skip From guido at python.org Fri Apr 28 16:38:42 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 28 Apr 2006 07:38:42 -0700 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <20060428122045.GB8336@localhost.localdomain> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> Message-ID: <ca471dc20604280738w55deb6a2g9bbfa4cebc650e2b@mail.gmail.com> On 4/28/06, A.M. Kuchling <amk at amk.ca> wrote: > There are also these items in the 'possible features' section: > ================ > Modules under consideration for inclusion: > > - bdist_deb in distutils package > (Owner: ???) > http://mail.python.org/pipermail/python-dev/2006-February/060926.html Isn't that MvL? > - wsgiref to the standard library > (Owner: Phillip Eby) I still hope this can go in; it will help web framework authors do the right thing long term. > - pure python pgen module > (Owner: Guido) I'm withdrawing this for 2.5 and resubmitting it to 2.6; I have no time to clean it up. I know it is possible using this code to create a Python source-to-bytecode compiler in pure Python (using tokenizer.py for the lexer and the compiler package as the bytecode generator, and accepting the latter's imperfections) but few people have that need and those that do can download it from SF. > - Support for building "fat" Mac binaries (Intel and PPC) > (Owner: Ronald Oussoren) Yes, this would be cool. > wsgiref is the most important one, I think. If there's anything I can > do to help, please let me know. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From pje at telecommunity.com Fri Apr 28 17:02:07 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 11:02:07 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ca471dc20604280738w55deb6a2g9bbfa4cebc650e2b@mail.gmail.co m> References: <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> Message-ID: <5.1.1.6.0.20060428105534.01e1ce18@mail.telecommunity.com> At 07:38 AM 4/28/2006 -0700, Guido van Rossum wrote: >On 4/28/06, A.M. 
Kuchling <amk at amk.ca> wrote: > > - wsgiref to the standard library > > (Owner: Phillip Eby) > >I still hope this can go in; it will help web framework authors do the >right thing long term. I doubt I'll have time to write documentation for it before alpha 3. If it's okay for the docs to wait for one of the beta releases -- or better yet, if someone could volunteer to create rough draft documentation that I could just then edit -- then it shouldn't be a problem getting it in and integrated. However, just to avoid the sort of thing that happened with setuptools, I would suggest, Guido, that you make a last call for objections on the Web-SIG, which has previously voiced more criticism of wsgiref than the Distutils-SIG ever had about setuptools. Granted, most of the Web-SIG comments were essentially feature requests, but some complained about the presence of the handler framework. Anyway, after the setuptools flap I'm a little shy of checking in a new library without a little more visible process. :) From theller at python.net Fri Apr 28 17:07:11 2006 From: theller at python.net (Thomas Heller) Date: Fri, 28 Apr 2006 17:07:11 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> Message-ID: <e2tb2v$816$1@sea.gmane.org> Thomas Wouters wrote: > On 4/27/06, Thomas Wouters <thomas at python.org> wrote: >> Alrighty then. The list has about 12 hours to convince me (and you) that >> it's a bad idea to generate that warning. I'll be asleep by the time the >> trunk un-freezes, and I have a string of early meetings tomorrow. I'll get >> to it somewhere in the afternoon :) >> > > I could check it in, except the make-testall I ran overnight showed a small > problem: the patch would generate a number of spurious warnings in the > trunk: > > /home/thomas/python/python/trunk/Lib/gzip.py:9: ImportWarning: Not importing > directory '/home/thomas/python/python/trunk/Modules/zlib': missing > __init__.py > > /home/thomas/python/python/trunk/Lib/ctypes/__init__.py:8: ImportWarning: > Not importing directory '/home/thomas/python/python/trunk/Modules/_ctypes': > missing __init__.py > > (and a few more zlib ones.) The reason for that is that ./Modules is added > to the import path, by a non-installed Python. This is because of the > pre-distutils Modules/Setup-style build method of modules (which is still > sometimes used.) I can't find where Modules is added to sys.path, though, > even if I wanted to remove it :) > > So, do we: > a) forget about the warning because of the layout of the svn tree (bad, > imho) > 2) rename Modules/zlib and Modules/_ctypes to avoid the warning > (inconvenient, but I don't know how inconvenient) > - fix the build procedure so Modules isn't added to sys.path unless it > absolutely has to (which is only very rarely the case, I believe) > or lastly, make regrtest.py ignore those specific warnings? Would not another way be to make sure Modules is moved *behind* the setup.py build directory on sys.path? 
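A rough sketch of the reordering Thomas suggests (a hypothetical illustration only, not the actual getpath.c/site.py change):

    # Hypothetical illustration: keep the in-tree Modules directory on
    # sys.path, but behind the distutils build directory, so extensions
    # built by setup.py shadow the bare source directories and the
    # spurious ImportWarnings for Modules/zlib and Modules/_ctypes go away.
    import sys

    def demote_modules_dir(modules_dir, build_dir):
        entries = [p for p in sys.path if p not in (modules_dir, build_dir)]
        sys.path[:] = entries + [build_dir, modules_dir]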
Thomas From martin at v.loewis.de Fri Apr 28 17:07:55 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 28 Apr 2006 17:07:55 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> Message-ID: <44522FCB.1060106@v.loewis.de> Neal Norwitz wrote: > Who is the owner for getting new icons installed with the new logo? Nobody, so far (for Windows). Somebody should contact the owner and find out what the license on this work is, and then tell me what to do with them. I assume the py and pyc icons are to be replaced, and I also assume the Vista icons are to be ignored, but what about the other(s)? For the OSX icons, I guess Ronald Oussoren owns the task of getting them into the distribution. > Fred, Fredrik, Martin: xmlplus/xmlcore situation re: ElementTree I still hope Fred could make progress; I had no time to look into this further, so far. > Documentation missing: > +++++++++++++++++++ > Fredrik: ElementTree > Gerhard: pysqlite > Martin: msilib > Thomas: ctypes > msilib is on my agenda, but I might not be able to work for it in the next weeks. Regards, Martin From theller at python.net Fri Apr 28 17:13:50 2006 From: theller at python.net (Thomas Heller) Date: Fri, 28 Apr 2006 17:13:50 +0200 Subject: [Python-Dev] rest2latex - was: Re: Raising objections In-Reply-To: <Pine.LNX.4.64.0604281216430.5662@lx3.local> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FADF.90307@python.net> <4451DE1B.2090804@ghaering.de> <4451E6C1.4040506@python.net> <Pine.LNX.4.64.0604281216430.5662@lx3.local> Message-ID: <e2tbfe$akr$1@sea.gmane.org> engelbert.gruber at ssg.co.at wrote: > On Fri, 28 Apr 2006, Thomas Heller wrote: > >> Gerhard H?ring wrote: >>> Thomas Heller wrote: >>>> [...] I'm now happy with the tool that converts the ctypes tutorial >>>> from reST to LaTeX, >>>> I will later (today or tomorrow) commit that into Python SVN. >>> >>> Did you commit that already? Alternatively, can you send it to me, >>> please? >>> >>> -- Gerhard >> >> I meant the ctypes tutorial in latex format, not the tool itself. >> >> Anyway, the tool is mkydoc.py, in combination with missing.py: >> >> http://svn.python.org/view/external/ctypes-0.9.9.6/docs/manual/ >> >> I'm not sure it is ready for public consumption ;-). > > from what i saw it is not, because paths are hardcoded, Sure, but it should only require some small changes to adapt it to other rest sources. > but aside from > this it should be usable and i am willing to take it up and into > docutils, but this requires feedback I must work on the docs themselves, so I currently have only two things: - the table in the ctypes tutorial has a totally different look than the other tables in the docs. Compare http://docs.python.org/dev/lib/ctypes-simple-data-types.html with http://docs.python.org/dev/lib/module-struct.html - feature request: it would be very nice if it were possible to generate links into the index for functions and types from the rest sources. 
Thomas From martin at v.loewis.de Fri Apr 28 17:27:44 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 28 Apr 2006 17:27:44 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <4451D201.2030107@python.net> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <4451D201.2030107@python.net> Message-ID: <44523470.2080700@v.loewis.de> Thomas Heller wrote: > I was fearing it is getting too long. How many pages do you expect > or will you tolerate (yes, this is a serious question)? I don't think there should be a page limit to documentation. If it is structured into sections, then size simply doesn't matter on the Web: people can easily skip over it / ignore it if they want to. It only matters for the size of the distribution, but only slightly so: I doubt it contributes significantly to the size of the whole distribution. > 2. A reference manual, listing all the available functions and types > (it will probably duplicate a lot of what is in the tutorial). I would feel that this is a must-have document, and completeness is certainly a goal here. Now, if certain parts are still undocumented, I wouldn't make creation of this documentation a release requirement, but text that already exists should be included. > 3. Some articles/howtos which cover advanced issues. I don't think they are necessary; if Andrew thinks the howto section would be a good place, I don't mind. > I have the start of a reference manual, which is about 12 pages now. > Incomplete, some sections are not yet or no longer correct, but you > can take a look at it here: > > http://starship.python.net/crew/theller/manual/manual.html > > This could be completed and committed into SVN soon. Looks good to me. Martin From thomas at python.org Fri Apr 28 17:39:19 2006 From: thomas at python.org (Thomas Wouters) Date: Fri, 28 Apr 2006 17:39:19 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <e2tb2v$816$1@sea.gmane.org> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> <e2tb2v$816$1@sea.gmane.org> Message-ID: <9e804ac0604280839q5e694d4fq24dcacd66849e6ee@mail.gmail.com> On 4/28/06, Thomas Heller <theller at python.net> wrote: > Would not another way be to make sure Modules is moved *behind* the > setup.py build directory on sys.path? Indeed! I hadn't realized that, although I might've if I'd been able to find where Modules is put on sys.path. And, likewise, I would do as you suggest (which feels like the right thing) if I could only find out where Modules is put on sys.path :) I don't have time to search for it today nor, probably, this weekend (which is a party weekend in the Netherlands.) I'll get to it eventually, although a helpful hint from an old-timer who remembers as far back as Modules/Setup would be welcome. :) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mail.python.org/pipermail/python-dev/attachments/20060428/dcfa6ce4/attachment.html From martin at v.loewis.de Fri Apr 28 17:41:15 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 28 Apr 2006 17:41:15 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ca471dc20604280738w55deb6a2g9bbfa4cebc650e2b@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <ca471dc20604280738w55deb6a2g9bbfa4cebc650e2b@mail.gmail.com> Message-ID: <4452379B.10705@v.loewis.de> Guido van Rossum wrote: >> - bdist_deb in distutils package >> (Owner: ???) >> http://mail.python.org/pipermail/python-dev/2006-February/060926.html > > Isn't that MvL? I spoke in favour of including it, but don't recall ever to committing as a "sponsor/second" of including it (and I certainly didn't write it). I can't find a patch for bdist_deb right now, so if nobody steps forward with actual code to include, this gets probably dropped from 2.5. Notice that the URL quoted actually speaks against a bdist_deb command. >> - Support for building "fat" Mac binaries (Intel and PPC) >> (Owner: Ronald Oussoren) > > Yes, this would be cool. This is nearly committed. For some reason, SF apparently dropped my last change request for it, and Ronald's patch responding to this request, so it barely missed 2.5a2. Regardless, I guess Ronald will release his 2.5a2 binaries in the fat form. For those interested, one surprising (for me) challenge in this is that WORDS_BIGENDIAN can't be a configure-time constant anymore. It doesn't need to be a run-time thing, either, because at run-time, you know the endianness of the (current) processor. It isn't configure-time constant, because each individual gcc invocations runs *two* compiler passes (cc1) and *two* assemblers, which incidentally have different endiannesses. So WORDS_BIGENDIAN now must be a compile-time constant; fortunately, gcc defines either __BIG_ENDIAN__ or __LITTLE_ENDIAN__, depending on whether it is the PPC cc1 or the x86 cc1 that runs. To make this work with autoconf on systems which don't define either of these constants, pyconfig.h.in now has a block that derives WORDS_BIGENDIAN from __(BIG|LITTLE)_ENDIAN__ if they are defined, and makes it configure-time otherwise. So in short: WORDS_BIGENDIAN will always be right, and will have two different values in a fat binary. Regards, Martin From amk at amk.ca Fri Apr 28 17:54:07 2006 From: amk at amk.ca (A.M. Kuchling) Date: Fri, 28 Apr 2006 11:54:07 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <5.1.1.6.0.20060428105534.01e1ce18@mail.telecommunity.com> References: <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <5.1.1.6.0.20060428105534.01e1ce18@mail.telecommunity.com> Message-ID: <20060428155407.GA12523@localhost.localdomain> On Fri, Apr 28, 2006 at 11:02:07AM -0400, Phillip J. Eby wrote: > I doubt I'll have time to write documentation for it before alpha 3. If > it's okay for the docs to wait for one of the beta releases -- or better > yet, if someone could volunteer to create rough draft documentation that I > could just then edit -- then it shouldn't be a problem getting it in and > integrated. Barring some radical new thing in alpha3, the heavy lifting of the "What's New" is done so I'm available to help with documentation. (The functional programming howto can wait a little while longer.) 
I assume all we need is the module-level docs for the LibRef? So, what's the scope of the proposed addition? Everything in the wsgiref package, including the simple_server module? > However, just to avoid the sort of thing that happened with setuptools, I > would suggest, Guido, that you make a last call for objections on the > Web-SIG ... Agreed. Guido, another suggestion: an Artima weblog post about the proposal, so that blog commentators notice it. --amk From martin at v.loewis.de Fri Apr 28 18:11:23 2006 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 28 Apr 2006 18:11:23 +0200 Subject: [Python-Dev] Dropping __init__.py requirement for subpackages In-Reply-To: <9e804ac0604280839q5e694d4fq24dcacd66849e6ee@mail.gmail.com> References: <ca471dc20604261016g14854274i970d6f4fc72561c7@mail.gmail.com> <e2ole6$dd$1@sea.gmane.org> <ca471dc20604261349l6fe795eob2b021afa35f11cd@mail.gmail.com> <9e804ac0604261540w6695262bod1bcf194be1c08a5@mail.gmail.com> <ca471dc20604261558p719dc5b6w7c35040432059dfe@mail.gmail.com> <9e804ac0604261610w1e8a1a02l271174fca1b58f3f@mail.gmail.com> <9e804ac0604270739k464f5fe6h4a473e5161f99ee7@mail.gmail.com> <e2tb2v$816$1@sea.gmane.org> <9e804ac0604280839q5e694d4fq24dcacd66849e6ee@mail.gmail.com> Message-ID: <44523EAB.4080109@v.loewis.de> Thomas Wouters wrote: > Indeed! I hadn't realized that, although I might've if I'd been able to > find where Modules is put on sys.path. And, likewise, I would do as you > suggest (which feels like the right thing) if I could only find out > where Modules is put on sys.path :) I don't have time to search for it > today nor, probably, this weekend (which is a party weekend in the > Netherlands.) I'll get to it eventually, although a helpful hint from an > old-timer who remembers as far back as Modules/Setup would be welcome. :) With some debugging, I found it out: search_for_exec_prefix looks for the presence of Modules/Setup; if that is found, it strips off /Setup, leaving search_for_exec_prefix with -1. This then gets added to sys.path with /* Finally, on goes the directory for dynamic-load modules */ strcat(buf, exec_prefix); I wasn't following exactly, so I might have mixed something up, but... it appears that this problem here comes from site.py adding the build directory for the distutils dynamic objects even after Modules. The site.py code is # XXX This should not be part of site.py, since it is needed even when # using the -S option for Python. See http://www.python.org/sf/586680 def addbuilddir(): """Append ./build/lib.<platform> in case we're running in the build dir (especially for Guido :-)""" from distutils.util import get_platform s = "build/lib.%s-%.3s" % (get_platform(), sys.version) s = os.path.join(os.path.dirname(sys.path[-1]), s) sys.path.append(s) I would suggest fixing #586680: Add build/lib.* before adding dynamic-load modules, by moving the code into Modules/getpath.c. You should be able to use efound < 0 as an indication that this is indeed the build directory. Regards, Martin From mal at egenix.com Fri Apr 28 18:20:44 2006 From: mal at egenix.com (M.-A. 
Lemburg) Date: Fri, 28 Apr 2006 18:20:44 +0200 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> References: <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> Message-ID: <445240DC.3070400@egenix.com> Phillip J. Eby wrote: > At 09:54 PM 4/27/2006 +0200, M.-A. Lemburg wrote: >> Note that I was talking about the .pth file being >> writable, not the directory. > > Please stop this vague, handwaving FUD. You have yet to explain how this > situation is supposed to arise. Is there some platform on which files with > a .pth extension are automatically writable by users *when .py files are > not also*? > > If not, then you are talking out of your... um, hat. If files are writable > by other users by default, then .py files are too. Once again: *no new > vector*. > >> Even if they are not >> writable by non-admins, the .pth files can point >> to directories which are. > > Uh huh. And how does that happen, exactly? Um, the user puts them > there? What is your point, exactly? That people can do things insecurely > if they're allowed near a computer? You don't seem to want to see the problem: By modifying site.py to take .pth files on all directories in PYTHONPATH into account, you extend the possibility to add "symbolic" links to a lot more directories than those which are normally monitored by the sysadmin. Every entry in a .pth file is added to sys.path, it even allows imports to happen at .pth file scanning time. A single malicious .pth file on PYTHONPATH could trick the user into running Python modules that he isn't really aware of. Adding directories to PYTHONPATH which are writable by others is easy, just add '/tmp' as entry in some .pth file. Then put e.g. a modified os.py into /tmp and have it execute 'rm -rf /' at import time. In summary: I don't think that allowing .pth files on all PYTHONPATH directories is the right way to go. In order to solve your problem with users not being able to install user local copies of packages, we should instead discuss the possibility of having a standard directory on PYTHONPATH that is being searched for such packages and then allow .pth files only in that directory (which is under user control). Please consider discussing such things on python-dev before implementing a hack that works around standard Python behavior. >> Here's a HOWTO to optimize startup time, without loosing >> convenience: > > I meant, a HOWTO for setuptools users who care about this, although at the > moment I have heard only from one -- and you're not, AFAIK, actually a > setuptools user. You asked for a HOWTO, I gave you one :-) I have played with setuptools and it lots of interesting things, most of which are great and I applaud you for taking the effort in creating the tool. However, as you know, I do have a few issues with it that I'd like to see resolved, which is why I keep trying to convince you of considering them (ever since you started in 2004 with this). >> No, I'm talking about a format which has the same if not >> more benefits as what you're trying to achieve with the >> .egg file approach, but without all the magic and hacks. >> >> It's not like this wouldn't be possible to achieve. > > That may or may not be true. 
Perhaps if you had participated in the > original call to the distutils-sig for developing such a format (back in > December 2004), perhaps the design would've been more to your liking. > > Oh wait... you did: > > http://mail.python.org/pipermail/distutils-sig/2004-December/004351.html Indeed. And I suggested that you reconsider the idea to use ZIP files for installation (rather than just distribution): http://mail.python.org/pipermail/distutils-sig/2004-December/004349.html > And if you replace 'syspathtools.use()' in that email, with > 'pkg_resources.require()', then it describes *exactly how setuptools > works with .egg directories today*. Interesting that you used that idea, because back then you didn't reply to the email. Looks like I deserve some credit ;-) If you've already implemented this (which I wasn't aware of, since when I played with setuptools it kept installing .egg ZIP files), then why don't you make .egg *directories* the standard installation scheme, instead of insisting on having .egg ZIP files in site-packages/ ? I've now played with it again and found that for some packages (e.g. kid and Paste) it installs these as .egg directories (horray!). For other packages such as elementtree which are not available as egg files, it still creates egg files (by first downloading the source package, then creating an egg file and installing that). It also uses the egg files for quite a few packages that are distributed as egg files. So far, I've not found a pattern to this. I wonder why you don't always create .egg directories. > Apparently, you hadn't yet thought of any the objections that you are now > raising to *the very scheme that you yourself proposed*, until somebody > else took the trouble to actually implement it. Where am I doing that ? > And now you want to say that I never listen to or implement your > proposals? Please. Your email is the first documentation on record of how > this system works! I'll repeat it again (hopefully the last time): * I'm not against using directories for managing packages * I'm not against using ZIP files of whatever format for the distribution of packages. * I'm not against adding all kinds of new and useful commands to distutils, including bdist_egg and a new install_as_egg command * I am against using ZIP as medium for package installation. * I am against using standard distutils commands to mean different things depending on which set of packages you have installed (in this case the "install" command) I believe that these requests are very reasonable. >> Not really. > > Then I won't bother adding it, since nobody else asked for it. But don't > ever say that I didn't offer you a real solution to the behavior you > complained about. > > Meanwhile, I will take your choice as prima facie evidence that the things > you are griping about have no real impact on you, and you are simply trying > to make other people conform to your views of how things "should" be, > rather than being interested in solving actual problems. > > It also makes it clear that your opinion about setuptools default > installation behavior isn't relevant to any future Python-Dev discussion > about setuptools' inclusion in the standard library, because it's: > > 1. Obviously not a real problem for you (else you'd accept the offer of a > feature that would permanently solve it for you) > > 2. 
Not something that anybody else has complained about since I made --root > automatically activate distutils compatibility > > In short, the credibility of your whining about this point and my supposed > failure to accomodate you is now thoroughly debunked. I added an option to > enable this behavior, I made other options enable the behavior where it > could be reasonably assumed valid, and I offered you an option you could > use to globally disable it for *any* package using setuptools, so that it > would never affect you again. > > (And all of this... to disable the behavior that implements a scheme that > you yourself proposed as the best way to do this sort of thing!) Phillip, you're always side-stepping answering questions (and cutting away quoted email text doesn't help either). Adding dozens of options to your system doesn't help remedy the problem, it only makes things worse. Of course, there are always ways for me to have my way of doing things - from hacking setuptools to simply avoiding setuptools altogether. But that's not what I want. I would like setuptools to play nice with the rest of the Python world and become part of the standard distribution. However, the level of magic introduced by setuptools is simply not what we're used to in the Python world, at least not around here (meaning python-dev). The few suggestions I made would not hurt setuptools much, but would help put it back into the "explicit is better than implicit" camp, simply because a lot of magic could be removed. As an example, you could avoid all the problems you currently have with C extensions in .egg ZIP files. You currently unzip just the C extension to a temporary cache directory and add proxies for in the original packages (a strategy that may or may not work depending on how the C extension is tied to the rest of the package). By installing the .egg ZIP file as .egg directory, you'd avoid this completely. Another example: the code that tries to find packages on the web. Screen scrapping is not really what I'd consider a safe way to find a package - the user might end up installing a completely different package. Your argument is that you'd like things to work without the user having to pop up a browser and find the right URL for a package (your usually "just works" argument). You should be aware that both package authors and users are capable of learning and adapting. If we tell package authors to put a link to their eggs on PyPI along with an MD5 sum, then the users be in a much better position and could feel more secure about what they download and install. It may take a while, but in the end will make the Python world better and safer for everyone, while still making the user experience a good one. Plus, you could rip out all the code that currently tries to read web-sites and find packages that resemble the package name, reducing magic. Have a nice weekend, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 28 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From pje at telecommunity.com Fri Apr 28 18:36:53 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Fri, 28 Apr 2006 12:36:53 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <20060428155407.GA12523@localhost.localdomain> References: <5.1.1.6.0.20060428105534.01e1ce18@mail.telecommunity.com> <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <5.1.1.6.0.20060428105534.01e1ce18@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060428123438.01e24de8@mail.telecommunity.com> At 11:54 AM 4/28/2006 -0400, A.M. Kuchling wrote: >On Fri, Apr 28, 2006 at 11:02:07AM -0400, Phillip J. Eby wrote: > > I doubt I'll have time to write documentation for it before alpha 3. If > > it's okay for the docs to wait for one of the beta releases -- or better > > yet, if someone could volunteer to create rough draft documentation that I > > could just then edit -- then it shouldn't be a problem getting it in and > > integrated. > >Barring some radical new thing in alpha3, the heavy lifting of the >"What's New" is done so I'm available to help with documentation. >(The functional programming howto can wait a little while longer.) I >assume all we need is the module-level docs for the LibRef? > >So, what's the scope of the proposed addition? Everything in the >wsgiref package, including the simple_server module? Yes. simple_server is, coincidentally, the most controversial point on the Web-SIG, in that some argue for including a "better" web server. However, nobody has come forth and said, "Here's my web server, it's stable and I want to put it in the stdlib", so the discussion wound down in general vagueness the last time it was brought up. From mal at egenix.com Fri Apr 28 18:43:00 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 28 Apr 2006 18:43:00 +0200 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <445240DC.3070400@egenix.com> References: <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> <445240DC.3070400@egenix.com> Message-ID: <44524614.1050803@egenix.com> M.-A. Lemburg wrote: >>> No, I'm talking about a format which has the same if not >>> more benefits as what you're trying to achieve with the >>> .egg file approach, but without all the magic and hacks. >>> >>> It's not like this wouldn't be possible to achieve. >> That may or may not be true. Perhaps if you had participated in the >> original call to the distutils-sig for developing such a format (back in >> December 2004), perhaps the design would've been more to your liking. >> >> Oh wait... you did: >> >> http://mail.python.org/pipermail/distutils-sig/2004-December/004351.html > > Indeed. And I suggested that you reconsider the idea to > use ZIP files for installation (rather than just distribution): > > http://mail.python.org/pipermail/distutils-sig/2004-December/004349.html > >> And if you replace 'syspathtools.use()' in that email, with >> 'pkg_resources.require()', then it describes *exactly how setuptools >> works with .egg directories today*. > > Interesting that you used that idea, because back then you didn't > reply to the email. 
Looks like I deserve some credit ;-) > > If you've already implemented this (which I wasn't aware of, since > when I played with setuptools it kept installing .egg ZIP files), > then why don't you make .egg *directories* the standard installation > scheme, instead of insisting on having .egg ZIP files in > site-packages/ ? > > I've now played with it again and found that for some > packages (e.g. kid and Paste) it installs these as .egg > directories (horray!). > > For other packages such as elementtree which are not available > as egg files, it still creates egg files (by first downloading the > source package, then creating an egg file and installing that). > > It also uses the egg files for quite a few packages that are > distributed as egg files. > > So far, I've not found a pattern to this. > > I wonder why you don't always create .egg directories. I've now found this section in the documentation which seems to have the reason: http://peak.telecommunity.com/DevCenter/EasyInstall#compressed-installation Apart from the statement "because Python processes zipfile entries on sys.path much faster than it does directories." being wrong, it looks like all you'd have to do, is make --always-unzip the default. Another nit which seems to have been introduced in 0.6a11: you now prepend egg directory entries to other sys.path entries, instead of appending them. What's the reason for that ? Egg directory should really be treated just like any other site-package package and not be allowed to override stdlib modules and packages without explicit user action by e.g. adjusting PYTHONPATH. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 28 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From ronaldoussoren at mac.com Fri Apr 28 19:17:35 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 28 Apr 2006 19:17:35 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <44522FCB.1060106@v.loewis.de> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> Message-ID: <7E788D21-BFB8-4B51-820E-F09E6F0E1E10@mac.com> On 28-apr-2006, at 17:07, Martin v. L?wis wrote: > Neal Norwitz wrote: >> Who is the owner for getting new icons installed with the new logo? > > For the OSX icons, I guess Ronald Oussoren owns the task of getting > them into the distribution. That's fine by me. I'll be adding them to the universal python 2.4 tree soon and to 2.5 when that's is done. Do have to wait for the contributor agreement, which the folks over at psf at python.org say is good enough, to do that? Ronald From ronaldoussoren at mac.com Fri Apr 28 19:21:47 2006 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 28 Apr 2006 19:21:47 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <4452379B.10705@v.loewis.de> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <ca471dc20604280738w55deb6a2g9bbfa4cebc650e2b@mail.gmail.com> <4452379B.10705@v.loewis.de> Message-ID: <207A443A-FE3D-44D3-BE3B-D0158DD53471@mac.com> On 28-apr-2006, at 17:41, Martin v. 
L?wis wrote: > > >>> - Support for building "fat" Mac binaries (Intel and PPC) >>> (Owner: Ronald Oussoren) >> >> Yes, this would be cool. > > This is nearly committed. For some reason, SF apparently dropped > my last change request for it, and Ronald's patch responding > to this request, so it barely missed 2.5a2. Regardless, I guess > Ronald will release his 2.5a2 binaries in the fat form. I hope this isn't a bad omen for this feature ;-). I've also had to resubmit one of my changes to this tracker item because SF dropped it. My 2.5a2 build will include this patch and will be a fat binary. Ronald From martin at v.loewis.de Fri Apr 28 19:24:14 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 28 Apr 2006 19:24:14 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <7E788D21-BFB8-4B51-820E-F09E6F0E1E10@mac.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> <7E788D21-BFB8-4B51-820E-F09E6F0E1E10@mac.com> Message-ID: <44524FBE.1020804@v.loewis.de> Ronald Oussoren wrote: > That's fine by me. I'll be adding them to the universal python 2.4 > tree soon and to 2.5 when that's is done. Do have to wait for the > contributor agreement, which the folks over at psf at python.org say is > good enough, to do that? If the artist has informally agreed to do that (so it is just the formal signing that takes time), go ahead. Regards, Martin From pje at telecommunity.com Fri Apr 28 19:33:51 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 13:33:51 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <445240DC.3070400@egenix.com> References: <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060428123724.04488590@mail.telecommunity.com> At 06:20 PM 4/28/2006 +0200, M.-A. Lemburg wrote: >A single malicious .pth file on PYTHONPATH could trick >the user into running Python modules that he isn't really >aware of. > >Adding directories to PYTHONPATH which are writable by >others is easy, just add '/tmp' as entry in some .pth >file. Then put e.g. a modified os.py into /tmp and have >it execute 'rm -rf /' at import time. You still haven't explained *how* these malicious .pth files or entries come into being *without the user's permission*. How does this malicious user get write access to the .pth file to put /tmp into it? Whatever your answer is, *that* is the security hole, not the existence of usable .pth files. So this is still just handwaving and FUD. > > And if you replace 'syspathtools.use()' in that email, with > > 'pkg_resources.require()', then it describes *exactly how setuptools > > works with .egg directories today*. > >Interesting that you used that idea, because back then you didn't >reply to the email. Looks like I deserve some credit ;-) In truth, I completely forgot about that post and only rediscovered it yesterday. When Bob and I were doing the initial design, he said he thought that the naively-unzipped form of an egg should be usable as-is, and I agreed that it made sense to do that. 
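For readers following the .pth argument in this exchange, the processing being debated works roughly as follows (a simplified sketch of what site.py does, not the exact stdlib code):

    # Simplified sketch of .pth handling (see site.addpackage for the real
    # thing): each non-comment line either extends sys.path or, if it
    # starts with "import", is executed at interpreter startup -- which is
    # why write access to a .pth file (or to a directory named in one) is
    # the crux of the security question.
    import os, sys

    def process_pth(sitedir, name):
        for line in open(os.path.join(sitedir, name)):
            line = line.rstrip()
            if not line or line.startswith('#'):
                continue
            if line.startswith('import'):
                exec line                      # runs arbitrary code
            else:
                entry = os.path.join(sitedir, line)
                if os.path.exists(entry) and entry not in sys.path:
                    sys.path.append(entry)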
>If you've already implemented this (which I wasn't aware of, since >when I played with setuptools it kept installing .egg ZIP files), See the --always-unzip option to easy_install, which has been there since 0.5a9 -- i.e., since July 2005. Package authors can also mark their packages "zip_safe=False" to force it to be installed unzipped (unless the user forces it to be zipped using --zip-ok). >then why don't you make .egg *directories* the standard installation >scheme, instead of insisting on having .egg ZIP files in >site-packages/ ? Because zip files add *less* import overhead over the life of a program (even a relatively short program) than directories do! (Which, by the way, I believe I've explained to you on the distutils-SIG on more than one occasion, during some of the previous N times you brought this up.) >I wonder why you don't always create .egg directories. Performance, plain and simple. What you *should* be benchmarking when you test performance impact, is twenty directories at the front of sys.path vs. twenty zipfiles at the front, both for "python -c pass" and for a program that does a few stdlib imports. That will show you why EasyInstall never unzips eggs if it can help it. >* I am against using ZIP as medium for package installation. Then add this to your distutils.cfg or ~/.pydistutils.cfg and be happy forevermore: [easy_install] always_unzip = 1 If, however, you mean that you want *everyone else* to do as you prefer, forget about it, 'cause it's not going to happen. You aren't setuptools' target user, so the defaults aren't going to be catering to your prejudices -- especially not the ones based on ignorance, like this one. >* I am against using standard distutils commands to mean > different things depending on which set of packages you > have installed (in this case the "install" command) And as I just pointed out, you've failed to show that there is any practical purpose to this. I've already made the behavior an option, and it's the default behavior when you use --root. You haven't shown any actual use case for it being the default behavior, and practicality beats purity. What's more, you said you don't want the option to make it the default behavior for you. Which means that this is only about you deciding what is "right" for *other* people than you to have, so I don't see any reason to consider that subject any further. If other people want that behavior, they can certainly speak up for themselves about what *actual* (not hypothetical) problems it creates for them. (And when they do speak up, as some system packagers did, I made --root enable the old behavior and that addressed the *actual* problems the new behavior created for that audience.) >Phillip, you're always side-stepping answering questions >(and cutting away quoted email text doesn't help either). Repeating arguments I've already shown to be spurious helps even less. I don't see much point to chasing red herrings and straw men, either. >You should be aware that both package authors and users >are capable of learning and adapting. Then they can certainly adapt to "install" doing the right thing for most users and most packages: to install things in such a way that they can be uninstalled or upgraded cleanly, by default. >If we tell package authors to put a link to their eggs on >PyPI along with an MD5 sum, then the users be in a much >better position and could feel more secure about what they >download and install. 
People who want to do that can easily use: easy_install --allow-hosts=*.python.org or put it in their configuration files to make it the default behavior. See: http://peak.telecommunity.com/DevCenter/EasyInstall#restricting-downloads-with-allow-hosts >Have a nice weekend, It'd be an even nicer one if people objecting to things would read the documentation first, to find out whether their objections actually apply. Your zip vs. directory arguments would've been moot if you'd read things like this: http://peak.telecommunity.com/DevCenter/EasyInstall#compressed-installation Or for that matter, if you'd read the documentation that was linked to in the post that started this thread, which explains that .egg can be a zipfile or a directory. Or if you'd read the parts of the EasyInstall documentation where it explains that OS X already processes .pth files from ~/Library/Python2.x/site-packages. If you don't have time to read the documentation, please have the courtesy to *ask questions* to try to alleviate your concerns, preferably *before* you launch attacks using nothing but vague premonitions based on ignorance. From g.brandl at gmx.net Fri Apr 28 19:36:37 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 28 Apr 2006 19:36:37 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <44522FCB.1060106@v.loewis.de> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> Message-ID: <e2tjr5$890$1@sea.gmane.org> Martin v. L?wis wrote: > Neal Norwitz wrote: >> Who is the owner for getting new icons installed with the new logo? > > Nobody, so far (for Windows). Somebody should contact the owner and > find out what the license on this work is, and then tell me what > to do with them. I assume the py and pyc icons are to be replaced, and I > also assume the Vista icons are to be ignored, but what about the > other(s)? As I posted here previously, I contacted the owner, and he said that he didn't care about specifying a license. I guess that means that we can pick one ;) Georg From nnorwitz at gmail.com Fri Apr 28 19:37:11 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 28 Apr 2006 10:37:11 -0700 Subject: [Python-Dev] Summer of Code mailing list Message-ID: <ee2a432c0604281037x714dbb60ub97047270b346e5d@mail.gmail.com> There's a new SoC mailing list. soc2006 at python.org You can sign up here: http://mail.python.org/mailman/listinfo/soc2006 This list is for any SoC discussion: mentors, students, idea, etc. Student can submit applications starting May 1, so now is the time to get students interested in your ideas! Please pass this information along. Cheers, n From pje at telecommunity.com Fri Apr 28 19:51:32 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 13:51:32 -0400 Subject: [Python-Dev] Internal documentation for egg formats now available In-Reply-To: <44524614.1050803@egenix.com> References: <445240DC.3070400@egenix.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060425183523.01e59ae8@mail.telecommunity.com> <5.1.1.6.0.20060427131801.01e4fa90@mail.telecommunity.com> <5.1.1.6.0.20060427171209.01e27fe8@mail.telecommunity.com> <445240DC.3070400@egenix.com> Message-ID: <5.1.1.6.0.20060428133628.0379f008@mail.telecommunity.com> (Thank you, by the way, for actually reading some of the documentation before writing this post, and for asking questions instead of jumping to conclusions.) At 06:43 PM 4/28/2006 +0200, M.-A. 
Lemburg wrote: >I've now found this section in the documentation which seems to >have the reason: > >http://peak.telecommunity.com/DevCenter/EasyInstall#compressed-installation > >Apart from the statement "because Python processes zipfile entries on >sys.path much faster than it does directories." being wrong, Measure it. Be sure to put the directories or zipfiles *first* on the path, not last. The easiest way to do so accurately would be to easy_install some packages first using --zip-ok and then using --always-unzip, and compare the two. Finally, try installing them --multi-version, and compare that, to get the speed when none of the packages are explicitly put on sys.path. >it looks like all you'd have to do, is make --always-unzip the >default. You mean, all that *you'd* have to do is put it in your distutils configuration to make it the default for you, if for some reason you have a lot of programs that resemble "python -c 'pass'" in their import behavior. :) >Another nit which seems to have been introduced in 0.6a11: >you now prepend egg directory entries to other sys.path entries, >instead of appending them. > >What's the reason for that ? It eliminates the possibility of conflicts with system-installed packages, or packages installed using the distutils, and provides the ability to install stdlib upgrades. >Egg directory should really be treated just like any other >site-package package and not be allowed to override stdlib >modules and packages Adding them *after* site-packages makes it impossible for a user to install a local upgrade to a system-installed package in site-packages. One problem that I think you're not taking into consideration, is that setuptools does not overwrite packages except with an identical version. It thus cannot replace an existing "raw" installation of a package by the distutils, except by deleting it (which it used to support via --delete-conflicting) or by installing ahead of the conflict on sys.path. Since one of the problems with using --delete-conflicting was that users often don't have write access to site-packages, it's far simpler to just organize sys.path so that eggs always take precedence over their parent directory. Thus, eggs in site-packages get precedence over site-packages itself, and eggs on PYTHONPATH get precedence over PYTHONPATH. >without explicit user action by >e.g. adjusting PYTHONPATH. Installing to a PYTHONPATH directory *is* an explicit user action. Installing something anywhere is an explicit user request: "I'd like this package to be importable, please." If you don't want what you install to be importable by default, use --multi-version, which installs packages but doesn't put them on sys.path until you ask for them at runtime. From theller at python.net Fri Apr 28 20:01:23 2006 From: theller at python.net (Thomas Heller) Date: Fri, 28 Apr 2006 20:01:23 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <e2tjr5$890$1@sea.gmane.org> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> <e2tjr5$890$1@sea.gmane.org> Message-ID: <e2tl9j$d6a$1@sea.gmane.org> Georg Brandl wrote: > Martin v. L?wis wrote: >> Neal Norwitz wrote: >>> Who is the owner for getting new icons installed with the new logo? >> Nobody, so far (for Windows). Somebody should contact the owner and >> find out what the license on this work is, and then tell me what >> to do with them. 
I assume the py and pyc icons are to be replaced, and I >> also assume the Vista icons are to be ignored, but what about the >> other(s)? > > As I posted here previously, I contacted the owner, and he said that he > didn't care about specifying a license. I guess that means that we can > pick one ;) > If this is really possible (but I doubt that) I suggest we pick the license that Just had for his logo: """ .... If you do not agree with this LICENSE, you had better not looked at THE LOGO. """ http://starship.python.net/~just/pythonpowered/ Thomas From guido at python.org Fri Apr 28 20:03:47 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 28 Apr 2006 11:03:47 -0700 Subject: [Python-Dev] Adding wsgiref to stdlib Message-ID: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> PEP 333 specifies WSGI, the Python Web Server Gateway Interface v1.0; it's written by Phillip Eby who put a lot of effort in it to make it acceptable to very diverse web frameworks. The PEP has been well received by web framework makers and users. As a supplement to the PEP, Phillip has written a reference implementation, "wsgiref". I don't know how many people have used wsgiref; I'm using it myself for an intranet webserver and am very happy with it. (I'm asking Phillip to post the URL for the current source; searching for it produces multiple repositories.) I believe that it would be a good idea to add wsgiref to the stdlib, after some minor cleanups such as removing the extra blank lines that Phillip puts in his code. Having standard library support will remove the last reason web framework developers might have to resist adopting WSGI, and the resulting standardization will help web framework users. Last time this was brought up there were feature requests and discussion on how "industrial strength" the webserver in wsgiref ought to be but nothing like the flamefest that setuptools caused (no comments please). I'm inviting people to discuss the addition of wsgiref to the standard library. I'd like the discussion to be finished before a3 goes out; technically it can go in up till the b1 code freeze, but I don't really want to push it that close. I'd like the focus of the discussion to be "what are the risks of adding wsgiref to the stdlib"; not "what could we think of that's even better". Achieving a perfect decision is not the goal; having general consensus that adding it would be better than not adding is would be good. Pointing out specific bugs in wsgiref and suggesting how they ought to be fixed is also welcome. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From janssen at parc.com Fri Apr 28 20:31:09 2006 From: janssen at parc.com (Bill Janssen) Date: Fri, 28 Apr 2006 11:31:09 PDT Subject: [Python-Dev] Adding wsgiref to stdlib In-Reply-To: Your message of "Fri, 28 Apr 2006 11:03:47 PDT." <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> Message-ID: <06Apr28.113116pdt."58641"@synergy1.parc.xerox.com> > I'm inviting people to discuss the addition of wsgiref to the standard > library. I'd like the discussion to be finished before a3 goes out; +1. I think it's faily low-risk. WSGI has been discussed and implemented for well over a year; there are many working implementations of the spec. Adding wsgiref to the stdlib would help other implementors of the spec. I think there should be a better server implementation in the stdlib, but I think that can be added separately. (Personally, I'd like to find the time to (a) make Medusa thread-safe, and (b) add WSGI to it. 
If anyone would like to help with that, send me mail.) Bill From pje at telecommunity.com Fri Apr 28 20:36:11 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 14:36:11 -0400 Subject: [Python-Dev] Adding wsgiref to stdlib In-Reply-To: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.co m> Message-ID: <5.1.1.6.0.20060428142518.01e2b180@mail.telecommunity.com> At 11:03 AM 4/28/2006 -0700, Guido van Rossum wrote: >(I'm asking Phillip to post the URL for the current >source; searching for it produces multiple repositories.) Source browsing: http://svn.eby-sarna.com/wsgiref/ Anonymous SVN: svn://svn.eby-sarna.com/svnroot/wsgiref From jjl at pobox.com Fri Apr 28 20:37:14 2006 From: jjl at pobox.com (John J Lee) Date: Fri, 28 Apr 2006 19:37:14 +0100 (GMT Standard Time) Subject: [Python-Dev] Bug day? Message-ID: <Pine.WNT.4.64.0604281935470.3964@shaolin> Is another bug day planned in the next week or two? John From martin at v.loewis.de Fri Apr 28 21:22:54 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 28 Apr 2006 21:22:54 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <e2tjr5$890$1@sea.gmane.org> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> <e2tjr5$890$1@sea.gmane.org> Message-ID: <44526B8E.2070809@v.loewis.de> Georg Brandl wrote: > As I posted here previously, I contacted the owner, and he said that he > didn't care about specifying a license. I guess that means that we can > pick one ;) Can you please ask whether he would be willing to fill out a contrib form (http://www.python.org/psf/contrib/contrib-form/)? Without some kind of explicit contribution, I hesitate to use it. Regards, Martin From engelbert.gruber at ssg.co.at Fri Apr 28 21:25:39 2006 From: engelbert.gruber at ssg.co.at (engelbert.gruber at ssg.co.at) Date: Fri, 28 Apr 2006 21:25:39 +0200 (CEST) Subject: [Python-Dev] rest2latex - was: Re: Raising objections In-Reply-To: <e2tbfe$akr$1@sea.gmane.org> References: <ee2a432c0604182110w49a84a38iace62e9d9b3bf424@mail.gmail.com> <200604191457.24195.anthony@interlink.com.au> <1145424790.21740.86.camel@geddy.wooz.org> <ee2a432c0604182255t4bda0170g14b84c4ddb6b8a85@mail.gmail.com> <4445FADF.90307@python.net> <4451DE1B.2090804@ghaering.de> <4451E6C1.4040506@python.net> <Pine.LNX.4.64.0604281216430.5662@lx3.local> <e2tbfe$akr$1@sea.gmane.org> Message-ID: <Pine.LNX.4.64.0604282116350.6504@lx3.local> i commited mkpydoc to docutils/sandbox/pydoc-writer with some small modifications - Patch for python 2.3. - Filenames from command line. - Guard definition of ``locallinewidth`` against redefinition. - latex needs definition of ``locallinewidth``. 1. there isnt a copyright in mkpydoc.py, could everybody involved agree on PD so it can go into docutils. 2. edloper/docpy is another possibility 3. as is http://www.rexx.com/~dkuhlman/rstpythonlatex_intro.html 4. AND * An unpatched latex2html is unable to handle ``longtable`` options. Maybe remove the ``[c]`` and put the longtable into a center environment, but python doc uses ``tablei`` and `` longtablei``. * (th) the table in the ctypes tutorial has a totally different look than the other tables in the docs. Compare `ctypes <http://docs.python.org/dev/lib/ctypes-simple-data-types.html>`_ with `std pydoc <http://docs.python.org/dev/lib/module-struct.html>`__ . 
see previous * * (th) feature request: it would be very nice if it were possible to generate links into the index for functions and types from the rest sources. * Document ``markup.py`` and ``missing.py``. * Title, author, ... are hardcoded. and i am on workshop next week, so please be patient and i dont know if this will ever work. -- contributions welcome --- d.goodger From g.brandl at gmx.net Fri Apr 28 21:31:24 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 28 Apr 2006 21:31:24 +0200 Subject: [Python-Dev] Float formatting and # Message-ID: <e2tqic$v6e$1@sea.gmane.org> Is there a reason why the "alternate format" isn't documented for float conversions in http://docs.python.org/lib/typesseq-strings.html ? '%#8.f' % 1.0 keeps the decimal point while '%8.f' % 1.0 drops it. Also, for %g the alternate form keeps trailing zeroes. While at it, I noticed a difference between %f and %g: '%.3f' % 1.123 is "1.123" while '%.3g' % 1.123 is "1.12". Is that intentional? Georg From ianb at colorstudy.com Fri Apr 28 21:32:35 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 14:32:35 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> Message-ID: <44526DD3.4010807@colorstudy.com> Guido van Rossum wrote: > PEP 333 specifies WSGI, the Python Web Server Gateway Interface v1.0; > it's written by Phillip Eby who put a lot of effort in it to make it > acceptable to very diverse web frameworks. The PEP has been well > received by web framework makers and users. > > As a supplement to the PEP, Phillip has written a reference > implementation, "wsgiref". I don't know how many people have used > wsgiref; I'm using it myself for an intranet webserver and am very > happy with it. (I'm asking Phillip to post the URL for the current > source; searching for it produces multiple repositories.) > > I believe that it would be a good idea to add wsgiref to the stdlib, > after some minor cleanups such as removing the extra blank lines that > Phillip puts in his code. Having standard library support will remove > the last reason web framework developers might have to resist adopting > WSGI, and the resulting standardization will help web framework users. I'd like to include paste.lint with that as well (as wsgiref.lint or whatever). Since the last discussion I enumerated in the docstring all the checks it does. There's still some outstanding issues, mostly where I'm not sure if it is too restrictive (marked with @@ in the source). It's at: http://svn.pythonpaste.org/Paste/trunk/paste/lint.py I think another useful addition would be some prefix-based dispatcher, similar to paste.urlmap (but probably a bit simpler): http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py The motivation there is to give people the basic tools to simple multi-application hosting, and in the process implicitly suggest how other dispatching can be done. I think this is something that doesn't occur to people naturally, and they see it as a flaw in the server (that the server doesn't have a dispatching feature), and the result is either frustration, griping, or bad kludges. By including a basic implementation of WSGI-based dispatching the standard library can lead people in the right direction for more sophisticated dispatching. And prefix dispatching is also quite useful on its own, it's not just educational. 
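To make the intended usage concrete, here's roughly what composing a site out of mounted applications would look like from the user's side. This is only a sketch: PrefixDispatcher below is a hypothetical stand-in for whatever dispatcher would actually be included (pared down, with the /-handling checks omitted), and it assumes wsgiref's simple_server for serving.

    def blog_app(environ, start_response):
        start_response('200 OK', [('Content-type', 'text/plain')])
        return ['blog, mounted at %s\n' % environ.get('SCRIPT_NAME', '')]

    def cms_app(environ, start_response):
        start_response('200 OK', [('Content-type', 'text/plain')])
        return ['cms, mounted at %s\n' % environ.get('SCRIPT_NAME', '')]

    class PrefixDispatcher(object):
        # Longest matching prefix wins; SCRIPT_NAME/PATH_INFO are
        # adjusted as PEP 333 requires before handing off.
        def __init__(self, apps):
            self.apps = apps
        def __call__(self, environ, start_response):
            path = environ.get('PATH_INFO', '')
            for prefix in sorted(self.apps, key=len, reverse=True):
                if path.startswith(prefix):
                    environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + prefix
                    environ['PATH_INFO'] = path[len(prefix):]
                    return self.apps[prefix](environ, start_response)
            start_response('404 Not Found', [('Content-type', 'text/plain')])
            return ['Not Found\n']

    site = PrefixDispatcher({'/blog': blog_app, '/cms': cms_app})

    if __name__ == '__main__':
        from wsgiref.simple_server import make_server
        make_server('', 8000, site).serve_forever()

The point is that the server, the dispatcher and the applications only know about each other through the WSGI calling convention, so any of the pieces can be swapped out independently.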
> Last time this was brought up there were feature requests and > discussion on how "industrial strength" the webserver in wsgiref ought > to be but nothing like the flamefest that setuptools caused (no > comments please). No one disagreed with the basic premise though, just some questions about the particulars of the server. I think there were at least a couple small suggestions for the wsgiref server; in particular maybe a slight refactoring to make it easier to use with https. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From g.brandl at gmx.net Fri Apr 28 21:36:49 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 28 Apr 2006 21:36:49 +0200 Subject: [Python-Dev] Float formatting and # In-Reply-To: <e2tqic$v6e$1@sea.gmane.org> References: <e2tqic$v6e$1@sea.gmane.org> Message-ID: <e2tqsh$v6e$2@sea.gmane.org> Georg Brandl wrote: > Is there a reason why the "alternate format" isn't documented for float > conversions in http://docs.python.org/lib/typesseq-strings.html ? > > '%#8.f' % 1.0 keeps the decimal point while '%8.f' % 1.0 drops it. > > Also, for %g the alternate form keeps trailing zeroes. > > While at it, I noticed a difference between %f and %g: > > '%.3f' % 1.123 is "1.123" while > '%.3g' % 1.123 is "1.12". > > Is that intentional? Reviewing the printf man page, this is okay since for %f, the precision is the number of digits after the decimal point while for %g, it is the number of significant digits. Still, that should be documented in the Python manual. Georg From guido at python.org Fri Apr 28 21:47:44 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 28 Apr 2006 12:47:44 -0700 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <44526DD3.4010807@colorstudy.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> Message-ID: <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> On 4/28/06, Ian Bicking <ianb at colorstudy.com> wrote: > I'd like to include paste.lint with that as well (as wsgiref.lint or > whatever). Since the last discussion I enumerated in the docstring all > the checks it does. There's still some outstanding issues, mostly where > I'm not sure if it is too restrictive (marked with @@ in the source). > It's at: > > http://svn.pythonpaste.org/Paste/trunk/paste/lint.py That looks fine to me; I'm not in this business full-time so I'll await others' responses. > I think another useful addition would be some prefix-based dispatcher, > similar to paste.urlmap (but probably a bit simpler): > http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py IMO this is getting into framework design. Perhaps something like this could be added in 2.6? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ianb at colorstudy.com Fri Apr 28 22:03:18 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 15:03:18 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> Message-ID: <44527506.2070407@colorstudy.com> Guido van Rossum wrote: >> I think another useful addition would be some prefix-based dispatcher, >> similar to paste.urlmap (but probably a bit simpler): >> http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py > > > IMO this is getting into framework design. 
Perhaps something like this > could be added in 2.6? I don't think it's frameworky. It could be used to build a very primitive framework, but even then it's not a particularly useful starting point. In Paste this would generally be used below any framework (or above I guess, depending on which side is "up"). You'd pass /blog to a blog app, /cms to a cms app, etc. WSGI already is very specific about what needs to be done when doing this dispatching (adjusting SCRIPT_NAME and PATH_INFO), and that's all that the dispatching needs to do. The applications themselves are written in some framework with internal notions of URL dispatching, but this doesn't infringe upon those. (Unless the framework doesn't respect SCRIPT_NAME and PATH_INFO; but that's their problem, as the dispatcher is just using what's already allowed for in the WSGI spec.) It also doesn't overlap with frameworks, as prefix-based dispatching isn't really that useful in a framework. The basic implementation is: class PrefixDispatch(object): def __init__(self): self.applications = {} def add_application(self, prefix, app): self.applications[prefix] = app def __call__(self, environ, start_response): apps = sorted(self.applications.items(), key=lambda x: -len(x[0])) path_info = environ.get('PATH_INFO', '') for prefix, app in apps: if not path_info.startswith(prefix): continue environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '')+prefix environ['PATH_INFO'] = environ.get('PATH_INFO', '')[len(prefix):] return app(environ, start_response) start_response('404 Not Found', [('Content-type', 'text/html')]) return ['<html><body><h1>Not Found</h1></body></html>'] There's a bunch of checks that should take place (most related to /'s), and the not found response should be configurable (probably as an application that can be passed in as an argument). But that's most of what it should do. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From pje at telecommunity.com Fri Apr 28 22:16:43 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 16:16:43 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <44526DD3.4010807@colorstudy.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> Message-ID: <5.1.1.6.0.20060428160624.04234328@mail.telecommunity.com> At 02:32 PM 4/28/2006 -0500, Ian Bicking wrote: >Guido van Rossum wrote: > > PEP 333 specifies WSGI, the Python Web Server Gateway Interface v1.0; > > it's written by Phillip Eby who put a lot of effort in it to make it > > acceptable to very diverse web frameworks. The PEP has been well > > received by web framework makers and users. > > > > As a supplement to the PEP, Phillip has written a reference > > implementation, "wsgiref". I don't know how many people have used > > wsgiref; I'm using it myself for an intranet webserver and am very > > happy with it. (I'm asking Phillip to post the URL for the current > > source; searching for it produces multiple repositories.) > > > > I believe that it would be a good idea to add wsgiref to the stdlib, > > after some minor cleanups such as removing the extra blank lines that > > Phillip puts in his code. Having standard library support will remove > > the last reason web framework developers might have to resist adopting > > WSGI, and the resulting standardization will help web framework users. > >I'd like to include paste.lint with that as well (as wsgiref.lint or >whatever). 
Since the last discussion I enumerated in the docstring all >the checks it does. There's still some outstanding issues, mostly where >I'm not sure if it is too restrictive (marked with @@ in the source). >It's at: > > http://svn.pythonpaste.org/Paste/trunk/paste/lint.py +1, but lose the unused 'global_conf' parameter and 'make_middleware' functions. >I think another useful addition would be some prefix-based dispatcher, >similar to paste.urlmap (but probably a bit simpler): >http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py I'd rather see something a *lot* simpler - something that just takes a dictionary mapping names to application objects, and parses path segments using wsgiref functions. That way, its usefulness as an example wouldn't be obscured by having too many features. Such a thing would still be quite useful, and would illustrate how to do more sophisticated dispatching. Something more or less like: from wsgiref.util import shift_path_info # usage: # main_app = AppMap(foo=part_one, bar=part_two, ...) class AppMap: def __init__(self, **apps): self.apps = apps def __call__(self, environ, start_response): name = shift_path_info(environ) if name is None: return self.default(environ, start_response) elif name in self.apps: return self.apps[name](environ,start_response) return self.not_found(environ, start_response) def default(self, environ, start_response): self.not_found(environ, start_response) def not_found(self, environ, start_response): # code to generate a 404 response here This should be short enough to highlight the concept, while still providing a few hooks for subclassing. From guido at python.org Fri Apr 28 22:19:26 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 28 Apr 2006 13:19:26 -0700 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <44527506.2070407@colorstudy.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> Message-ID: <ca471dc20604281319n544a4d9o13febc927266f2fd@mail.gmail.com> It still looks like an application of WSGI, not part of a reference implementation. Multiple apps looks like an advanced topic to me; more something that the infrastructure (Apache server or whatever) ought to take care of. I don't expect you to agree with me. But I don't expect you to be able to convince me either. Maybe you can convince Phillip; I'm going to try to sit on my hands now. --Guido On 4/28/06, Ian Bicking <ianb at colorstudy.com> wrote: > Guido van Rossum wrote: > >> I think another useful addition would be some prefix-based dispatcher, > >> similar to paste.urlmap (but probably a bit simpler): > >> http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py > > > > > > IMO this is getting into framework design. Perhaps something like this > > could be added in 2.6? > > I don't think it's frameworky. It could be used to build a very > primitive framework, but even then it's not a particularly useful > starting point. > > In Paste this would generally be used below any framework (or above I > guess, depending on which side is "up"). You'd pass /blog to a blog > app, /cms to a cms app, etc. WSGI already is very specific about what > needs to be done when doing this dispatching (adjusting SCRIPT_NAME and > PATH_INFO), and that's all that the dispatching needs to do. 
> > The applications themselves are written in some framework with internal > notions of URL dispatching, but this doesn't infringe upon those. > (Unless the framework doesn't respect SCRIPT_NAME and PATH_INFO; but > that's their problem, as the dispatcher is just using what's already > allowed for in the WSGI spec.) It also doesn't overlap with frameworks, > as prefix-based dispatching isn't really that useful in a framework. > > The basic implementation is: > > class PrefixDispatch(object): > def __init__(self): > self.applications = {} > def add_application(self, prefix, app): > self.applications[prefix] = app > def __call__(self, environ, start_response): > apps = sorted(self.applications.items(), > key=lambda x: -len(x[0])) > path_info = environ.get('PATH_INFO', '') > for prefix, app in apps: > if not path_info.startswith(prefix): > continue > environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '')+prefix > environ['PATH_INFO'] = environ.get('PATH_INFO', > '')[len(prefix):] > return app(environ, start_response) > start_response('404 Not Found', [('Content-type', 'text/html')]) > return ['<html><body><h1>Not Found</h1></body></html>'] > > > There's a bunch of checks that should take place (most related to /'s), > and the not found response should be configurable (probably as an > application that can be passed in as an argument). But that's most of > what it should do. > > > -- > Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ianb at colorstudy.com Fri Apr 28 22:31:08 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 15:31:08 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <ca471dc20604281319n544a4d9o13febc927266f2fd@mail.gmail.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <ca471dc20604281319n544a4d9o13febc927266f2fd@mail.gmail.com> Message-ID: <44527B8C.8030809@colorstudy.com> Guido van Rossum wrote: > It still looks like an application of WSGI, not part of a reference > implementation. Multiple apps looks like an advanced topic to me; more > something that the infrastructure (Apache server or whatever) ought to > take care of. I don't understand the distinction between "application of WSGI" and "reference implementation". The reference implementation *is* an application of WSGI... using the reference implementation doesn't make something more or less WSGI. Also, wsgiref *is* infrastructure. When using the HTTP server, there is no other infrastructure, there is nothing else that will do this prefix dispatching for you. Apache and other web servers also provides this functionality, and with the right configuration it will provide the identical environment as a prefix dispatcher. To me that's a positive. I've seen cases where other people have done the same prefix dispatching in Python *without* matching the interface of anybody else's code or environment; that's the sort of thing a reference implementation is meant to keep people from doing. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From pje at telecommunity.com Fri Apr 28 22:41:25 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Fri, 28 Apr 2006 16:41:25 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <ca471dc20604281319n544a4d9o13febc927266f2fd@mail.gmail.com > References: <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> Message-ID: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> At 01:19 PM 4/28/2006 -0700, Guido van Rossum wrote: >It still looks like an application of WSGI, not part of a reference >implementation. Multiple apps looks like an advanced topic to me; more >something that the infrastructure (Apache server or whatever) ought to >take care of. I'm fine with a super-simple implementation that emphasizes the concept, not feature-richness. A simple dict-based implementation showcases both the wsgiref function for path shifting, and the idea of composing an application out of mini-applications. (The point is to demonstrate how people can compose WSGI applications *without* needing a framework.) But I don't think that this demo should be a prefix mapper; people doing more sophisticated routing can use Paste or Routes. If it's small enough, I'd say to add this mapper to wsgiref.util, or if Guido is strongly set against it being in the code, we should at least put it in the documentation as an example of how to use 'shift_path_info()' in wsgiref.util. From guido at python.org Fri Apr 28 22:40:01 2006 From: guido at python.org (Guido van Rossum) Date: Fri, 28 Apr 2006 13:40:01 -0700 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> Message-ID: <ca471dc20604281340y7f9dbee7i1adf3e6c74922443@mail.gmail.com> On 4/28/06, Phillip J. Eby <pje at telecommunity.com> wrote: > If it's small enough, I'd say to add this mapper to wsgiref.util, or if > Guido is strongly set against it being in the code, we should at least put > it in the documentation as an example of how to use 'shift_path_info()' in > wsgiref.util. I'm for doing what you think is best. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From collinw at gmail.com Fri Apr 28 22:47:38 2006 From: collinw at gmail.com (Collin Winter) Date: Fri, 28 Apr 2006 16:47:38 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> Message-ID: <43aa6ff70604281347l6c58366egaab182a91aa12420@mail.gmail.com> On 4/28/06, Phillip J. Eby <pje at telecommunity.com> wrote: > At 01:19 PM 4/28/2006 -0700, Guido van Rossum wrote: > >It still looks like an application of WSGI, not part of a reference > >implementation. Multiple apps looks like an advanced topic to me; more > >something that the infrastructure (Apache server or whatever) ought to > >take care of. > > I'm fine with a super-simple implementation that emphasizes the concept, > not feature-richness. 
A simple dict-based implementation showcases both > the wsgiref function for path shifting, and the idea of composing an > application out of mini-applications. (The point is to demonstrate how > people can compose WSGI applications *without* needing a framework.) > > But I don't think that this demo should be a prefix mapper; people doing > more sophisticated routing can use Paste or Routes. > > If it's small enough, I'd say to add this mapper to wsgiref.util, or if > Guido is strongly set against it being in the code, we should at least put > it in the documentation as an example of how to use 'shift_path_info()' in > wsgiref.util. Perhaps this could go in Demo/wsgiref/? Collin Winter From ianb at colorstudy.com Fri Apr 28 22:50:59 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 15:50:59 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428160624.04234328@mail.telecommunity.com> References: <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <5.1.1.6.0.20060428160624.04234328@mail.telecommunity.com> Message-ID: <44528033.5080506@colorstudy.com> Phillip J. Eby wrote: >> I'd like to include paste.lint with that as well (as wsgiref.lint or >> whatever). Since the last discussion I enumerated in the docstring all >> the checks it does. There's still some outstanding issues, mostly where >> I'm not sure if it is too restrictive (marked with @@ in the source). >> It's at: >> >> http://svn.pythonpaste.org/Paste/trunk/paste/lint.py > > > +1, but lose the unused 'global_conf' parameter and 'make_middleware' > functions. Yeah, those are just related to Paste Deploy and wouldn't go in. >> I think another useful addition would be some prefix-based dispatcher, >> similar to paste.urlmap (but probably a bit simpler): >> http://svn.pythonpaste.org/Paste/trunk/paste/urlmap.py > > > I'd rather see something a *lot* simpler - something that just takes a > dictionary mapping names to application objects, and parses path > segments using wsgiref functions. That way, its usefulness as an > example wouldn't be obscured by having too many features. Such a thing > would still be quite useful, and would illustrate how to do more > sophisticated dispatching. Something more or less like: > > from wsgiref.util import shift_path_info > > # usage: > # main_app = AppMap(foo=part_one, bar=part_two, ...) > > class AppMap: > def __init__(self, **apps): > self.apps = apps > > def __call__(self, environ, start_response): > name = shift_path_info(environ) > if name is None: > return self.default(environ, start_response) > elif name in self.apps: > return self.apps[name](environ,start_response) > return self.not_found(environ, start_response) > > def default(self, environ, start_response): > self.not_found(environ, start_response) > > def not_found(self, environ, start_response): > # code to generate a 404 response here > > This should be short enough to highlight the concept, while still > providing a few hooks for subclassing. That's mostly what I was thinking, though using a full prefix (instead of just a single path segment), and the default is the application at '', like in my other email. paste.urlmap has several features I wouldn't propose (like domain and port matching, more Paste Deploy stuff, and a proxy object that I should probably just delete); I probably should have been more specific. URLMap's dictionary interface isn't that useful either. 
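For anyone who wants to see the difference spelled out, here is what the two styles do to a bare environ dict. Just a sketch: it assumes shift_path_info's behavior of moving exactly one leading segment from PATH_INFO onto SCRIPT_NAME, and hand-codes the prefix case the way URLMap-style matching would do it.

    from wsgiref.util import shift_path_info

    # Segment-at-a-time (what AppMap does): one name per call.
    environ = {'SCRIPT_NAME': '', 'PATH_INFO': '/blog/2006/04/28'}
    name = shift_path_info(environ)
    # name == 'blog'
    # environ['SCRIPT_NAME'] == '/blog', environ['PATH_INFO'] == '/2006/04/28'

    # Full-prefix matching: the whole mount point is consumed in one step,
    # so an app can be mounted at a multi-segment path like '/blog/archive'.
    prefix = '/blog/archive'
    environ = {'SCRIPT_NAME': '', 'PATH_INFO': '/blog/archive/2006'}
    if environ['PATH_INFO'].startswith(prefix):
        environ['SCRIPT_NAME'] = environ['SCRIPT_NAME'] + prefix
        environ['PATH_INFO'] = environ['PATH_INFO'][len(prefix):]
    # environ['SCRIPT_NAME'] == '/blog/archive', environ['PATH_INFO'] == '/2006'

Either way the application downstream sees the same SCRIPT_NAME/PATH_INFO contract; the argument is only about how the mount table itself is expressed.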
Another feature that the example in my other email doesn't have is / handling, specifically redirecting /something-that-matches to /something-that-matches/ (something Apache's Alias doesn't do but should). Host and port matching is pretty easy to do at the same time, and in my experience can be useful to do at the same time, but I don't really care if that feature goes in. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From ianb at colorstudy.com Fri Apr 28 23:04:28 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 16:04:28 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> References: <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> Message-ID: <4452835C.7080902@colorstudy.com> Phillip J. Eby wrote: > At 01:19 PM 4/28/2006 -0700, Guido van Rossum wrote: > >> It still looks like an application of WSGI, not part of a reference >> implementation. Multiple apps looks like an advanced topic to me; more >> something that the infrastructure (Apache server or whatever) ought to >> take care of. > > > I'm fine with a super-simple implementation that emphasizes the concept, > not feature-richness. A simple dict-based implementation showcases both > the wsgiref function for path shifting, and the idea of composing an > application out of mini-applications. (The point is to demonstrate how > people can compose WSGI applications *without* needing a framework.) > > But I don't think that this demo should be a prefix mapper; people doing > more sophisticated routing can use Paste or Routes. I don't see why not to use prefix matching. It is more consistent with the handling of the default application ('', instead of a method that needs to be overridden), and more general, and the algorithm is only barely more complex and not what I'd call sophisticated. The default application handling in particular means that AppMap isn't really useful without subclassing or assigning to .default. Prefix matching wouldn't show off anything else in wsgiref, because there's nothing else to use; paste.urlmap doesn't use any other part of Paste either (except one unimportant exception) because there's just no need. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From pje at telecommunity.com Fri Apr 28 23:52:11 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 17:52:11 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <4452835C.7080902@colorstudy.com> References: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> At 04:04 PM 4/28/2006 -0500, Ian Bicking wrote: >I don't see why not to use prefix matching. It is more consistent with >the handling of the default application ('', instead of a method that >needs to be overridden), and more general, and the algorithm is only >barely more complex and not what I'd call sophisticated. 
The default >application handling in particular means that AppMap isn't really useful >without subclassing or assigning to .default. > >Prefix matching wouldn't show off anything else in wsgiref, Right, that would be taking away one of the main reasons to include it. To make the real dispatcher, I'd flesh out what I wrote a little bit, to handle the "default" method in a more meaningful way, including the redirect. All that should only add a few lines, however. From martin at v.loewis.de Sat Apr 29 00:19:20 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 29 Apr 2006 00:19:20 +0200 Subject: [Python-Dev] binary trees. Review obmalloc.c In-Reply-To: <44508866.60503@renet.ru> References: <20060424114024.66D0.JCARLSON@uci.edu> <444F1E36.30302@renet.ru> <20060426094148.66EE.JCARLSON@uci.edu> <44508866.60503@renet.ru> Message-ID: <445294E8.1080401@v.loewis.de> Vladimir 'Yu' Stepanov wrote: > * To adapt allocation of blocks of memory with other alignment. Now > alignment is rigidly set on 8 bytes. As a variant, it is possible to > use alignment on 4 bytes. And this value can be set at start of the > interpreter through arguments/variable environments/etc. At testing > with alignment on 4 or 8 bytes difference in speed of work was not > appreciable. That depends on the hardware you use, of course. Some architectures absolutely cannot stand mis-aligned accesses, and programs will just crash if they try to perform such accesses. So Python should err on the safe side, and only use a smaller alignment when it is known not to hurt. OTOH, I don't see the *advantage* in reducing the alignment. > * To expand the maximal size which can be allocated by means of the > given module. Now the maximal size is 256 bytes. Right. This is somewhat deliberate, though; the expectation is that fragmentation will increase dramatically if even larger size classes are supported. > * At the size of the allocated memory close to maximal, the > allocation of blocks becomes inefficient. For example, for the > sizesof blocks 248 and 256 (blocksize), values of quantity of > elements (PAGESIZE/blocksize) it is equal 16. I.e. it would be > possible to use for the sizes of the block 248 same page, as for the > size of the block 256. Good idea; that shouldn't be too difficult to implement: for sizes above 215, pools need to be spaced apart only at 16 bytes. Regards, Martin From ianb at colorstudy.com Sat Apr 29 00:47:55 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 17:47:55 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> References: <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> Message-ID: <44529B9B.6050805@colorstudy.com> Phillip J. Eby wrote: > At 04:04 PM 4/28/2006 -0500, Ian Bicking wrote: > >> I don't see why not to use prefix matching. It is more consistent with >> the handling of the default application ('', instead of a method that >> needs to be overridden), and more general, and the algorithm is only >> barely more complex and not what I'd call sophisticated. 
The default >> application handling in particular means that AppMap isn't really useful >> without subclassing or assigning to .default. >> >> Prefix matching wouldn't show off anything else in wsgiref, > > > Right, that would be taking away one of the main reasons to include it. That's putting the cart in front of the horse, using a matching algorithm because that's what shift_path_info does, not because it's the most natural or useful way to do the match. I suggest prefix matching not because it shows how the current functions in wsgiref work, but because it shows a pattern of dispatching WSGI applications on a level that is typically (but for WSGI, unnecessarily) built into the server. The educational value is in the pattern, not in the implementation. If you want to show how the functions in wsgiref work, then that belongs in documentation. Which would be good too, people like examples, and the more examples in the wsgiref docs the better. People are much less likely to see examples in the code itself. > To make the real dispatcher, I'd flesh out what I wrote a little bit, to > handle the "default" method in a more meaningful way, including the > redirect. All that should only add a few lines, however. It will still be only a couple lines less than prefix matching. Another issue with your implementation is the use of keyword arguments for the path mappings, even though path mappings have no association with keyword arguments or valid Python identifiers. -- Ian Bicking / ianb at colorstudy.com / http://blog.ianbicking.org From pje at telecommunity.com Sat Apr 29 01:48:58 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Fri, 28 Apr 2006 19:48:58 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <44529B9B.6050805@colorstudy.com> References: <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060428193927.01e30ec8@mail.telecommunity.com> At 05:47 PM 4/28/2006 -0500, Ian Bicking wrote: >It will still be only a couple lines less than prefix matching. That's beside the point. Prefix matching is inherently a more complex concept, and more likely to be confusing, without introducing much in the way of new features. If I want to dispatch /foo/bar, why not just use: AppMap(foo=AppMap(bar=whatever)) So, I don't see prefix matching as introducing anything that's worth the extra complexity. If somebody needs a high-performance prefix matcher, they can get yours from Paste. If I was going to include a more sophisticated dispatcher, I'd add an ordered regular expression dispatcher, since that would support use cases that the simple or prefix dispatchers would not, but it would also support the prefix cases without nesting. >Another issue with your implementation is the use of keyword arguments for >the path mappings, even though path mappings have no association with >keyword arguments or valid Python identifiers. That was for brevity; it should probably also take a mapping argument. From pje at telecommunity.com Sat Apr 29 01:57:33 2006 From: pje at telecommunity.com (Phillip J. 
Eby) Date: Fri, 28 Apr 2006 19:57:33 -0400 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <20060428233435.GC22737@caltech.edu> References: <5.1.1.6.0.20060428142518.01e2b180@mail.telecommunity.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <5.1.1.6.0.20060428142518.01e2b180@mail.telecommunity.com> Message-ID: <5.1.1.6.0.20060428195148.042cff48@mail.telecommunity.com> At 04:34 PM 4/28/2006 -0700, Titus Brown wrote: >Hi, Phillip, > >I'm getting this error when I run the tests, with both Python 2.3 and >2.4: > >====================================================================== >FAIL: testHeaderFormats (wsgiref.tests.test_handlers.HandlerTests) >---------------------------------------------------------------------- >Traceback (most recent call last): > File "/disk/u/t/dev/misc/wsgiref/src/wsgiref/tests/test_handlers.py", > line 205, in testHeaderFormats > (stdpat%(version,sw), h.stdout.getvalue()) >AssertionError: ('HTTP/1.0 200 OK\\r\\nDate: \\w{3} \\w{3} [ 0123]\\d >\\d\\d:\\d\\d:\\d\\d \\d{4}\\r\\nServer: FooBar/1.0\r\nContent-Length: >0\\r\\n\\r\\n', 'HTTP/1.0 200 OK\r\nDate: Fri, 28 Apr 2006 23:28:11 >GMT\r\nServer: FooBar/1.0\r\nContent-Length: 0\r\n\r\n') > >---------------------------------------------------------------------- This is probably due to Guido's patch to make the Date: header more RFC compliant. I'll take a look at it this weekend. >On a separate note, what are you actually proposing to include? It'd be >good to remove the TODO list, for example, unless those are things To Be >Done before adding it into Python 2.5. Well, it looks like the "validate" bit will be going in, and we're talking about what to put in "router", so that'll take care of half the list right there. :) The other two items can wait, unless somebody wants to contribute them. >Will it be added as 'wsgi' or 'wsgiref? I assumed it would be wsgiref, which would allow compatibility with existing code that uses it. >I'd also personally suggest putting anything intended for common use >directly under the top level, i.e. > > wsgiref.WSGIServer > >vs > > wsgiref.simple_server.WSGIServer I'm against this, because it would force the handlers and simple_server modules to be imported, even for programs not using them. >And, finally, is there any documentation? Only the docstrings. Contributions are more than welcome. From ianb at colorstudy.com Sat Apr 29 02:48:42 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Fri, 28 Apr 2006 19:48:42 -0500 Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: <5.1.1.6.0.20060428193927.01e30ec8@mail.telecommunity.com> References: <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <44527506.2070407@colorstudy.com> <ca471dc20604281103o3cc9ea5fnd7e03dd724f7b0b0@mail.gmail.com> <44526DD3.4010807@colorstudy.com> <ca471dc20604281247k23a586aayd1b7f1c9241e782a@mail.gmail.com> <44527506.2070407@colorstudy.com> <5.1.1.6.0.20060428163524.037e5fb8@mail.telecommunity.com> <5.1.1.6.0.20060428174915.01fab7f8@mail.telecommunity.com> <5.1.1.6.0.20060428193927.01e30ec8@mail.telecommunity.com> Message-ID: <4452B7EA.8060707@colorstudy.com> Phillip J. Eby wrote: > At 05:47 PM 4/28/2006 -0500, Ian Bicking wrote: >> It will still be only a couple lines less than prefix matching. > > That's beside the point. Prefix matching is inherently a more complex > concept, and more likely to be confusing, without introducing much in > the way of new features. 
I just don't understand this. It's not more complex. Prefix matching works like: get the prefixes order them longest first check each one against PATH_INFO use the matched app or call the not found handler Name matching works like: get the mapping get the next chunk get the app associated with that chunk use that app or call the not found handler One is not more complex than the other. > If I want to dispatch /foo/bar, why not just use: > > AppMap(foo=AppMap(bar=whatever)) You create an intermediate application with no particular purpose. You get two default handlers, two not found handlers, and you create an object tree that is distracting because it is artificial. Paths are strings, not trees or objects. When you confuse strings for objects you are moving into framework territory. > If I was going to include a more sophisticated dispatcher, I'd add an > ordered regular expression dispatcher, since that would support use > cases that the simple or prefix dispatchers would not, but it would also > support the prefix cases without nesting. That is significantly more complex, because SCRIPT_NAME/PATH_INFO cannot be used to express what the regular expression matched. It also overlaps with frameworks. WSGI doesn't offer any standard mechanism to do that sort of thing. It could (e.g., a wsgi.path_vars key), but it doesn't. Or you do something that looks like mod_rewrite, but no one wants that. Prefix based routing represents a real cusp -- more than that, and you have to invent conventions not already present in the WSGI spec, and you overlap with frameworks. Less than that... well, you can't do a whole lot less than that. -- Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org From nnorwitz at gmail.com Sat Apr 29 05:05:26 2006 From: nnorwitz at gmail.com (Neal Norwitz) Date: Fri, 28 Apr 2006 20:05:26 -0700 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <20060428122045.GB8336@localhost.localdomain> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> Message-ID: <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> Here's what's left for 2.5 after the most recent go around. There's no owner for these items. If no one takes them, they won't get done and I will move them to deferred within a week: * Add @decorator decorator to functional, rename to functools? What's the benefit of @decorator? Who made functional? It's new in 2.5, right? If so, move it or it will be functional for all 2.x. * Remove the fpectl module? Does anyone use this? It can probably be removed, but someone needs to do the work. * new icons is lost and needs a shepherd to make python look spiffy * what's happening with these modules: timing, cl, sv? n On 4/28/06, A.M. Kuchling <amk at amk.ca> wrote: > On Thu, Apr 27, 2006 at 10:58:49PM -0700, Neal Norwitz wrote: > > If you are addressed on this message, it means you have open issues > > that need to be resolved for 2.5. Some of these issues are > > documentation, others are code issues. This information comes from > > PEP 356. > > There are also these items in the 'possible features' section: > ================ > Modules under consideration for inclusion: > > - bdist_deb in distutils package > (Owner: ???) 
> http://mail.python.org/pipermail/python-dev/2006-February/060926.html > > - wsgiref to the standard library > (Owner: Phillip Eby) > > - pure python pgen module > (Owner: Guido) > > - Support for building "fat" Mac binaries (Intel and PPC) > (Owner: Ronald Oussoren) > ================ > > wsgiref is the most important one, I think. If there's anything I can > do to help, please let me know. > > --amk > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com > From kbk at shore.net Sat Apr 29 05:08:27 2006 From: kbk at shore.net (Kurt B. Kaiser) Date: Fri, 28 Apr 2006 23:08:27 -0400 (EDT) Subject: [Python-Dev] Weekly Python Patch/Bug Summary Message-ID: <200604290308.k3T38R9n004599@bayview.thirdcreek.com> Patch / Bug Summary ___________________ Patches : 378 open ( +7) / 3199 closed ( +4) / 3577 total (+11) Bugs : 901 open ( -7) / 5792 closed (+25) / 6693 total (+18) RFE : 214 open ( +3) / 214 closed ( +2) / 428 total ( +5) New / Reopened Patches ______________________ Allow PyArg_ParseTupleAndKeywords to unpack tuples (2006-04-22) http://python.org/sf/1474454 reopened by loewis Allow PyArg_ParseTupleAndKeywords to unpack tuples (2006-04-21) http://python.org/sf/1474454 opened by Roger Upole detect %zd format for PY_FORMAT_SIZE_T (2006-04-22) http://python.org/sf/1474907 opened by Brett Cannon New doctest option: +SKIP (2006-04-23) CLOSED http://python.org/sf/1475231 opened by Edward Loper patch fixing #1448060 (gettext.py bug) (2006-04-24) http://python.org/sf/1475523 opened by Thenault Sylvain IndentationError for unexpected indent (2006-04-24) http://python.org/sf/1475845 opened by Roger Miller Add help reference on Mac (2006-04-25) http://python.org/sf/1476578 opened by Bruce Sherwood __init__.py'less package import warnings (2006-04-27) http://python.org/sf/1477281 opened by Thomas Wouters Allow os.listdir to accept file names longer than MAX_PATH (2006-04-26) http://python.org/sf/1477350 opened by Roger Upole Fix doctest nit. (2006-04-28) http://python.org/sf/1478292 opened by Thomas Heller Picky floats (2006-04-28) http://python.org/sf/1478364 opened by Skip Montanaro Patches Closed ______________ Allow PyArg_ParseTupleAndKeywords to unpack tuples (2006-04-22) http://python.org/sf/1474454 closed by loewis Weak linking support for OSX (2006-04-17) http://python.org/sf/1471925 closed by ronaldoussoren test for broken poll at runtime (2006-04-17) http://python.org/sf/1471761 closed by ronaldoussoren New doctest option: +SKIP (2006-04-23) http://python.org/sf/1475231 closed by tim_one start testing strings > 2GB (2006-04-17) http://python.org/sf/1471578 closed by twouters New / Reopened Bugs ___________________ non-keyword argument following keyword (2006-04-23) http://python.org/sf/1474677 opened by George Yoshida pickling files works with protocol=2. 
(2006-04-22) http://python.org/sf/1474680 opened by Kirill Simonov Document os.path.join oddity on Windows (2006-04-23) CLOSED http://python.org/sf/1475009 opened by Miki Tebeka example code in what's new/sqlite3 docs (2006-04-23) CLOSED http://python.org/sf/1475080 opened by James Pryor Tkinter hangs in test_tcl (2006-04-23) http://python.org/sf/1475162 opened by Thomas Wouters Poorly worded description for socket.makefile() (2006-04-24) http://python.org/sf/1475554 opened by Roy Smith replacing obj.__dict__ with a subclass of dict (2006-04-24) http://python.org/sf/1475692 opened by ganges master SystemError in socket sendto (2006-04-25) CLOSED http://python.org/sf/1476111 opened by CyDefect Documentation date stuck on "5th April 2006" (2006-04-25) CLOSED http://python.org/sf/1476216 opened by Mat Martineau StringIO should implement __str__() (2006-04-25) CLOSED http://python.org/sf/1476356 opened by Dom Lachowicz Finish PEP 343 terminology cleanup (2006-04-26) http://python.org/sf/1476845 opened by Nick Coghlan Missing import line in example (2006-04-26) CLOSED http://python.org/sf/1477102 opened by Bruce Eckel Incorrect code in example (2006-04-26) CLOSED http://python.org/sf/1477140 opened by Bruce Eckel test_bsddb skipped -- Failed to load /home/shashi/Python-2.4 (2006-04-27) http://python.org/sf/1477450 opened by shashi test_ctypes: undefined symbol: XGetExtensionVersion (2006-04-28) http://python.org/sf/1478253 opened by balducci Invalid value returned by util.get_platform() on HP (2006-04-28) CLOSED http://python.org/sf/1478326 opened by S?bastien Sabl? test_capi crashed -- thread.error: can't allocate lock (2006-04-28) CLOSED http://python.org/sf/1478400 opened by shashi datetime.datetime.fromtimestamp ValueError. Rounding error (2006-04-28) CLOSED http://python.org/sf/1478429 opened by Erwin Bonsma size limit exceeded for read() from network drive (2006-04-28) http://python.org/sf/1478529 opened by Mark Sheppard Bugs Closed ___________ IDLE does not start 2.4.3 (2006-04-17) http://python.org/sf/1471806 closed by egrimes TempFile can hang on Windows (2006-04-20) http://python.org/sf/1473760 closed by tim_one import module with .dll extension (2006-04-18) http://python.org/sf/1472566 closed by gbrandl Document os.path.join oddity on Windows (2006-04-23) http://python.org/sf/1475009 closed by gbrandl example code in what's new/sqlite3 docs (2006-04-23) http://python.org/sf/1475080 closed by akuchling interactive: no cursors ctrl-a/e... 
in 2.5a1/linux/debian (2006-04-18) http://python.org/sf/1472173 closed by twouters doctest mishandles exceptions raised within generators (2005-10-25) http://python.org/sf/1337990 closed by tim_one SystemError in socket sendto (2006-04-25) http://python.org/sf/1476111 closed by twouters string parameter to ioctl not null terminated, includes fix (2006-02-17) http://python.org/sf/1433877 closed by twouters Documentation date stuck on "5th April 2006" (2006-04-25) http://python.org/sf/1476216 closed by akuchling can't send files via ftp on my MacOS X 10.3.9 (2006-02-23) http://python.org/sf/1437614 closed by tjreedy StringIO should implement __str__() (2006-04-25) http://python.org/sf/1476356 closed by rhettinger Missing import line in example (2006-04-26) http://python.org/sf/1477102 closed by akuchling Incorrect code in example (2006-04-26) http://python.org/sf/1477140 closed by akuchling Math mode not well handled in \documentclass{howto} (2005-02-17) http://python.org/sf/1124692 closed by fdrake Windows non-MS compiler doc updates (2003-11-10) http://python.org/sf/839709 closed by fdrake "print statement" in libref index broken (2006-01-21) http://python.org/sf/1411674 closed by fdrake AF_UNIX sockets do not handle Linux-specific addressing (2003-07-02) http://python.org/sf/764437 closed by gbrandl The -m option to python does not search zip files (2005-08-02) http://python.org/sf/1250389 closed by pmoore Invalid value returned by util.get_platform() on HP (2006-04-28) http://python.org/sf/1478326 closed by gbrandl test_capi crashed -- thread.error: can't allocate lock (2006-04-28) http://python.org/sf/1478400 closed by gbrandl datetime.datetime.fromtimestamp ValueError. Rounding error (2006-04-28) http://python.org/sf/1478429 closed by gbrandl shutil.copytree debug message problem (2006-04-19) http://python.org/sf/1472949 closed by gbrandl urllib2.Request constructor to urllib.quote the url given (2006-04-20) http://python.org/sf/1473560 closed by gbrandl New / Reopened RFE __________________ feature requests for logging lib (2006-04-22) http://python.org/sf/1474577 opened by blaize rhodes feature requests for logging lib (2006-04-22) CLOSED http://python.org/sf/1474609 opened by blaize rhodes compute/doc %z os-indep., time.asctime_tz / _TZ (2006-04-24) http://python.org/sf/1475397 opened by kxroberto Add SeaMonkey to webbrowser.py (2006-04-25) CLOSED http://python.org/sf/1476166 opened by Oleg Broytmann Drop py.ico and pyc.ico (2006-04-27) http://python.org/sf/1477968 opened by Martin v. L??wis RFE Closed __________ feature requests for logging lib (2006-04-22) http://python.org/sf/1474609 closed by gbrandl Add SeaMonkey to webbrowser.py (2006-04-25) http://python.org/sf/1476166 closed by gbrandl From tim.peters at gmail.com Sat Apr 29 06:17:02 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 29 Apr 2006 00:17:02 -0400 Subject: [Python-Dev] binary trees. Review obmalloc.c In-Reply-To: <445294E8.1080401@v.loewis.de> References: <20060424114024.66D0.JCARLSON@uci.edu> <444F1E36.30302@renet.ru> <20060426094148.66EE.JCARLSON@uci.edu> <44508866.60503@renet.ru> <445294E8.1080401@v.loewis.de> Message-ID: <1f7befae0604282117r61b3e1ewf451542aa4cb7ac0@mail.gmail.com> [Vladimir 'Yu' Stepanov] >> * To adapt allocation of blocks of memory with other alignment. Now >> alignment is rigidly set on 8 bytes. As a variant, it is possible to >> use alignment on 4 bytes. And this value can be set at start of the >> interpreter through arguments/variable environments/etc. 
At testing >> with alignment on 4 or 8 bytes difference in speed of work was not >> appreciable. [Martin v. L?wis] > That depends on the hardware you use, of course. Some architectures > absolutely cannot stand mis-aligned accesses, and programs will just > crash if they try to perform such accesses. Note that we _had_ to add a goofy "long double" to the PyGC_Head union because some Python platform required 8-byte alignment for some types ... see rev 25454. Any spelling of malloc() also needs to return memory aligned for any legitimate purpose, and 8-byte alignment is the strictest requirement we know of across current Python platforms. > So Python should err on the safe side, and only use a smaller alignment > when it is known not to hurt. > > OTOH, I don't see the *advantage* in reducing the alignment. It could cut wasted bytes. There is no per-object memory overhead in a release-build obmalloc: the allocatable part of a pool is entirely used for user-visible object memory, except when the alignment requirement ends up forcing unused (by both the user and by obmalloc) pad bytes. For example, Python ints consume 12 bytes on 32-bit boxes, but if space were allocated for them by obmalloc (it's not), obmalloc would have to use 16 bytes per int to preserve 8-byte alignment. OTOH, obmalloc (unlike PyGC_Head) has always used 8-byte alignment, because that's what worked best for Vladimir Marangozov during his extensive timing tests. It's not an isolated decision -- e.g., it's also influenced by, and influences, "the best" pool size, and (of course) cutting alignment to 4 would double the number of "size classes" obmalloc has to juggle. >> * To expand the maximal size which can be allocated by means of the >> given module. Now the maximal size is 256 bytes. > Right. This is somewhat deliberate, though; the expectation is that > fragmentation will increase dramatically if even larger size classes > are supported. It's entirely deliberate ;-) obmalloc is no way trying to be a general-purpose allocator. It was designed specifically for the common memory patterns in Python, aiming at large numbers of small objects of the same size. That's extremely common in Python, and allocations larger than 256 bytes are not. The maximum size was actually 64 bytes at first. After that, dicts grew an embedded 8-element table, which vastly increased the size of the dict struct. obmalloc's limit was boosted to 256 then, although it could have stopped at the size of a dict (in the rough ballpark of 150 bytes). There was no reason (then or now) to go beyond 256. >> * At the size of the allocated memory close to maximal, the >> allocation of blocks becomes inefficient. For example, for the >> sizesof blocks 248 and 256 (blocksize), values of quantity of >> elements (PAGESIZE/blocksize) it is equal 16. I.e. it would be >> possible to use for the sizes of the block 248 same page, as for the >> size of the block 256. > Good idea; that shouldn't be too difficult to implement: for sizes above > 215, pools need to be spaced apart only at 16 bytes. I'd rather drop the limit to 215 then <0.3 wink>. Allocations that large probably still aren't important to obmalloc, but it is important that finding a requested allocation's "size index" be as cheap as possible. Uniformity helps that. 
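To put numbers on the padding question: obmalloc hands out memory in size classes that are multiples of ALIGNMENT, so the waste for a given request is just rounding arithmetic. A quick sketch (illustrative only; the real constants and the size-class computation live in Objects/obmalloc.c):

    def size_class(nbytes, alignment):
        # Round a request up to the next multiple of the alignment; this is
        # the block size obmalloc would actually carve out of a pool.
        return (nbytes + alignment - 1) & ~(alignment - 1)

    # A 12-byte object (e.g. a Python int on a 32-bit box, if obmalloc
    # were allocating it):
    print size_class(12, 8)   # 16 -> 4 bytes (25%) of padding per object
    print size_class(12, 4)   # 12 -> no padding

    # The trade-off: halving the alignment doubles the number of size
    # classes obmalloc has to juggle for the same 256-byte ceiling.
    print 256 / 8             # 32 size classes at 8-byte alignment
    print 256 / 4             # 64 size classes at 4-byte alignment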
On modern boxes, it may be desirable to boost POOL_SIZE -- nobody has run timing experiments as comprehensive as Vladimir ran way back when, and there's no particular reason to believe that the same set of tradeoffs would look best on current architectures. From collinw at gmail.com Sat Apr 29 06:39:02 2006 From: collinw at gmail.com (Collin Winter) Date: Sat, 29 Apr 2006 00:39:02 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> Message-ID: <43aa6ff70604282139u4749f95eo18c6d1cafe729366@mail.gmail.com> On 4/28/06, Neal Norwitz <nnorwitz at gmail.com> wrote: > There's no owner for these items. If no one takes them, they won't > get done and I will move them to deferred within a week: > > * Add @decorator decorator to functional, rename to functools? > What's the benefit of @decorator? Who made functional? It's new > in 2.5, right? If so, move it or it will be functional for all 2.x. The PEP responsible for functional (PEP 309) was written by Peter Harris, with the partial class (the module's sole member for a while) coded by Hye-Shik Chang and Raymond Hettinger. Comments in the module list Hye-Shik Chang as the maintainer. I'd very much like to see functional renamed to functools, and I've cooked up the necessary changes to handle the move (SF #1478788). Collin Winter From tim.peters at gmail.com Sat Apr 29 06:47:52 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 29 Apr 2006 00:47:52 -0400 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <44526B8E.2070809@v.loewis.de> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <44522FCB.1060106@v.loewis.de> <e2tjr5$890$1@sea.gmane.org> <44526B8E.2070809@v.loewis.de> Message-ID: <1f7befae0604282147u5edadcf5oe152f1247668350b@mail.gmail.com> [Georg Brandl] >> As I posted here previously, I contacted the owner, and he said that he >> didn't care about specifying a license. I guess that means that we can >> pick one ;) [Martin v. L?wis] > Can you please ask whether he would be willing to fill out a contrib > form (http://www.python.org/psf/contrib/contrib-form/)? Without > some kind of explicit contribution, I hesitate to use it. Me too, but stronger than hesitation. Georg, if he doesn't sign a contrib form, we have no legal right to distribute his work, or to license Python's users to (re)distribute it either. That's just the way the world works. Or if the fellow just doesn't want to bother picking the "initial license" the contrib form requires, tell him to pick the Academic Free License, Version 2.1. Well, you should actually ask him to ask his lawyer to decide that with him, but since I'm not on hallucinogens at the moment, making the choice for him is the only thing that will work in the real world -- it just has to look like it was his choice ;-) From tim.peters at gmail.com Sat Apr 29 07:07:18 2006 From: tim.peters at gmail.com (Tim Peters) Date: Sat, 29 Apr 2006 01:07:18 -0400 Subject: [Python-Dev] Float formatting and # In-Reply-To: <e2tqsh$v6e$2@sea.gmane.org> References: <e2tqic$v6e$1@sea.gmane.org> <e2tqsh$v6e$2@sea.gmane.org> Message-ID: <1f7befae0604282207w1d74c1e4t3b1778dbd2bcef49@mail.gmail.com> [Georg Brandl] > ... 
> Reviewing the printf man page, this is okay since for %f, the precision is the > number of digits after the decimal point while for %g, it is the number of > significant digits. Still, that should be documented in the Python manual. Well, there are a lot of little details in C's formats. Filling in the gaps people happen to notice is one way to proceed; another is to point to external docs for C's rules, like http://www-ccs.ucsd.edu/c/lib_prin.html#Print%20Functions From janssen at parc.com Sat Apr 29 09:09:14 2006 From: janssen at parc.com (Bill Janssen) Date: Sat, 29 Apr 2006 00:09:14 PDT Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: Your message of "Fri, 28 Apr 2006 13:19:26 PDT." <ca471dc20604281319n544a4d9o13febc927266f2fd@mail.gmail.com> Message-ID: <06Apr29.000917pdt."58641"@synergy1.parc.xerox.com> > It still looks like an application of WSGI, not part of a reference > implementation. It seems to me that canonical exemplars are part of what a "reference" implementation should include. Otherwise it would be a "standard" implementation, which is considerably different. Bill From janssen at parc.com Sat Apr 29 09:13:08 2006 From: janssen at parc.com (Bill Janssen) Date: Sat, 29 Apr 2006 00:13:08 PDT Subject: [Python-Dev] [Web-SIG] Adding wsgiref to stdlib In-Reply-To: Your message of "Fri, 28 Apr 2006 13:47:38 PDT." <43aa6ff70604281347l6c58366egaab182a91aa12420@mail.gmail.com> Message-ID: <06Apr29.001312pdt."58641"@synergy1.parc.xerox.com> > Perhaps this could go in Demo/wsgiref/? Perhaps both Ian's and Phillip's examples could go into Demo/wsgiref/? Bill From ncoghlan at gmail.com Sat Apr 29 09:17:48 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 29 Apr 2006 17:17:48 +1000 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <43aa6ff70604282139u4749f95eo18c6d1cafe729366@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> <43aa6ff70604282139u4749f95eo18c6d1cafe729366@mail.gmail.com> Message-ID: <4453131C.706@gmail.com> Collin Winter wrote: > I'd very much like to see functional renamed to functools, and I've > cooked up the necessary changes to handle the move (SF #1478788). +1 since there are utilities like "decorator" and "deprecated" which belong in such a module, but don't fit within the "functional programming" meme suggested by the current name. If we're happy with the name change, I can make sure this change goes in before alpha 3 (probably sooner, to make it easier for anyone interested to work on a patch for @decorator). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From amk at amk.ca Sat Apr 29 14:31:00 2006 From: amk at amk.ca (A.M. Kuchling) Date: Sat, 29 Apr 2006 08:31:00 -0400 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <20060427231321.CB16A1E4006@bag.python.org> References: <20060427231321.CB16A1E4006@bag.python.org> Message-ID: <20060429123059.GA2021@Andrew-iBook2.local> On Fri, Apr 28, 2006 at 01:13:21AM +0200, thomas.wouters wrote: > - Warn-raise ImportWarning when importing would have picked up a directory > as package, if only it'd had an __init__.py. This swaps two tests (for > case-ness and __init__-ness), but case-test is not really more expensive, > and it's not in a speed-critical section. 
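(Purely for context, a minimal sketch of how the new warning can be surfaced or silenced with the standard warnings machinery; 'mypkg' below is a made-up directory on sys.path that lacks an __init__.py:)

    import warnings

    # Turn the new ImportWarning into a hard error while chasing a layout
    # problem (use "ignore" instead to silence it):
    warnings.filterwarnings("error", category=ImportWarning)

    try:
        # under the checkin quoted above, tripping over a directory without
        # an __init__.py during this failed import now issues ImportWarning
        import mypkg
    except (ImportWarning, ImportError), exc:
        print "import problem:", exc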
For the what's new, I'd like to clarify the purpose of this change. Is the plan to make __init__.py optional in subpackages in 2.6, and this warning is a first step toward that? Or is this just to improve the error reporting when a directory lacking an __init__.py is found, and no further changes will be in 2.6? --amk From jcarlson at uci.edu Sat Apr 29 15:12:29 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 29 Apr 2006 06:12:29 -0700 Subject: [Python-Dev] Crazy idea for str.join Message-ID: <20060429055551.672C.JCARLSON@uci.edu> I understand the underlying implementation of str.join can be a bit convoluted (with the auto-promotion to unicode and all), but I don't suppose there is any chance to get str.join to support objects which implement the buffer interface as one of the items in the sequence? Something like: y = 'hello world' buf1 = buffer(y, 0, 5) buf2 = buffer(y, 6) print ''.join([buf1, buf2]) should print "helloworld" It would take a PyBuffer_Check() just after the PyUnicode_Check() block. And one may be able to replace the other PyString_* calls with PyObject_AsCharBuffer() calls. I'm not going to push for this, but someone other than me may find such functionality useful. - Josiah P.S. One interesting thing to note is that quite a few underlying objects take buffer objects in lieu of strings, even when they aren't documented as doing so; file.write(), array.fromstring(), mmap[i:j] = buffer, etc. From g.brandl at gmx.net Sat Apr 29 15:12:16 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 29 Apr 2006 15:12:16 +0200 Subject: [Python-Dev] 2.5 open issues In-Reply-To: <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> References: <ee2a432c0604272258t480f527fm5dfe93f5cea53d42@mail.gmail.com> <20060428122045.GB8336@localhost.localdomain> <ee2a432c0604282005l574b7472u88a586b7e5ca3a76@mail.gmail.com> Message-ID: <e2vonh$4gr$2@sea.gmane.org> Neal Norwitz wrote: > Here's what's left for 2.5 after the most recent go around. > > There's no owner for these items. If no one takes them, they won't > get done and I will move them to deferred within a week: > > * Add @decorator decorator to functional, rename to functools? > What's the benefit of @decorator? Creating decorators that don't hinder introspection. > Who made functional? It's new > in 2.5, right? If so, move it or it will be functional for all 2.x. > > * Remove the fpectl module? > Does anyone use this? It can probably be removed, but someone > needs to do the work. > > * new icons is lost and needs a shepherd to make python look spiffy I've already done the work for Unixy platforms. Georg From g.brandl at gmx.net Sat Apr 29 15:12:09 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 29 Apr 2006 15:12:09 +0200 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <20060429123059.GA2021@Andrew-iBook2.local> References: <20060427231321.CB16A1E4006@bag.python.org> <20060429123059.GA2021@Andrew-iBook2.local> Message-ID: <e2von9$4gr$1@sea.gmane.org> A.M. Kuchling wrote: > On Fri, Apr 28, 2006 at 01:13:21AM +0200, thomas.wouters wrote: >> - Warn-raise ImportWarning when importing would have picked up a directory >> as package, if only it'd had an __init__.py. This swaps two tests (for >> case-ness and __init__-ness), but case-test is not really more expensive, >> and it's not in a speed-critical section. > > For the what's new, I'd like to clarify the purpose of this change. 
> Is the plan to make __init__.py optional in subpackages in 2.6, and > this warning is a first step toward that? Or is this just to improve > the error reporting when a directory lacking an __init__.py is found, > and no further changes will be in 2.6? From what I have read out of the quite lengthy thread on this topic, there's no decision yet. Georg From ncoghlan at gmail.com Sat Apr 29 15:20:46 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 29 Apr 2006 23:20:46 +1000 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <20060429123059.GA2021@Andrew-iBook2.local> References: <20060427231321.CB16A1E4006@bag.python.org> <20060429123059.GA2021@Andrew-iBook2.local> Message-ID: <4453682E.4000403@gmail.com> A.M. Kuchling wrote: > On Fri, Apr 28, 2006 at 01:13:21AM +0200, thomas.wouters wrote: >> - Warn-raise ImportWarning when importing would have picked up a directory >> as package, if only it'd had an __init__.py. This swaps two tests (for >> case-ness and __init__-ness), but case-test is not really more expensive, >> and it's not in a speed-critical section. > > For the what's new, I'd like to clarify the purpose of this change. > Is the plan to make __init__.py optional in subpackages in 2.6, and > this warning is a first step toward that? Or is this just to improve > the error reporting when a directory lacking an __init__.py is found, > and no further changes will be in 2.6? I think it's hard to say because that thread moved so fast :) FWIW, my interpretation was that there was some degree of consensus that better error reporting for this situation was a good thing, but Guido still has a bit of persuading to do if he wants to make an empty __init__.py optional in subpackages for Python 2.6. So the relatively non-controversial bit (improving the error reporting) was added immediately, and the controversial bit postponed to see if the better error reporting had any effect on the demand for it. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From edloper at gradient.cis.upenn.edu Sat Apr 29 15:59:15 2006 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Sat, 29 Apr 2006 09:59:15 -0400 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <20060429055551.672C.JCARLSON@uci.edu> References: <20060429055551.672C.JCARLSON@uci.edu> Message-ID: <44537133.2000004@gradient.cis.upenn.edu> Josiah Carlson wrote: > [...] get str.join to support objects which > implement the buffer interface as one of the items in the sequence? > > Something like: > > y = 'hello world' > buf1 = buffer(y, 0, 5) > buf2 = buffer(y, 6) > print ''.join([buf1, buf2]) > > should print "helloworld" This is incompatible with the recent proposal making str.join automatically str-ify its arguments. i.e.: ''.join(['a', 12, 'b']) -> 'a12b'. I don't feel strongly about either proposal, I just thought I'd point out that they're mutually exclusive. -Edward From ncoghlan at gmail.com Sat Apr 29 16:10:43 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 30 Apr 2006 00:10:43 +1000 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <44537133.2000004@gradient.cis.upenn.edu> References: <20060429055551.672C.JCARLSON@uci.edu> <44537133.2000004@gradient.cis.upenn.edu> Message-ID: <445373E3.2050607@gmail.com> Edward Loper wrote: > This is incompatible with the recent proposal making str.join > automatically str-ify its arguments. 
i.e.: > > ''.join(['a', 12, 'b']) -> 'a12b'. > > I don't feel strongly about either proposal, I just thought I'd point > out that they're mutually exclusive. Doesn't accepting objects that support the buffer interface come for free with stringification? (My understanding of buffer objects is fairly sketchy, so I may be missing something here. . .) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From edloper at gradient.cis.upenn.edu Sat Apr 29 16:15:40 2006 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Sat, 29 Apr 2006 10:15:40 -0400 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <445373E3.2050607@gmail.com> References: <20060429055551.672C.JCARLSON@uci.edu> <44537133.2000004@gradient.cis.upenn.edu> <445373E3.2050607@gmail.com> Message-ID: <4453750C.1060204@gradient.cis.upenn.edu> Nick Coghlan wrote: > Edward Loper wrote: >> This is incompatible with the recent proposal making str.join >> automatically str-ify its arguments. i.e.: >> >> ''.join(['a', 12, 'b']) -> 'a12b'. >> >> I don't feel strongly about either proposal, I just thought I'd point >> out that they're mutually exclusive. > > Doesn't accepting objects that support the buffer interface come for > free with stringification? (My understanding of buffer objects is fairly > sketchy, so I may be missing something here. . .) It's quite possible that I'm the one who was confused.. I just tried it out with string-based buffers, and for that case at least it works fine. I don't know about other buffers. So actually these two proposals might be compatible after all. -Edward From fredrik at pythonware.com Sat Apr 29 20:54:00 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 29 Apr 2006 20:54:00 +0200 Subject: [Python-Dev] introducing the experimental pyref wiki Message-ID: <e30coa$u43$1@sea.gmane.org> the pytut wiki (http://pytut.infogami.com/) has now been up and running for one month, and has seen well over 250 edits from over a dozen contributors. to celebrate this, and to exercise the toolchain that I've deve- loped for pytut and pyfaq (http://pyfaq.infogami.com/), I spent a few hours putting together a hyperlinked mashup of the language reference and portions of the library reference: http://pyref.infogami.com/ a couple of notes: - all important "concepts" have unique URLs: this includes key- words, types, special methods and attributes, statements, builtin functions, and exceptions. - the conversion and structure is a bit rough; especially the syntax/data model/execution model parts needs some serious refactoring. the "concept pages" are in a lot better shape. - registered users can add comments to all pages (editing is currently "by invitation only"; mail me your infogami account if you want to help!) - the documentation style used at the pyref site is tuned for authoring; an "end-user rendering" for python.org can look a lot different. (all three sites can be made available in glorious XHTML for inclusion in arbitrary toolchains). enjoy! 
</F> From jcarlson at uci.edu Sat Apr 29 21:36:07 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 29 Apr 2006 12:36:07 -0700 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <4453750C.1060204@gradient.cis.upenn.edu> References: <445373E3.2050607@gmail.com> <4453750C.1060204@gradient.cis.upenn.edu> Message-ID: <20060429122737.672F.JCARLSON@uci.edu> Edward Loper <edloper at gradient.cis.upenn.edu> wrote: > > Nick Coghlan wrote: > > Edward Loper wrote: > >> This is incompatible with the recent proposal making str.join > >> automatically str-ify its arguments. i.e.: > >> > >> ''.join(['a', 12, 'b']) -> 'a12b'. > >> > >> I don't feel strongly about either proposal, I just thought I'd point > >> out that they're mutually exclusive. > > > > Doesn't accepting objects that support the buffer interface come for > > free with stringification? (My understanding of buffer objects is fairly > > sketchy, so I may be missing something here. . .) > > It's quite possible that I'm the one who was confused.. I just tried it > out with string-based buffers, and for that case at least it works fine. > I don't know about other buffers. So actually these two proposals > might be compatible after all. The point of my proposal was to prevent an unnecessary level of copying during a str.join . Right now, if one manually uses str(buf) (or indexing or slicing), it copies the contents of the buffer into a string (likely via PyString_FromStringAndSize() ). By using PyObject_AsCharBuffer(), and using memcpy on that pointer rather than the pointer returned by PyString_AsString() from any (possibly auto-stringified) strings, it would reduce memory copies by half for those non-string buffer-supporting arguments. At least for the examples of buffers that I've seen, using the buffer interface for objects that support it is equivalent to automatically applying str() to them. This is, strictly speaking, an optimization. - Josiah From fredrik at pythonware.com Sat Apr 29 22:06:08 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sat, 29 Apr 2006 22:06:08 +0200 Subject: [Python-Dev] Crazy idea for str.join References: <445373E3.2050607@gmail.com><4453750C.1060204@gradient.cis.upenn.edu> <20060429122737.672F.JCARLSON@uci.edu> Message-ID: <e30gvi$a6j$1@sea.gmane.org> Josiah Carlson wrote: > At least for the examples of buffers that I've seen, using the buffer > interface for objects that support it is equivalent to automatically > applying str() to them. This is, strictly speaking, an optimization. >>> a = array.array("i", [1, 2, 3]) >>> str(a) "array('i', [1, 2, 3])" >>> str(buffer(a)) '\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' </F> From thomas at python.org Sat Apr 29 22:12:35 2006 From: thomas at python.org (Thomas Wouters) Date: Sat, 29 Apr 2006 22:12:35 +0200 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <4453682E.4000403@gmail.com> References: <20060427231321.CB16A1E4006@bag.python.org> <20060429123059.GA2021@Andrew-iBook2.local> <4453682E.4000403@gmail.com> Message-ID: <9e804ac0604291312k557a035cwfb888fdcb17c81f9@mail.gmail.com> On 4/29/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > So the relatively non-controversial bit (improving the error reporting) > was > added immediately, and the controversial bit postponed to see if the > better > error reporting had any effect on the demand for it. This is exactly how I intended it, and I believe that's Guido's desire, too. Otherwise, he would've asked me to make the warning a FutureWarning, instead. 
(Which could of course still happen.) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060429/afb8280b/attachment.html From theller at python.net Sat Apr 29 22:43:39 2006 From: theller at python.net (Thomas Heller) Date: Sat, 29 Apr 2006 22:43:39 +0200 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <e30gvi$a6j$1@sea.gmane.org> References: <445373E3.2050607@gmail.com><4453750C.1060204@gradient.cis.upenn.edu> <20060429122737.672F.JCARLSON@uci.edu> <e30gvi$a6j$1@sea.gmane.org> Message-ID: <e30j5q$fom$1@sea.gmane.org> Fredrik Lundh wrote: > Josiah Carlson wrote: > >> At least for the examples of buffers that I've seen, using the buffer >> interface for objects that support it is equivalent to automatically >> applying str() to them. This is, strictly speaking, an optimization. > > >>> a = array.array("i", [1, 2, 3]) > >>> str(a) > "array('i', [1, 2, 3])" > >>> str(buffer(a)) > '\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' or: thomas at tubu:~$ python2.4 Python 2.4.2 (#2, Sep 30 2005, 22:19:27) [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> str(buffer(u"abc")) 'a\x00\x00\x00b\x00\x00\x00c\x00\x00\x00' >>> Thomas From fperez.net at gmail.com Sat Apr 29 23:19:17 2006 From: fperez.net at gmail.com (Fernando Perez) Date: Sat, 29 Apr 2006 15:19:17 -0600 Subject: [Python-Dev] Google Summer of Code proposal: improvement of long int and adding new types/modules. References: <44488AC9.10304@vp.pl> Message-ID: <e30l97$m58$1@sea.gmane.org> Hi all, Mateusz Rukowicz wrote: > I wish to participate in Google Summer of Code as a python developer. I > have few ideas, what would be improved and added to python. Since these > changes and add-ons would be codded in C, and added to python-core > and/or as modules,I am not sure, if you are willing to agree with these > ideas. > > First of all, I think, it would be good idea to speed up long int > implementation in python. Especially multiplying and converting > radix-2^k to radix-10^l. It might be done, using much faster algorithms > than already used, and transparently improve efficiency of multiplying > and printing/reading big integers. > > Next thing I would add is multi precision floating point type to the > core and fraction type, which in some cases highly improves operations, > which would have to be done using floating point instead. > Of course, math module will need update to support multi precision > floating points, and with that, one could compute asin or any other > function provided with math with precision limited by memory and time. > It would be also good idea to add function which computes pi and exp > with unlimited precision. > And last thing - It would be nice to add some number-theory functions to > math module (or new one), like prime-tests, factorizations etc. Sorry for pitching in late, I was away for a while. I'd just like to point out in the context of this discussion: http://sage.scipy.org/sage/ SAGE is a fairly comprehensive system built on top of python to do all sorts of research-level number theory work, from basic things up to unpronouncable ones. It includes wrappers to many of the major number-theory related libraries which exist with an open-source license. 
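As a purely illustrative aside on the 'prime-tests' part of the quoted proposal (this is not SAGE code; the function name and round count are arbitrary), a minimal probable-prime test is easy to sketch in plain Python, since pow() already does modular exponentiation on longs:

    import random

    def is_probable_prime(n, rounds=20):
        # Miller-Rabin probabilistic primality test (sketch only).
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        # write n - 1 as d * 2**s with d odd
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(s - 1):
                x = (x * x) % n
                if x == n - 1:
                    break
            else:
                return False
        return True

    print is_probable_prime(2 ** 89 - 1)    # True (a Mersenne prime)
    print is_probable_prime(2 ** 89 + 1)    # False (divisible by 3)

Anything along these lines that went into the stdlib would of course want the faster long-int arithmetic discussed above underneath it.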
I am not commenting one way or another on your proposal, just bringing up a project with a lot of relevance to what you are talking about. Cheers, f From jcarlson at uci.edu Sun Apr 30 00:11:58 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Sat, 29 Apr 2006 15:11:58 -0700 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <e30j5q$fom$1@sea.gmane.org> References: <e30gvi$a6j$1@sea.gmane.org> <e30j5q$fom$1@sea.gmane.org> Message-ID: <20060429150648.6738.JCARLSON@uci.edu> Thomas Heller <theller at python.net> wrote: > > Fredrik Lundh wrote: > > Josiah Carlson wrote: > > > >> At least for the examples of buffers that I've seen, using the buffer > >> interface for objects that support it is equivalent to automatically > >> applying str() to them. This is, strictly speaking, an optimization. > > > > >>> a = array.array("i", [1, 2, 3]) > > >>> str(a) > > "array('i', [1, 2, 3])" > > >>> str(buffer(a)) > > '\x01\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00' > > or: > > thomas at tubu:~$ python2.4 > Python 2.4.2 (#2, Sep 30 2005, 22:19:27) > [GCC 4.0.2 20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu8)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> str(buffer(u"abc")) > 'a\x00\x00\x00b\x00\x00\x00c\x00\x00\x00' > >>> Hrm, those are interesting examples. It would seem as though checking to see if a passed object was a buffer itself, rather than whether it supported the buffer interface, would be more correct. - Josiah From talin at acm.org Sat Apr 29 20:27:49 2006 From: talin at acm.org (Talin) Date: Sat, 29 Apr 2006 11:27:49 -0700 Subject: [Python-Dev] PEP 3102: Keyword-only arguments Message-ID: <4453B025.3080100@acm.org> PEP: 3102 Title: Keyword-Only Arguments Version: $Revision$ Last-Modified: $Date$ Author: Talin <talin at acm.org> Status: Draft Type: Standards Content-Type: text/plain Created: 22-Apr-2006 Python-Version: 3.0 Post-History: Abstract This PEP proposes a change to the way that function arguments are assigned to named parameter slots. In particular, it enables the declaration of "keyword-only" arguments: arguments that can only be supplied by keyword and which will never be automatically filled in by a positional argument. Rationale The current Python function-calling paradigm allows arguments to be specified either by position or by keyword. An argument can be filled in either explicitly by name, or implicitly by position. There are often cases where it is desirable for a function to take a variable number of arguments. The Python language supports this using the 'varargs' syntax ('*name'), which specifies that any 'left over' arguments be passed into the varargs parameter as a tuple. One limitation on this is that currently, all of the regular argument slots must be filled before the vararg slot can be. This is not always desirable. One can easily envision a function which takes a variable number of arguments, but also takes one or more 'options' in the form of keyword arguments. Currently, the only way to do this is to define both a varargs argument, and a 'keywords' argument (**kwargs), and then manually extract the desired keywords from the dictionary. Specification Syntactically, the proposed changes are fairly simple. The first change is to allow regular arguments to appear after a varargs argument: def sortwords(*wordlist, case_sensitive=False): ... This function accepts any number of positional arguments, and it also accepts a keyword option called 'case_sensitive'. 
This option will never be filled in by a positional argument, but must be explicitly specified by name. Keyword-only arguments are not required to have a default value. Since Python requires that all arguments be bound to a value, and since the only way to bind a value to a keyword-only argument is via keyword, such arguments are therefore 'required keyword' arguments. Such arguments must be supplied by the caller, and they must be supplied via keyword. The second syntactical change is to allow the argument name to be omitted for a varargs argument: def compare(a, b, *, key=None): ... The reasoning behind this change is as follows. Imagine for a moment a function which takes several positional arguments, as well as a keyword argument: def compare(a, b, key=None): ... Now, suppose you wanted to have 'key' be a keyword-only argument. Under the above syntax, you could accomplish this by adding a varargs argument immediately before the keyword argument: def compare(a, b, *ignore, key=None): ... Unfortunately, the 'ignore' argument will also suck up any erroneous positional arguments that may have been supplied by the caller. Given that we'd prefer any unwanted arguments to raise an error, we could do this: def compare(a, b, *ignore, key=None): if ignore: # If ignore is not empty raise TypeError As a convenient shortcut, we can simply omit the 'ignore' name, meaning 'don't allow any positional arguments beyond this point'. Function Calling Behavior The previous section describes the difference between the old behavior and the new. However, it is also useful to have a description of the new behavior that stands by itself, without reference to the previous model. So this next section will attempt to provide such a description. When a function is called, the input arguments are assigned to formal parameters as follows: - For each formal parameter, there is a slot which will be used to contain the value of the argument assigned to that parameter. - Slots which have had values assigned to them are marked as 'filled'. Slots which have no value assigned to them yet are considered 'empty'. - Initially, all slots are marked as empty. - Positional arguments are assigned first, followed by keyword arguments. - For each positional argument: o Attempt to bind the argument to the first unfilled parameter slot. If the slot is not a vararg slot, then mark the slot as 'filled'. o If the next unfilled slot is a vararg slot, and it does not have a name, then it is an error. o Otherwise, if the next unfilled slot is a vararg slot then all remaining non-keyword arguments are placed into the vararg slot. - For each keyword argument: o If there is a parameter with the same name as the keyword, then the argument value is assigned to that parameter slot. However, if the parameter slot is already filled, then that is an error. o Otherwise, if there is a 'keyword dictionary' argument, the argument is added to the dictionary using the keyword name as the dictionary key, unless there is already an entry with that key, in which case it is an error. o Otherwise, if there is no keyword dictionary, and no matching named parameter, then it is an error. - Finally: o If the vararg slot is not yet filled, assign an empty tuple as its value. o For each remaining empty slot: if there is a default value for that slot, then fill the slot with the default value. If there is no default value, then it is an error. In accordance with the current Python implementation, any errors encountered will be signaled by raising TypeError. 
(If you want something different, that's a subject for a different PEP.) Backwards Compatibility The function calling behavior specified in this PEP is a superset of the existing behavior - that is, it is expected that any existing programs will continue to work. Copyright This document has been placed in the public domain. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From steven.bethard at gmail.com Sun Apr 30 03:50:10 2006 From: steven.bethard at gmail.com (Steven Bethard) Date: Sat, 29 Apr 2006 19:50:10 -0600 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <4453B025.3080100@acm.org> References: <4453B025.3080100@acm.org> Message-ID: <d11dcfba0604291850q7b1eeac3y57b6793dd038ceeb@mail.gmail.com> On 4/29/06, Talin <talin at acm.org> wrote: > PEP: 3102 > Title: Keyword-Only Arguments > Version: $Revision$ > Last-Modified: $Date$ > Author: Talin <talin at acm.org> > Status: Draft > Type: Standards > Content-Type: text/plain > Created: 22-Apr-2006 > Python-Version: 3.0 > Post-History: > > > Abstract > > This PEP proposes a change to the way that function arguments are > assigned to named parameter slots. In particular, it enables the > declaration of "keyword-only" arguments: arguments that can only > be supplied by keyword and which will never be automatically > filled in by a positional argument. +1. And I suggest this be re-targeted for Python 2.6. Now all we need is someone to implement this. ;-) By the way, I thought the "Function Calling Behavior" section was particularly clear. Thanks for that! STeVe -- Grammar am for people who can't think for myself. --- Bucky Katt, Get Fuzzy From guido at python.org Sun Apr 30 04:52:08 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 29 Apr 2006 19:52:08 -0700 Subject: [Python-Dev] Crazy idea for str.join In-Reply-To: <20060429055551.672C.JCARLSON@uci.edu> References: <20060429055551.672C.JCARLSON@uci.edu> Message-ID: <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> On 4/29/06, Josiah Carlson <jcarlson at uci.edu> wrote: > I understand the underlying implementation of str.join can be a bit > convoluted (with the auto-promotion to unicode and all), but I don't > suppose there is any chance to get str.join to support objects which > implement the buffer interface as one of the items in the sequence? In Py3k, buffers won't be compatible with strings -- buffers will be about bytes, while strings will be about characters. Given that future I don't think we should mess with the semantics in 2.x; one change in the near(ish) future is enough of a transition. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Sun Apr 30 04:59:04 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 29 Apr 2006 19:59:04 -0700 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <9e804ac0604291312k557a035cwfb888fdcb17c81f9@mail.gmail.com> References: <20060427231321.CB16A1E4006@bag.python.org> <20060429123059.GA2021@Andrew-iBook2.local> <4453682E.4000403@gmail.com> <9e804ac0604291312k557a035cwfb888fdcb17c81f9@mail.gmail.com> Message-ID: <ca471dc20604291959p37b47f67k18f888dd5b9b54a7@mail.gmail.com> Actually after all the -1 responses I wasn't going to reconsider this for 2.x; maybe for py3k. Most of the -1 votes were unconditional; only a few respondents thought that I was proposing it for 2.5. 
--Guido On 4/29/06, Thomas Wouters <thomas at python.org> wrote: > > > On 4/29/06, Nick Coghlan <ncoghlan at gmail.com> wrote: > > So the relatively non-controversial bit (improving the error reporting) > was > > added immediately, and the controversial bit postponed to see if the > better > > error reporting had any effect on the demand for it. > > > This is exactly how I intended it, and I believe that's Guido's desire, too. > Otherwise, he would've asked me to make the warning a FutureWarning, > instead. (Which could of course still happen.) > > -- > Thomas Wouters <thomas at python.org> > > Hi! I'm a .signature virus! copy me into your .signature file to help me > spread! > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Sun Apr 30 05:27:26 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 29 Apr 2006 23:27:26 -0400 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: References: <20060427231321.CB16A1E4006@bag.python.org><20060429123059.GA2021@Andrew-iBook2.local><4453682E.4000403@gmail.com><9e804ac0604291312k557a035cwfb888fdcb17c81f9@mail.gmail.com> <ca471dc20604291959p37b47f67k18f888dd5b9b54a7@mail.gmail.com> Message-ID: <e31aqt$721$1@sea.gmane.org> "Guido van Rossum" <guido at python.org> wrote in message news:ca471dc20604291959p37b47f67k18f888dd5b9b54a7 at mail.gmail.com... > Actually after all the -1 responses I wasn't going to reconsider this > for 2.x; maybe for py3k. I think there may be proposals to review and possibly revise the packing and import mechanisms for 3.0. So I think your comment about str.join "one change in the near(ish) future is enough of a transition." may turn out to be relevant here also. tjr From tjreedy at udel.edu Sun Apr 30 05:30:38 2006 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 29 Apr 2006 23:30:38 -0400 Subject: [Python-Dev] PEP 3102: Keyword-only arguments References: <4453B025.3080100@acm.org> Message-ID: <e31b0t$7i4$1@sea.gmane.org> "Talin" <talin at acm.org> wrote in message news:4453B025.3080100 at acm.org... > > def sortwords(*wordlist, case_sensitive=False): The rationale for this is pretty obvious. But ... > The second syntactical change is to allow the argument name to > be omitted for a varargs argument: > > def compare(a, b, *, key=None): > ... > > The reasoning behind this change is as follows. Imagine for a > moment a function which takes several positional arguments, as > well as a keyword argument: > > def compare(a, b, key=None): > ... > > Now, suppose you wanted to have 'key' be a keyword-only argument. Why? Why not let the user type the additional argument(s) without the parameter name? tjr From brett at python.org Sun Apr 30 06:02:05 2006 From: brett at python.org (Brett Cannon) Date: Sat, 29 Apr 2006 21:02:05 -0700 Subject: [Python-Dev] (hopefully) final draft of externally maintained code PEP Message-ID: <bbaeab100604292102l495b4829gde0900e0760dc10d@mail.gmail.com> Only piece of info I am missing right now is who is in charge of expat. If no one steps forward I will just mark it as N/A. I would also like Gerhard to sign off on pysqlite section. Otherwise this PEP is finished and I will commit it once I get the info I need (or lack thereof) and Gerhard gives me an OK assuming no one else finds a problem with it. 
--------------------------------------------------------- PEP: XXX Title: Externally Maintained Packages Version: $Revision: 43251 $ Last-Modified: $Date: 2006-03-23 06:28:55 -0800 (Thu, 23 Mar 2006) $ Author: Brett Cannon <brett at python.org> Status: Active Type: Informational Content-Type: text/x-rst Created: XX-XXX-2006 Abstract ======== There are many great pieces of Python software developed outside of the Python standard library (aka, stdlib). Sometimes it makes sense to incorporate these externally maintained packages into the stdlib in order to fill a gap in the tools provided by Python. But by having the packages maintained externally it means Python's developers do not have direct control over the packages' evolution and maintenance. Some package developers prefer to have bug reports and patches go through them first instead of being directly applied to Python's repository. This PEP is meant to record details of packages in the stdlib that are maintained outside of Python's repository. Specifically, it is meant to keep track of any specific maintenance needs for each package. It also is meant to allow people to know which version of a package is released with which version of Python. Externally Maintained Packages ============================== The section title is the name of the package as known outside of the Python standard library. The "standard library name" is what the package is named within Python. The "contact person" is the Python developer in charge of maintaining the package. The "synchronisation history" lists what external version of the package was included in each version of Python (if different from the previous Python release). ctypes ------ - Web page http://starship.python.net/crew/theller/ctypes/ - Standard library name ctypes - Contact person Thomas Heller - Synchronisation history * 0.9.9.6 (2.5) Bugs can be reported to either the Python tracker [#python-tracker]_ or the ctypes tracker [#ctypes-tracker]_ and assigned to Thomas Heller. ElementTree ----------- - Web page http://effbot.org/zone/element-index.htm - Standard library name xml.etree - Contact person Fredrik Lundh - Synchronisation history * 1.2.6 (2.5) Patches should not be directly applied to Python HEAD, but instead reported to the Python tracker [#python-tracker]_ (critical bug fixes are the exception). Bugs should also be reported to the Python tracker. Both bugs and patches should be assigned to Fredrik Lundh. Expat XML parser ---------------- - Web page http://www.libexpat.org/ - Standard library name N/A (this refers to the parser itself, and not the Python bindings) - Contact person XXX - Synchronisation history * 1.95.8 (2.4) * 1.95.7 (2.3) Optik ----- - Web site http://optik.sourceforge.net/ - Standard library name optparse - Contact person Greg Ward - Synchronisation history * 1.5.1 (2.5) * 1.5a1 (2.4) * 1.4 (2.3) pysqlite -------- - Web site http://www.sqlite.org/ - Standard library name sqlite3 - Contact person Gerhard H??ring - Synchronisation history * 2.2.2 (2.5) Bugs should be reported to the pysqlite bug tracker [#pysqlite-tracker]_ as well as any patches that are not deemed critical. References ========== .. [#python-tracker] Python tracker (http://sourceforge.net/tracker/?group_id=5470) .. [#ctypes-tracker] ctypes tracker (http://sourceforge.net/tracker/?group_id=71702) .. [#pysqlite-tracker] pysqlite tracker (http://pysqlite.org/) Copyright ========= This document has been placed in the public domain. .. 
Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From guido at python.org Sun Apr 30 06:05:32 2006 From: guido at python.org (Guido van Rossum) Date: Sat, 29 Apr 2006 21:05:32 -0700 Subject: [Python-Dev] [Python-checkins] r45770 - in python/trunk: In-Reply-To: <e31aqt$721$1@sea.gmane.org> References: <20060427231321.CB16A1E4006@bag.python.org> <20060429123059.GA2021@Andrew-iBook2.local> <4453682E.4000403@gmail.com> <9e804ac0604291312k557a035cwfb888fdcb17c81f9@mail.gmail.com> <ca471dc20604291959p37b47f67k18f888dd5b9b54a7@mail.gmail.com> <e31aqt$721$1@sea.gmane.org> Message-ID: <ca471dc20604292105y5b91144eu420c146dff90d356@mail.gmail.com> I may be misunderstanding what you wrote. I thought that packing and join were *only* being discussed in a py3k context? If they are being discussed in a 2.x context, then yes, these discussions ought to be moved to py3k. --Guido On 4/29/06, Terry Reedy <tjreedy at udel.edu> wrote: > > "Guido van Rossum" <guido at python.org> wrote in message > news:ca471dc20604291959p37b47f67k18f888dd5b9b54a7 at mail.gmail.com... > > Actually after all the -1 responses I wasn't going to reconsider this > > for 2.x; maybe for py3k. > > I think there may be proposals to review and possibly revise the packing > and import mechanisms for 3.0. So I think your comment about str.join "one > change in the near(ish) future is enough of a transition." may turn out to > be relevant here also. > > tjr > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From aahz at pythoncraft.com Sun Apr 30 06:30:15 2006 From: aahz at pythoncraft.com (Aahz) Date: Sat, 29 Apr 2006 21:30:15 -0700 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <4453B025.3080100@acm.org> References: <4453B025.3080100@acm.org> Message-ID: <20060430043015.GA11829@panix.com> On Sat, Apr 29, 2006, Talin wrote: > > Specification > > Syntactically, the proposed changes are fairly simple. The first > change is to allow regular arguments to appear after a varargs > argument: > > def sortwords(*wordlist, case_sensitive=False): > ... > > This function accepts any number of positional arguments, and it > also accepts a keyword option called 'case_sensitive'. This > option will never be filled in by a positional argument, but > must be explicitly specified by name. > > Keyword-only arguments are not required to have a default value. > Since Python requires that all arguments be bound to a value, > and since the only way to bind a value to a keyword-only argument > is via keyword, such arguments are therefore 'required keyword' > arguments. Such arguments must be supplied by the caller, and > they must be supplied via keyword. You should show with simulated doctests what happens when you call required keyword functions correctly and incorrectly. You should also show an example of a required keyword function where the parameter does not have a default value. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." 
--Richard Bach From fredrik at pythonware.com Sun Apr 30 08:13:40 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 30 Apr 2006 08:13:40 +0200 Subject: [Python-Dev] rich comparisions and old-style classes Message-ID: <e31kip$u6l$1@sea.gmane.org> trying to come up with a more concise description of the rich comparision machinery for pyref.infogami.com, I stumbled upon an oddity that I cannot really explain: in the attached example below, why is the rich comparision machinery doing *four* attempts to use __eq__ in the old- class case? </F> $ more test.py class old: def __init__(self, name): self.name = name def __repr__(self): return self.name def __eq__(self, other): print "EQ", self, other return NotImplemented def __cmp__(self, other): print "CMP", self, other return 0 class new(object): def __init__(self, name): self.name = name def __repr__(self): return self.name def __eq__(self, other): print "EQ", self, other return NotImplemented def __cmp__(self, other): print "CMP", self, other return 0 a = old("A") b = old("B") print a == b a = new("A") b = new("B") print a == b $ python test.py EQ A B EQ B A EQ B A EQ A B CMP A B True EQ A B EQ B A CMP A B True From talin at acm.org Sat Apr 29 20:24:17 2006 From: talin at acm.org (Talin) Date: Sat, 29 Apr 2006 11:24:17 -0700 Subject: [Python-Dev] PEP 3101: Advanced String Formatting Message-ID: <4453AF51.6050605@acm.org> PEP: 3101 Title: Advanced String Formatting Version: $Revision$ Last-Modified: $Date$ Author: Talin <talin at acm.org> Status: Draft Type: Standards Content-Type: text/plain Created: 16-Apr-2006 Python-Version: 3.0 Post-History: Abstract This PEP proposes a new system for built-in string formatting operations, intended as a replacement for the existing '%' string formatting operator. Rationale Python currently provides two methods of string interpolation: - The '%' operator for strings. [1] - The string.Template module. [2] The scope of this PEP will be restricted to proposals for built-in string formatting operations (in other words, methods of the built-in string type). The '%' operator is primarily limited by the fact that it is a binary operator, and therefore can take at most two arguments. One of those arguments is already dedicated to the format string, leaving all other variables to be squeezed into the remaining argument. The current practice is to use either a dictionary or a tuple as the second argument, but as many people have commented [3], this lacks flexibility. The "all or nothing" approach (meaning that one must choose between only positional arguments, or only named arguments) is felt to be overly constraining. While there is some overlap between this proposal and string.Template, it is felt that each serves a distinct need, and that one does not obviate the other. In any case, string.Template will not be discussed here. Specification The specification will consist of 4 parts: - Specification of a set of methods to be added to the built-in string class. - Specification of a new syntax for format strings. - Specification of a new set of class methods to control the formatting and conversion of objects. - Specification of an API for user-defined formatting classes. 
String Methods The build-in string class will gain a new method, 'format', which takes takes an arbitrary number of positional and keyword arguments: "The story of {0}, {1}, and {c}".format(a, b, c=d) Within a format string, each positional argument is identified with a number, starting from zero, so in the above example, 'a' is argument 0 and 'b' is argument 1. Each keyword argument is identified by its keyword name, so in the above example, 'c' is used to refer to the third argument. The result of the format call is an object of the same type (string or unicode) as the format string. Format Strings Brace characters ('curly braces') are used to indicate a replacement field within the string: "My name is {0}".format('Fred') The result of this is the string: "My name is Fred" Braces can be escaped using a backslash: "My name is {0} :-\{\}".format('Fred') Which would produce: "My name is Fred :-{}" The element within the braces is called a 'field'. Fields consist of a name, which can either be simple or compound, and an optional 'conversion specifier'. Simple names are either names or numbers. If numbers, they must be valid decimal numbers; if names, they must be valid Python identifiers. A number is used to identify a positional argument, while a name is used to identify a keyword argument. Compound names are a sequence of simple names seperated by periods: "My name is {0.name} :-\{\}".format(dict(name='Fred')) Compound names can be used to access specific dictionary entries, array elements, or object attributes. In the above example, the '{0.name}' field refers to the dictionary entry 'name' within positional argument 0. Each field can also specify an optional set of 'conversion specifiers'. Conversion specifiers follow the field name, with a colon (':') character separating the two: "My name is {0:8}".format('Fred') The meaning and syntax of the conversion specifiers depends on the type of object that is being formatted, however many of the built-in types will recognize a standard set of conversion specifiers. The conversion specifier consists of a sequence of zero or more characters, each of which can consist of any printable character except for a non-escaped '}'. The format() method does not attempt to intepret the conversion specifiers in any way; it merely passes all of the characters between the first colon ':' and the matching right brace ('}') to the various underlying formatters (described later.) Standard Conversion Specifiers For most built-in types, the conversion specifiers will be the same or similar to the existing conversion specifiers used with the '%' operator. Thus, instead of '%02.2x", you will say '{0:2.2x}'. There are a few differences however: - The trailing letter is optional - you don't need to say '2.2d', you can instead just say '2.2'. If the letter is omitted, the value will be converted into its 'natural' form (that is, the form that it take if str() or unicode() were called on it) subject to the field length and precision specifiers (if supplied). - Variable field width specifiers use a nested version of the {} syntax, allowing the width specifier to be either a positional or keyword argument: "{0:{1}.{2}d}".format(a, b, c) (Note: It might be easier to parse if these used a different type of delimiter, such as parens - avoiding the need to create a regex that handles the recursive case.) - The support for length modifiers (which are ignored by Python anyway) is dropped. For non-built-in types, the conversion specifiers will be specific to that type. 
An example is the 'datetime' class, whose conversion specifiers are identical to the arguments to the strftime() function: "Today is: {0:%x}".format(datetime.now()) Controlling Formatting A class that wishes to implement a custom interpretation of its conversion specifiers can implement a __format__ method: class AST: def __format__(self, specifiers): ... The 'specifiers' argument will be either a string object or a unicode object, depending on the type of the original format string. The __format__ method should test the type of the specifiers parameter to determine whether to return a string or unicode object. It is the responsibility of the __format__ method to return an object of the proper type. string.format() will format each field using the following steps: 1) See if the value to be formatted has a __format__ method. If it does, then call it. 2) Otherwise, check the internal formatter within string.format that contains knowledge of certain builtin types. 3) Otherwise, call str() or unicode() as appropriate. User-Defined Formatting Classes The code that interprets format strings can be called explicitly from user code. This allows the creation of custom formatter classes that can override the normal formatting rules. The string and unicode classes will have a class method called 'cformat' that does all the actual work of formatting; The format() method is just a wrapper that calls cformat. The parameters to the cformat function are: -- The format string (or unicode; the same function handles both.) -- A field format hook (see below) -- A tuple containing the positional arguments -- A dict containing the keyword arguments The cformat function will parse all of the fields in the format string, and return a new string (or unicode) with all of the fields replaced with their formatted values. For each field, the cformat function will attempt to call the field format hook with the following arguments: field_hook(value, conversion, buffer) The 'value' field corresponds to the value being formatted, which was retrieved from the arguments using the field name. (The field_hook has no control over the selection of values, only how they are formatted.) The 'conversion' argument is the conversion spec part of the field, which will be either a string or unicode object, depending on the type of the original format string. The 'buffer' argument is a Python array object, either a byte array or unicode character array. The buffer object will contain the partially constructed string; the field hook is free to modify the contents of this buffer if needed. The field_hook will be called once per field. The field_hook may take one of two actions: 1) Return False, indicating that the field_hook will not process this field and the default formatting should be used. This decision should be based on the type of the value object, and the contents of the conversion string. 2) Append the formatted field to the buffer, and return True. Alternate Syntax Naturally, one of the most contentious issues is the syntax of the format strings, and in particular the markup conventions used to indicate fields. Rather than attempting to exhaustively list all of the various proposals, I will cover the ones that are most widely used already. - Shell variable syntax: $name and $(name) (or in some variants, ${name}). This is probably the oldest convention out there, and is used by Perl and many others. When used without the braces, the length of the variable is determined by lexically scanning until an invalid character is found. 
This scheme is generally used in cases where interpolation is implicit - that is, in environments where any string can contain interpolation variables, and no special subsitution function need be invoked. In such cases, it is important to prevent the interpolation behavior from occuring accidentally, so the '$' (which is otherwise a relatively uncommonly-used character) is used to signal when the behavior should occur. It is the author's opinion, however, that in cases where the formatting is explicitly invoked, that less care needs to be taken to prevent accidental interpolation, in which case a lighter and less unwieldy syntax can be used. - Printf and its cousins ('%'), including variations that add a field index, so that fields can be interpolated out of order. - Other bracket-only variations. Various MUDs (Multi-User Dungeons) such as MUSH have used brackets (e.g. [name]) to do string interpolation. The Microsoft .Net libraries uses braces ({}), and a syntax which is very similar to the one in this proposal, although the syntax for conversion specifiers is quite different. [4] - Backquoting. This method has the benefit of minimal syntactical clutter, however it lacks many of the benefits of a function call syntax (such as complex expression arguments, custom formatters, etc.). - Other variations include Ruby's #{}, PHP's {$name}, and so on. Sample Implementation A rought prototype of the underlying 'cformat' function has been coded in Python, however it needs much refinement before being submitted. Backwards Compatibility Backwards compatibility can be maintained by leaving the existing mechanisms in place. The new system does not collide with any of the method names of the existing string formatting techniques, so both systems can co-exist until it comes time to deprecate the older system. References [1] Python Library Reference - String formating operations http://docs.python.org/lib/typesseq-strings.html [2] Python Library References - Template strings http://docs.python.org/lib/node109.html [3] [Python-3000] String formating operations in python 3k http://mail.python.org/pipermail/python-3000/2006-April/000285.html [4] Composite Formatting - [.Net Framework Developer's Guide] http://msdn.microsoft.com/library/en-us/cpguide/html/cpconcompositeformatting.asp?frame=true Copyright This document has been placed in the public domain. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: From fredrik at pythonware.com Sun Apr 30 10:21:14 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 30 Apr 2006 10:21:14 +0200 Subject: [Python-Dev] draft of externally maintained packages PEP References: <bbaeab100604251906v1537358asd0828b75437723b7@mail.gmail.com> Message-ID: <e31s1s$d48$1@sea.gmane.org> Brett Cannon wrote: > ElementTree > ----------- > - Web page > http://effbot.org/zone/element-index.htm > - Standard library name > xml.etree > - Contact person > Fredrik Lundh > - Synchronisation history > * 1.2.6 (2.5) xml.etree contains components from both ElementTree and and cElementTree. 
this is a bit more accurate: > - Synchronisation history > * ElementTree 1.2.6 (2.5) > * cElementTree 1.0.5 (2.5) </F> From martin at v.loewis.de Sun Apr 30 10:35:24 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 10:35:24 +0200 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <e31b0t$7i4$1@sea.gmane.org> References: <4453B025.3080100@acm.org> <e31b0t$7i4$1@sea.gmane.org> Message-ID: <445476CC.3080008@v.loewis.de> Terry Reedy wrote: >> Now, suppose you wanted to have 'key' be a keyword-only argument. > > Why? Why not let the user type the additional argument(s) without the > parameter name? Are you asking why that feature (keyword-only arguments) is desirable? That's the whole point of the PEP. Or are you asking why the user shouldn't be allowed to pass keyword-only arguments by omitting the keyword? Because they wouldn't be keyword-only arguments then, anymore. Regards, Martin From thomas at python.org Sun Apr 30 10:59:47 2006 From: thomas at python.org (Thomas Wouters) Date: Sun, 30 Apr 2006 10:59:47 +0200 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <d11dcfba0604291850q7b1eeac3y57b6793dd038ceeb@mail.gmail.com> References: <4453B025.3080100@acm.org> <d11dcfba0604291850q7b1eeac3y57b6793dd038ceeb@mail.gmail.com> Message-ID: <9e804ac0604300159i35a80414r85a7b2800b4ea34b@mail.gmail.com> On 4/30/06, Steven Bethard <steven.bethard at gmail.com> wrote: > > On 4/29/06, Talin <talin at acm.org> wrote: > > This PEP proposes a change to the way that function arguments are > > assigned to named parameter slots. In particular, it enables the > > declaration of "keyword-only" arguments: arguments that can only > > be supplied by keyword and which will never be automatically > > filled in by a positional argument. > > +1. And I suggest this be re-targeted for Python 2.6. Now all we > need is someone to implement this. ;-) Pfft, implementation is easy. I have the impression Talin wants to implement it himself, but even if he doesn't, I'm sure I'll have a free week somewhere in the next year and a half in which I can implement it :) It's not that hard a problem, it just requires a lot of reading of the AST and function-call code (if you haven't read it already.) -- Thomas Wouters <thomas at python.org> Hi! I'm a .signature virus! copy me into your .signature file to help me spread! -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/python-dev/attachments/20060430/af537862/attachment.htm From jcarlson at uci.edu Sun Apr 30 12:01:11 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Sun, 30 Apr 2006 03:01:11 -0700 Subject: [Python-Dev] methods on the bytes object (was: Crazy idea for str.join) In-Reply-To: <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> References: <20060429055551.672C.JCARLSON@uci.edu> <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> Message-ID: <20060430023808.673B.JCARLSON@uci.edu> "Guido van Rossum" <guido at python.org> wrote: > On 4/29/06, Josiah Carlson <jcarlson at uci.edu> wrote: > > I understand the underlying implementation of str.join can be a bit > > convoluted (with the auto-promotion to unicode and all), but I don't > > suppose there is any chance to get str.join to support objects which > > implement the buffer interface as one of the items in the sequence? > > In Py3k, buffers won't be compatible with strings -- buffers will be > about bytes, while strings will be about characters. 
Given that future > I don't think we should mess with the semantics in 2.x; one change in > the near(ish) future is enough of a transition. This brings up something I hadn't thought of previously. While unicode will obviously keep its .join() method when it becomes str in 3.x, will bytes objects get a .join() method? Checking the bytes PEP, very little is described about the type other than it basically being an array of 8 bit integers. That's fine and all, but it kills many of the parsing and modification use-cases that are performed on strings via the non __xxx__ methods. Specifically in the case of bytes.join(), the current common use-case of <literal>.join(...) would become something similar to bytes(<literal>).join(...), unless bytes objects got a syntax... Or maybe I'm missing something? Anyways, when the bytes type was first being discussed, I had hoped that it would basically become array.array("B", ...) + non-unicode str. Allowing for bytes to do everything that str was doing before, plus a few new tricks (almost like an mmap...), minus those operations which require immutability. - Josiah From martin at v.loewis.de Sun Apr 30 12:06:15 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 12:06:15 +0200 Subject: [Python-Dev] methods on the bytes object In-Reply-To: <20060430023808.673B.JCARLSON@uci.edu> References: <20060429055551.672C.JCARLSON@uci.edu> <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> <20060430023808.673B.JCARLSON@uci.edu> Message-ID: <44548C17.9050807@v.loewis.de> Josiah Carlson wrote: > Specifically in the case of bytes.join(), the current common use-case of > <literal>.join(...) would become something similar to > bytes(<literal>).join(...), unless bytes objects got a syntax... Or > maybe I'm missing something? I think what you are missing is that algorithms that currently operate on byte strings should be reformulated to operate on character strings, not reformulated to operate on bytes objects. Regards, Martin From g.brandl at gmx.net Sun Apr 30 12:13:54 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 30 Apr 2006 12:13:54 +0200 Subject: [Python-Dev] Problem with inspect and PEP 302 Message-ID: <e322l2$t1f$1@sea.gmane.org> Recently, the inspect module was updated to conform with PEP 302. Now this is broken: >>> import inspect >>> inspect.stack() The traceback shows clearly what's going on. However, I don't know how to resolve the problem. Georg From arigo at tunes.org Sun Apr 30 12:41:42 2006 From: arigo at tunes.org (Armin Rigo) Date: Sun, 30 Apr 2006 12:41:42 +0200 Subject: [Python-Dev] rich comparisions and old-style classes In-Reply-To: <e31kip$u6l$1@sea.gmane.org> References: <e31kip$u6l$1@sea.gmane.org> Message-ID: <20060430104142.GA851@code0.codespeak.net> Hi Fredrik, On Sun, Apr 30, 2006 at 08:13:40AM +0200, Fredrik Lundh wrote: > trying to come up with a more concise description of the rich > comparision machinery for pyref.infogami.com, That's quite optimistic. It's a known dark area. > I stumbled upon an oddity that I cannot really explain: I'm afraid the only way to understand this is to step through the C code. I'm sure there is a reason along the lines of "well, we tried this and that, now let's try this slightly different thing which might or might not result in the same methods being called again". I notice that you didn't try comparing an old-style instance with a new-style one :-) More pragmatically I'd suggest that you only describe the new-style behavior. 
Old-style classes have two levels of dispatching starting from the introduction of new-style classes in 2.2 and I'm sure that no docs apart from deep technical references should have to worry about that. A bientot, Armin. From fuzzyman at voidspace.org.uk Sun Apr 30 12:56:36 2006 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 30 Apr 2006 11:56:36 +0100 Subject: [Python-Dev] rich comparisions and old-style classes In-Reply-To: <20060430104142.GA851@code0.codespeak.net> References: <e31kip$u6l$1@sea.gmane.org> <20060430104142.GA851@code0.codespeak.net> Message-ID: <445497E4.3070300@voidspace.org.uk> Armin Rigo wrote: > Hi Fredrik, > > On Sun, Apr 30, 2006 at 08:13:40AM +0200, Fredrik Lundh wrote: > >> trying to come up with a more concise description of the rich >> comparision machinery for pyref.infogami.com, >> > > That's quite optimistic. It's a known dark area. > > >> I stumbled upon an oddity that I cannot really explain: >> > > I'm afraid the only way to understand this is to step through the C > code. I'm sure there is a reason along the lines of "well, we tried > this and that, now let's try this slightly different thing which might > or might not result in the same methods being called again". > > I notice that you didn't try comparing an old-style instance with a > new-style one :-) > > More pragmatically I'd suggest that you only describe the new-style > behavior. Old-style classes have two levels of dispatching starting > from the introduction of new-style classes in 2.2 and I'm sure that no > docs apart from deep technical references should have to worry about > that. > > I thought a good rule of thumb (especially for a tutorial) was : Either define ``__cmp__`` *or* define the rich comparison operators. Doing both is a recipe for confusion. Michael Foord > A bientot, > > Armin. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > > From ncoghlan at gmail.com Sun Apr 30 13:01:54 2006 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 30 Apr 2006 21:01:54 +1000 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <e31b0t$7i4$1@sea.gmane.org> References: <4453B025.3080100@acm.org> <e31b0t$7i4$1@sea.gmane.org> Message-ID: <44549922.7020109@gmail.com> Terry Reedy wrote: > "Talin" <talin at acm.org> wrote in message news:4453B025.3080100 at acm.org... >> Now, suppose you wanted to have 'key' be a keyword-only argument. > > Why? Why not let the user type the additional argument(s) without the > parameter name? Because for some functions (e.g. min()/max()) you want to use *args, but support some additional keyword arguments to tweak a few aspects of the operation (like providing a "key=x" option). Currently, to support such options, you need to accept **kwds, then go poking around inside the dict manually. This PEP simply allows you to request that the interpreter do such poking around for you, rather than having to do it yourself at the start of the function. I used to be a fan of the idea, but I'm now tending towards a -0, as I'm starting to think that wanting keyword-only arguments for a signature is a sign that the first argument to the function should have been an iterable, rather than accepting *args. However, there are syntactic payoffs that currently favour using *args instead of accepting an iterable as the first argument to a function. 
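To make the "poking around inside the dict" idiom concrete, here is a minimal sketch (illustrative only; my_min is a made-up name, not min()'s actual implementation) of how a *args function has to emulate a keyword-only option today, and the spelling the PEP would allow instead:

def my_min(*args, **kwds):
    # Manually extract the single supported option...
    key = kwds.pop('key', None)
    # ...and reject anything else, since **kwds would otherwise
    # silently swallow misspelled keywords.
    if kwds:
        raise TypeError("unexpected keyword arguments: %r" % (kwds.keys(),))
    if key is None:
        return min(args)
    return min(args, key=key)

# Under the PEP, the same signature could simply be written as
#     def my_min(*args, key=None): ...
# and the interpreter would do the extraction and error checking itself.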
I recently went into a lot more detail on that topic as part of the py3k discussion of set literals [1]. I'd like to turn that message into the core of a PEP eventually, but given the target would be Python 2.6 at the earliest, there are plenty of other things higher on my to-do list. Cheers, Nick. [1] http://mail.python.org/pipermail/python-3000/2006-April/001504.html -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- http://www.boredomandlaziness.org From aahz at pythoncraft.com Sun Apr 30 15:16:56 2006 From: aahz at pythoncraft.com (Aahz) Date: Sun, 30 Apr 2006 06:16:56 -0700 Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: <4453AF51.6050605@acm.org> References: <4453AF51.6050605@acm.org> Message-ID: <20060430131656.GA13868@panix.com> First of all, I recommend that you post this to comp.lang.python. This is the kind of PEP where wide community involvement is essential to success; be prepared for massive revision. (My point about massive revision would be true regardless of how much consensus has been reached on python-dev -- PEPs often change radically once they're thrown into the wild.) On Sat, Apr 29, 2006, Talin wrote: > > Braces can be escaped using a backslash: > > "My name is {0} :-\{\}".format('Fred') You should include somewhere reasoning for using "\{" and "\}" instead of "{{" and "}}". > Simple names are either names or numbers. If numbers, they must > be valid decimal numbers; if names, they must be valid Python > identifiers. A number is used to identify a positional argument, > while a name is used to identify a keyword argument. s/decimal numbers/decimal integers/ (or possibly "base-10 integers" for absolute clarity) > The parameters to the cformat function are: > > -- The format string (or unicode; the same function handles > both.) > -- A field format hook (see below) > -- A tuple containing the positional arguments > -- A dict containing the keyword arguments > > The cformat function will parse all of the fields in the format > string, and return a new string (or unicode) with all of the > fields replaced with their formatted values. > > For each field, the cformat function will attempt to call the > field format hook with the following arguments: > > field_hook(value, conversion, buffer) You need to explain further what a field format hook is and how one specifies it. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From ncoghlan at iinet.net.au Sun Apr 30 15:23:46 2006 From: ncoghlan at iinet.net.au (Nick Coghlan) Date: Sun, 30 Apr 2006 23:23:46 +1000 Subject: [Python-Dev] More on contextlib - adding back a contextmanager decorator Message-ID: <4454BA62.5080704@iinet.net.au> A few things from the pre-alpha2 context management terminology review have had a chance to run around in the back of my head for a while now, and I'd like to return to a topic Paul Moore brought up during that discussion. Paul had a feeling there should be two generator decorators in contextlib - one for __context__ methods and one for standalone generator functions. However, contextfactory seemed to meet both needs, so we didn't follow the question up for alpha 2. The second link in this chain is the subsequent discussion with Guido about making the context manager and managed context protocols orthogonal. 
With this clearer separation of the two terms to happen in alpha 3, it becomes more reasonable to have two different generator decorators, one for defining managed contexts and one for defining context managers.

The final link is a use case for such a context manager decorator, in the form of a couple of HTML tag example contexts in the contextlib documentation. (contextlib.nested is also a good use case, where caching can be used to ensure certain resources are always acquired in the same order, but the issue is easier to demonstrate using the HTML tag examples)

Firstly, the class-based HTML tag context manager example:

--------------------
class TagClass:
    def __init__(self, name):
        self.name = name

    @contextfactory
    def __context__(self):
        print "<%s>" % self.name
        yield self
        print "</%s>" % self.name

>>> h1_cls = TagClass('h1')
>>> with h1_cls:
...     print "Header A"
...
<h1>
Header A
</h1>
>>> with h1_cls:
...     print "Header B"
...
<h1>
Header B
</h1>
--------------------

Each with statement creates a new context object, so caching the tag object itself works just as you would expect. Unfortunately, the same cannot be said for the generator based version:

--------------------
@contextfactory
def tag_gen(name):
    print "<%s>" % name
    yield
    print "</%s>" % name

>>> h1_gen = tag_gen('h1')
>>> with h1_gen:
...     print "Header A"
...
<h1>
Header A
</h1>
>>> with h1_gen:
...     print "Header B"
...
Traceback (most recent call last):
  ...
RuntimeError: generator didn't yield
--------------------

The managed contexts produced by the context factory aren't reusable, so caching them doesn't work properly - they need to be created afresh for each with statement.

Adding another decorator to define context managers, as Paul suggested, solves this problem neatly. Here's a possible implementation:

def managerfactory(gen_func):
    # Create a context manager factory from a generator function
    context_factory = contextfactory(gen_func)
    def wrapper(*args, **kwds):
        class ContextManager(object):
            def __context__(self):
                return context_factory(*args, **kwds)
        mgr = ContextManager()
        mgr.__context__() # Throwaway context to check arguments
        return mgr
    # Having @functools.decorator would eliminate the next 4 lines
    wrapper.__name__ = context_factory.__name__
    wrapper.__module__ = context_factory.__module__
    wrapper.__doc__ = context_factory.__doc__
    wrapper.__dict__.update(context_factory.__dict__)
    return wrapper

@managerfactory
def tag_gen2(name):
    print "<%s>" % name
    yield
    print "</%s>" % name

>>> h1_gen2 = tag_gen2('h1')
>>> with h1_gen2:
...     print "Header A"
...
<h1>
Header A
</h1>
>>> with h1_gen2:
...     print "Header B"
...
<h1>
Header B
</h1>

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
            http://www.boredomandlaziness.org

From ncoghlan at iinet.net.au  Sun Apr 30 15:56:19 2006
From: ncoghlan at iinet.net.au (Nick Coghlan)
Date: Sun, 30 Apr 2006 23:56:19 +1000
Subject: [Python-Dev] Adding functools.decorator
Message-ID: <4454C203.2060707@iinet.net.au>

Collin Winters has done the work necessary to rename PEP 309's functional module to functools and posted the details to SF [1].
I'd like to take that patch, tweak it so the C module is built as _functools rather than functools, and then add a functools.py consisting of:

from _functools import * # Pick up functools.partial

def _update_wrapper(decorated, func, deco_func):
    # Support naive introspection
    decorated.__module__ = func.__module__
    decorated.__name__ = func.__name__
    decorated.__doc__ = func.__doc__
    decorated.__dict__.update(func.__dict__)
    # Provide access to decorator and original function
    decorated.__decorator__ = deco_func
    decorated.__decorates__ = func

def decorator(deco_func):
    """Wrap a function as an introspection friendly decorator function"""
    def wrapper(func):
        decorated = deco_func(func)
        if decorated is func:
            return func
        _update_wrapper(decorated, func, deco_func)
        return decorated
    # Manually make this decorator introspection friendly
    _update_wrapper(wrapper, deco_func, decorator)
    return wrapper

After typing those four lines of boilerplate to support naive introspection out in full several times for contextlib related decorators, I can testify that doing it by hand gets old really fast :)

Some details that are up for discussion:

- which of a function's special attributes should be copied/updated?
- should the __decorates__ and __decorator__ attributes be added?

If people are happy with this idea, I can make sure it happens before alpha 3.

Cheers,
Nick.

[1] http://www.python.org/sf/1478788

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
            http://www.boredomandlaziness.org

From gabriel.becedillas at corest.com  Fri Apr 28 21:15:54 2006
From: gabriel.becedillas at corest.com (Gabriel Becedillas)
Date: Fri, 28 Apr 2006 16:15:54 -0300
Subject: [Python-Dev] PyThreadState_SetAsyncExc and native extensions
Message-ID: <445269EA.7060802@corest.com>

Does anyone know if PyThreadState_SetAsyncExc stops a thread while it's inside a native extension? I'm trying to stop a testing script that boils down to this:

while True:
    print "aaa"
    native_extension_call()
    print "bbb"

Using PyThreadState_SetAsyncExc the module doesn't stop, but if I add more print statements to increase the chance that PyThreadState_SetAsyncExc is called when the module is executing Python code, the module stops.

From sanxiyn at gmail.com  Sat Apr 29 05:34:51 2006
From: sanxiyn at gmail.com (Sanghyeon Seo)
Date: Sat, 29 Apr 2006 12:34:51 +0900
Subject: [Python-Dev] __getslice__ usage in sre_parse
Message-ID: <5b0248170604282034k26827e3du91b4ce3f9f538828@mail.gmail.com>

Hello,

Python language reference 3.3.6 deprecates __getslice__. I think it's okay that UserList.py has it, but sre_parse shouldn't use it, no?

__getslice__ is not implemented in IronPython, and this breaks usage of _sre.py, a pure-Python implementation of _sre, on IronPython: http://ubique.ch/code/_sre/

_sre.py is needed for me because IronPython's own regex implementation using the underlying .NET implementation is not compatible enough for my applications. I will write a separate bug report for this.

It should be a matter of removing __getslice__ and adding an isinstance(index, slice) check in __getitem__. I would very much appreciate it if this is fixed before Python 2.5.

Seo Sanghyeon

From ben at 666.com  Sat Apr 29 11:16:59 2006
From: ben at 666.com (Ben Wing)
Date: Sat, 29 Apr 2006 04:16:59 -0500
Subject: [Python-Dev] elimination of scope bleeding of iteration variables
Message-ID: <44532F0B.5050804@666.com>

apologies if this has been brought up on python-dev already.
a suggestion i have, perhaps for python 3.0 since it may break some code (but imo it could go into 2.6 or 2.7 because the likely breakage would be very small, see below), is the elimination of the misfeature whereby the iteration variable used in for-loops, list comprehensions, etc. bleeds out into the surrounding scope.

[i'm aware that there is a similar proposal for python 3.0 for list comprehensions specifically, but that's not enough.]

i've been bitten by this problem more than once. most recently, i had a simple function like this:

# Replace property named PROP with NEW in PROPLIST, a list of tuples.
def property_name_replace(prop, new, proplist):
    for i in xrange(len(proplist)):
        if x[i][0] == prop:
            x[i] = (new, x[i][1])

the use of `x' in here is an error, as it should be `proplist'; i previously had a loop `for x in proplist', but ran into the problem that tuples can't be modified, and lists can't be modified except by index. despite this obviously bad code, however, it ran without any complaint from python -- which amazed me, when i finally tracked this problem down. turns out i had two offenders, both way down in the "main" code at the bottom of the file -- a for-loop with loop variable x, and a list comprehension [x for x in output_file_map] -- and suddenly `x' is a global variable. yuck.

i suggest that the scope of list comprehension iteration variables be only the list comprehension itself (see python 3000), and the scope of for-loop iteration variables be only the for-loop (including any following `else' statement -- this way you can access it and store it in some other variable if you really want it).

in practice, i seriously doubt this will break much code and probably could be introduced like the previous scope change: using `from __future__' in python 2.5 or 2.6, and by default in the next version. it should be possible, in most circumstances, to issue a warning about code that relies on the old behavior.

ben

From ncoghlan at gmail.com  Sun Apr 30 16:23:54 2006
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 01 May 2006 00:23:54 +1000
Subject: [Python-Dev] elimination of scope bleeding of iteration variables
In-Reply-To: <44532F0B.5050804@666.com>
References: <44532F0B.5050804@666.com>
Message-ID: <4454C87A.7020701@gmail.com>

Ben Wing wrote:
> apologies if this has been brought up on python-dev already.
>
> a suggestion i have, perhaps for python 3.0 since it may break some code
> (but imo it could go into 2.6 or 2.7 because the likely breakage would
> be very small, see below), is the elimination of the misfeature whereby
> the iteration variable used in for-loops, list comprehensions, etc.
> bleeds out into the surrounding scope.
>
> [i'm aware that there is a similar proposal for python 3.0 for list
> comprehensions specifically, but that's not enough.]

List comprehensions will be fixed in Py3k. However, the scoping of for loop variables won't change, as the current behaviour is essential for search loops that use a break statement to terminate the loop when the item is found. Accordingly, there is plenty of code in the wild that *would* break if the for loop variables were constrained to the for loop, even if your own code wouldn't have such a problem.

Outside pure scripts, significant control flow logic (like for loops) should be avoided at module level. You are typically much better off moving the logic inside a _main() function and invoking it at the end of the module.
This avoids the 'accidental global' problem for all of the script-only variables, not only the ones that happen to be used as for loop variables.

> # Replace property named PROP with NEW in PROPLIST, a list of tuples.
> def property_name_replace(prop, new, proplist):
>     for i in xrange(len(proplist)):
>         if x[i][0] == prop:
>             x[i] = (new, x[i][1])

This wouldn't have helped with your name-change problem, but you've got a lot of unnecessary indexing going on there:

def property_name_replace(prop, new, proplist):
    for i, (name, value) in enumerate(proplist):
        if name == prop:
            proplist[i] = (new, value)

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------
            http://www.boredomandlaziness.org

From simon.belak at gmail.com  Sun Apr 30 16:32:03 2006
From: simon.belak at gmail.com (Simon Belak)
Date: Sun, 30 Apr 2006 16:32:03 +0200
Subject: [Python-Dev] Adding functools.decorator
Message-ID: <a8fa27ac0604300732h19a697cfl7b26ea8d081ac4c7@mail.gmail.com>

Nick Coghlan wrote:
> Some details that are up for discussion:
>
> - which of a function's special attributes should be copied/updated?
> - should the __decorates__ and __decorator__ attributes be added?
>
> If people are happy with this idea, I can make sure it happens before
> alpha 3.

What about preserving the decorated function's signature, like we do in TurboGears [1]?

Cheers,
Simon

[1] http://trac.turbogears.org/turbogears/browser/trunk/turbogears/decorator.py

From guido at python.org  Sun Apr 30 17:22:14 2006
From: guido at python.org (Guido van Rossum)
Date: Sun, 30 Apr 2006 08:22:14 -0700
Subject: [Python-Dev] methods on the bytes object
In-Reply-To: <44548C17.9050807@v.loewis.de>
References: <20060429055551.672C.JCARLSON@uci.edu>
	<ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com>
	<20060430023808.673B.JCARLSON@uci.edu> <44548C17.9050807@v.loewis.de>
Message-ID: <ca471dc20604300822s42d33108nf40b3a669d2fcf02@mail.gmail.com>

On 4/30/06, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> Josiah Carlson wrote:
> > Specifically in the case of bytes.join(), the current common use-case of
> > <literal>.join(...) would become something similar to
> > bytes(<literal>).join(...), unless bytes objects got a syntax... Or
> > maybe I'm missing something?
>
> I think what you are missing is that algorithms that currently operate
> on byte strings should be reformulated to operate on character strings,
> not reformulated to operate on bytes objects.

Well, yes, in most cases, but while attempting to write an I/O library I already had the urge to collect "chunks" I've read in a list and join them later, instead of concatenating repeatedly. I guess I should suppress this urge, and plan to optimize extending a bytes array instead, along the lines of how we optimize lists.

Still, I expect that having a bunch of string-ish methods on bytes arrays would be convenient for certain types of data handling. Of course, only those methods that don't care about character types would be added, but that's a long list: startswith, endswith, index, rindex, find, rfind, split, rsplit, join, count, replace, translate. (I'm skipping center, ljust and rjust since they seem exclusive to text processing. Ditto for strip/lstrip/rstrip although an argument for those could probably be made, with mandatory 2nd arg.)
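As a rough sketch of the two accumulation patterns mentioned above (illustrative only: read_chunk is a made-up stand-in for whatever the I/O layer provides, and bytearray merely plays the role of the mutable bytes type under discussion):

def read_all_by_joining(read_chunk):
    # Collect chunks in a list and pay for one big concatenation at the end.
    chunks = []
    while True:
        chunk = read_chunk()
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

def read_all_by_extending(read_chunk):
    # Grow a single mutable buffer instead; this is the pattern that relies
    # on extend() being cheap, i.e. the optimization hinted at above.
    buf = bytearray()
    while True:
        chunk = read_chunk()
        if not chunk:
            break
        buf.extend(chunk)
    return bytes(buf)

Either way the caller sees the same data; the difference is only in how the intermediate storage grows.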
Unhelpfully y'rs, -- --Guido van Rossum (home page: http://www.python.org/~guido/) From g.brandl at gmx.net Sun Apr 30 17:23:53 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 30 Apr 2006 17:23:53 +0200 Subject: [Python-Dev] Adding functools.decorator In-Reply-To: <4454C203.2060707@iinet.net.au> References: <4454C203.2060707@iinet.net.au> Message-ID: <e32kqa$bq6$1@sea.gmane.org> Nick Coghlan wrote: > Collin Winters has done the work necessary to rename PEP 309's functional > module to functools and posted the details to SF [1]. > > I'd like to take that patch, tweak it so the C module is built as _functools > rather than functools, and then add a functools.py consisting of: I'm all for it. (You could integrate the C version of "decorator" from my SF patch, but I think Python-only is enough). > from _functools import * # Pick up functools.partial > > def _update_wrapper(decorated, func, deco_func): > # Support naive introspection > decorated.__module__ = func.__module__ > decorated.__name__ = func.__name__ > decorated.__doc__ = func.__doc__ > decorated.__dict__.update(func.__dict__) > # Provide access to decorator and original function > decorated.__decorator__ = deco_func > decorated.__decorates__ = func > > def decorator(deco_func): > """Wrap a function as an introspection friendly decorator function""" > def wrapper(func): > decorated = deco_func(func) > if decorated is func: > return func > _update_wrapper(decorated, func, deco_func) > return decorated > # Manually make this decorator introspection friendly > _update_wrapper(wrapper, deco_func, decorator) > return wrapper > > After typing those four lines of boilerplate to support naive introspection > out in full several times for contextlib related decorators, I can testify > that doing it by hand gets old really fast :) Agreed. Georg From guido at python.org Sun Apr 30 17:30:56 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Apr 2006 08:30:56 -0700 Subject: [Python-Dev] __getslice__ usage in sre_parse In-Reply-To: <5b0248170604282034k26827e3du91b4ce3f9f538828@mail.gmail.com> References: <5b0248170604282034k26827e3du91b4ce3f9f538828@mail.gmail.com> Message-ID: <ca471dc20604300830i7cc814f9o183107b6e71329b2@mail.gmail.com> On 4/28/06, Sanghyeon Seo <sanxiyn at gmail.com> wrote: > Hello, > > Python language reference 3.3.6 deprecates __getslice__. I think it's > okay that UserList.py has it, but sre_parse shouldn't use it, no? Well, as long as the deprecated code isn't removed, there's no reason why other library code shouldn't use it. So I disagree that technically there's a reason why sre_parse shouldn't use it. > __getslice__ is not implemented in IronPython and this breaks usage of > _sre.py, a pure-Python implementation of _sre, on IronPython: > http://ubique.ch/code/_sre/ > > _sre.py is needed for me because IronPython's own regex implementation > using underlying .NET implementation is not compatible enough for my > applications. I will write a separate bug report for this. > > It should be a matter of removing __getslice__ and adding > isinstance(index, slice) check in __getitem__. I would very much > appreciate it if this is fixed before Python 2.5. You can influence the fix yourself -- please write a patch (relative to Python 2.5a2 that was just released), submit it to Python's patch tracker on SourceForge (read python.org/dev first), and then sending an email here to alert the developers. This ought to be done well before the planned 2.5b1 release (see PEP 256 for the 2.5 release timeline). 
You should make sure that the patched Python 2.5 passes all unit tests before submitting your test. Good luck! -- --Guido van Rossum (home page: http://www.python.org/~guido/) From aahz at pythoncraft.com Sun Apr 30 17:37:00 2006 From: aahz at pythoncraft.com (Aahz) Date: Sun, 30 Apr 2006 08:37:00 -0700 Subject: [Python-Dev] PyThreadState_SetAsyncExc and native extensions In-Reply-To: <445269EA.7060802@corest.com> References: <445269EA.7060802@corest.com> Message-ID: <20060430153700.GA17173@panix.com> On Fri, Apr 28, 2006, Gabriel Becedillas wrote: > > Does anyone know if PyThreadState_SetAsyncExc stops a thread while its > inside a native extension ? Please take this question to comp.lang.python; python-dev is for discussion about changes to the Python interpreter/language, not for questions about using Python. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Argue for your limitations, and sure enough they're yours." --Richard Bach From pje at telecommunity.com Sun Apr 30 17:40:59 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 30 Apr 2006 11:40:59 -0400 Subject: [Python-Dev] methods on the bytes object In-Reply-To: <ca471dc20604300822s42d33108nf40b3a669d2fcf02@mail.gmail.co m> References: <44548C17.9050807@v.loewis.de> <20060429055551.672C.JCARLSON@uci.edu> <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> <20060430023808.673B.JCARLSON@uci.edu> <44548C17.9050807@v.loewis.de> Message-ID: <5.1.1.6.0.20060430113833.01e62e20@mail.telecommunity.com> At 08:22 AM 4/30/2006 -0700, Guido van Rossum wrote: >Still, I expect that having a bunch of string-ish methods on bytes >arrays would be convenient for certain types of data handling. Of >course, only those methods that don't care about character types would >be added, but that's a long list: startswith, endswith, index, rindex, >find, rfind, split, rsplit, join, count, replace, translate. I've often wished *lists* had startswith and endswith, and somewhat less often wished they had split or rsplit. Those seem like things that are generally applicable to sequences, not just strings or bytes. From guido at python.org Sun Apr 30 18:36:54 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Apr 2006 09:36:54 -0700 Subject: [Python-Dev] Adding functools.decorator In-Reply-To: <e32kqa$bq6$1@sea.gmane.org> References: <4454C203.2060707@iinet.net.au> <e32kqa$bq6$1@sea.gmane.org> Message-ID: <ca471dc20604300936s2a1bdf0fof8395428230963d9@mail.gmail.com> On 4/30/06, Georg Brandl <g.brandl at gmx.net> wrote: > Nick Coghlan wrote: > > Collin Winters has done the work necessary to rename PEP 309's functional > > module to functools and posted the details to SF [1]. > > > > I'd like to take that patch, tweak it so the C module is built as _functools > > rather than functools, and then add a functools.py consisting of: > > I'm all for it. (You could integrate the C version of "decorator" from my SF > patch, but I think Python-only is enough). Stronger -- this should *not* be implemented in C. There's no performance need, and the C code is much harder to understand, check, and modify. I expect that at some point people will want to tweak what gets copied by _update_wrapper() -- e.g. some attributes may need to be deep-copied, or personalized, or skipped, etc. (Doesn't this already apply to __decorator__ and __decorates__? I can't prove to myself that these get set to the right things when several decorators are stacked on top of each other.) 
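To make the stacking question concrete, here is a tiny self-contained sketch (just an illustration with made-up helper names, not the patch itself) of how the metadata copying behaves when wrappers are stacked:

def copying_wrap(func):
    # The same four lines of boilerplate the proposal wants to factor out.
    def wrapper(*args, **kwds):
        return func(*args, **kwds)
    wrapper.__module__ = func.__module__
    wrapper.__name__ = func.__name__
    wrapper.__doc__ = func.__doc__
    wrapper.__dict__.update(func.__dict__)
    return wrapper

def naive_wrap(func):
    # No copying at all.
    def wrapper(*args, **kwds):
        return func(*args, **kwds)
    return wrapper

def greet():
    "Say hello."

inner = copying_wrap(greet)    # inner.__name__ == 'greet'
outer = copying_wrap(inner)    # still 'greet': each layer copies from the
                               # function it actually wrapped
broken = naive_wrap(inner)     # 'wrapper': the metadata is lost at this layer

Whether attributes like __decorator__ and __decorates__ should point one level down or all the way to the innermost function is exactly the kind of detail that only shows up once more than one decorator is involved.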
I'm curious if @decorator is the right name and the right API for this though? The name is overly wide (many things are decorators but should not be decorated with @decorator) and I wonder of a manual call to _update_wrapper() wouldn't be just as useful. (Perhaps with a simpler API -- I'm tempted to call YAGNI on the __decorator__ and __decorates__ attributes.) I think there are too many design options here to check this in without more discussion. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Sun Apr 30 18:53:57 2006 From: guido at python.org (Guido van Rossum) Date: Sun, 30 Apr 2006 09:53:57 -0700 Subject: [Python-Dev] More on contextlib - adding back a contextmanager decorator In-Reply-To: <4454BA62.5080704@iinet.net.au> References: <4454BA62.5080704@iinet.net.au> Message-ID: <ca471dc20604300953q112991baw698564024b0eec15@mail.gmail.com> On 4/30/06, Nick Coghlan <ncoghlan at iinet.net.au> wrote: > A few things from the pre-alpha2 context management terminology review have > had a chance to run around in the back of my head for a while now, and I'd > like to return to a topic Paul Moore brought up during that discussion. I believe the context API design has gotten totally out of hand. Regardless of the merits of the "with" approach to HTML generation (which I personally believe to be an abomination), I don't see why the standard library should support every possible use case with a custom-made decorator. Let the author of that tag library provide the decorator. I have a counter-proposal: let's drop __context__. Nearly all use cases have __context__ return self. In the remaining cases, would it really be such a big deal to let the user make an explicit call to some appropriately named method? The only example that I know of where __context__ doesn't return self is the decimal module. So the decimal users would have to type with mycontext.some_method() as ctx: # ctx is a clone of mycontext ctx.prec += 2 <BODY> The implementation of some_method() could be exactly what we currently have as the __context__ method on the decimal.Context object. Its return value is a decimal.WithStatementContext() instance, whose __enter__() method returns a clone of the original context object which is assigned to the variable in the with-statement (here 'ctx'). This even has an additional advantage -- some_method() could have keyword parameters to set the precision and various other context parameters, so we could write this: with mycontext.some_method(prec=mycontext.prec+2): <BODY> Note that we can drop the variable too now (unless we have another need to reference it). An API tweak for certain attributes that are often incremented or decremented could reduce writing: with mycontext.some_method(prec_incr=2): <BODY> -- --Guido van Rossum (home page: http://www.python.org/~guido/) From g.brandl at gmx.net Sun Apr 30 19:09:29 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 30 Apr 2006 19:09:29 +0200 Subject: [Python-Dev] Adding functools.decorator In-Reply-To: <ca471dc20604300936s2a1bdf0fof8395428230963d9@mail.gmail.com> References: <4454C203.2060707@iinet.net.au> <e32kqa$bq6$1@sea.gmane.org> <ca471dc20604300936s2a1bdf0fof8395428230963d9@mail.gmail.com> Message-ID: <e32r0a$s69$1@sea.gmane.org> Guido van Rossum wrote: > On 4/30/06, Georg Brandl <g.brandl at gmx.net> wrote: >> Nick Coghlan wrote: >> > Collin Winters has done the work necessary to rename PEP 309's functional >> > module to functools and posted the details to SF [1]. 
>> > >> > I'd like to take that patch, tweak it so the C module is built as _functools >> > rather than functools, and then add a functools.py consisting of: >> >> I'm all for it. (You could integrate the C version of "decorator" from my SF >> patch, but I think Python-only is enough). > > Stronger -- this should *not* be implemented in C. There's no > performance need, and the C code is much harder to understand, check, > and modify. +1. > I expect that at some point people will want to tweak what gets copied > by _update_wrapper() -- e.g. some attributes may need to be > deep-copied, or personalized, or skipped, etc. What exactly do you have in mind there? If someone wants to achieve this, she can write his own version of @decorator. One thing that would be nice to copy is the signature, but for that, like demonstrated by Simon Belak, exec trickery is necessary, and I think that's one step too far. The basic wish is just to let introspective tools, and perhaps APIs that rely on function's __name__ and __doc__ work flawlessly with decorators applied, nothing more. > (Doesn't this already > apply to __decorator__ and __decorates__? I can't prove to myself that > these get set to the right things when several decorators are stacked > on top of each other.) I don't think they're necessary. > I'm curious if @decorator is the right name and the right API for this > though? The name is overly wide (many things are decorators but should > not be decorated with @decorator) and I wonder of a manual call to > _update_wrapper() wouldn't be just as useful. Still, _update_wrapper would need to be defined somewhere, and it would have to be called in the body of the decorator. > (Perhaps with a simpler > API -- I'm tempted to call YAGNI on the __decorator__ and > __decorates__ attributes.) > > I think there are too many design options here to check this in > without more discussion. So let the discussion begin! Georg From fredrik at pythonware.com Sun Apr 30 19:15:08 2006 From: fredrik at pythonware.com (Fredrik Lundh) Date: Sun, 30 Apr 2006 19:15:08 +0200 Subject: [Python-Dev] More on contextlib - adding back a contextmanagerdecorator References: <4454BA62.5080704@iinet.net.au> <ca471dc20604300953q112991baw698564024b0eec15@mail.gmail.com> Message-ID: <e32rau$t50$1@sea.gmane.org> Guido van Rossum wrote: > I believe the context API design has gotten totally out of hand. > I have a counter-proposal: let's drop __context__. Heh. I was about to pull out the "if the implementation is hard to explain, it's a bad idea (and bad ideas shouldn't go into 2.X)" rule last week in response to the endless nick-phillip-paul "today I put in my brain the other way" threads... ;-) (but the design isn't really that hard to explain; it's just that ever- one seems to be missing that there are three objects involved, not two...) But if you think we can get rid of the "with statement context object" [1], I'm all for it. "explicit is better than implicit" etc. +1. 
</F> 1) http://docs.python.org/dev/ref/with.html From edloper at gradient.cis.upenn.edu Sun Apr 30 19:30:07 2006 From: edloper at gradient.cis.upenn.edu (Edward Loper) Date: Sun, 30 Apr 2006 13:30:07 -0400 Subject: [Python-Dev] [Python-3000] in-out parameters In-Reply-To: <20060430005356.3DA9B8B34E@xprdmxin.myway.com> References: <20060430005356.3DA9B8B34E@xprdmxin.myway.com> Message-ID: <4454F41F.9050108@gradient.cis.upenn.edu> Rudy Rudolph wrote: > 2) pass-by-reference: > def f(wrappedParam): > wrappedParam[0] += 5 # ugh > return "this is my result" > > # call it > x = 2 > result = f([x]) > # also ugly, but x is now 7 This example is broken; here's what you get when you run it: >>> def f(wrappedParam): ... wrappedParam[0] += 5 ... return "this is my result" ... >>> # call it ... x = 2 >>> result = f([x]) >>> x 2 You probably intended something more like: >>> x = [2] >>> result = f(x) >>> x[0] 7 (As for the actual topic, I'm personally -0 for adding in-out parameters to python.) -Edward From zpincus at stanford.edu Sun Apr 30 19:40:19 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sun, 30 Apr 2006 10:40:19 -0700 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <4453B025.3080100@acm.org> References: <4453B025.3080100@acm.org> Message-ID: <6ED61C59-D93B-4473-955E-EA0A0E5C79DB@stanford.edu> Some thoughts from a lurker, largely concerning syntax; discount as you wish. First: > Keyword-only arguments are not required to have a default value. > Since Python requires that all arguments be bound to a value, > and since the only way to bind a value to a keyword-only argument > is via keyword, such arguments are therefore 'required keyword' > arguments. Such arguments must be supplied by the caller, and > they must be supplied via keyword. So what would this look like? def foo(*args, kw1, kw2): That seems a bit odd, as my natural expectation wouldn't be to see kw1 ands kw2 as required, no-default keyword args, but as misplaced positional args. Perhaps this might be a little better? def foo(*args, kw1=, kw2=): I'm rather not sure. At least it makes it clear that kw1 and kw2 are keyword arguments, and that they have no default values. Though, I'm kind of neutral on the whole bit -- in my mind "keyword args" and "default-value args" are pretty conflated and I can't think of any compelling reasons why they shouldn't be. It's visually handy when looking at some code to see keywords and be able to say "ok, those are the optional args for changing the handling of the main args". I'm not sure where the big win with required keyword args is. For that matter, why not "default positional args"? I think everyone will agree that seems a bit odd, but it isn't too much odder than "required keyword args". (Not that I'm for the former! I'm just pointing out that if the latter is OK, there's no huge reason why the former wouldn't be, and that is in my mind a flaw.) Second: > def compare(a, b, *, key=None): This syntax seems a bit odd to me, as well. I always "understood" the *args syntax by analogy to globbing -- the asterisk means to "take all the rest", in some sense in both a shell glob and *args. In this syntax, the asterisk, though given a position in the comma- separated list, doesn't mean "take the rest and put it in this position." It means "stop taking things before this position", which is a bit odd, in terms of items *in* an argument list. I grant that it makes sense as a derivation from "*ignore"-type solutions, but as a standalone syntax it feels off. 
How about something like: def compare(a, b; key=None): The semicolon is sort of like a comma but more forceful; it ends a phrase. This seems like a logical (and easily comprehended by analogy) use here -- ending the "positional arguments" phrase. Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine On Apr 29, 2006, at 11:27 AM, Talin wrote: > PEP: 3102 > Title: Keyword-Only Arguments > Version: $Revision$ > Last-Modified: $Date$ > Author: Talin <talin at acm.org> > Status: Draft > Type: Standards > Content-Type: text/plain > Created: 22-Apr-2006 > Python-Version: 3.0 > Post-History: > > > Abstract > > This PEP proposes a change to the way that function arguments are > assigned to named parameter slots. In particular, it enables the > declaration of "keyword-only" arguments: arguments that can only > be supplied by keyword and which will never be automatically > filled in by a positional argument. > > > Rationale > > The current Python function-calling paradigm allows arguments to > be specified either by position or by keyword. An argument > can be > filled in either explicitly by name, or implicitly by position. > > There are often cases where it is desirable for a function to > take > a variable number of arguments. The Python language supports > this > using the 'varargs' syntax ('*name'), which specifies that any > 'left over' arguments be passed into the varargs parameter as a > tuple. > > One limitation on this is that currently, all of the regular > argument slots must be filled before the vararg slot can be. > > This is not always desirable. One can easily envision a function > which takes a variable number of arguments, but also takes one > or more 'options' in the form of keyword arguments. Currently, > the only way to do this is to define both a varargs argument, > and a 'keywords' argument (**kwargs), and then manually extract > the desired keywords from the dictionary. > > > Specification > > Syntactically, the proposed changes are fairly simple. The first > change is to allow regular arguments to appear after a varargs > argument: > > def sortwords(*wordlist, case_sensitive=False): > ... > > This function accepts any number of positional arguments, and it > also accepts a keyword option called 'case_sensitive'. This > option will never be filled in by a positional argument, but > must be explicitly specified by name. > > Keyword-only arguments are not required to have a default value. > Since Python requires that all arguments be bound to a value, > and since the only way to bind a value to a keyword-only argument > is via keyword, such arguments are therefore 'required keyword' > arguments. Such arguments must be supplied by the caller, and > they must be supplied via keyword. > > The second syntactical change is to allow the argument name to > be omitted for a varargs argument: > > def compare(a, b, *, key=None): > ... > > The reasoning behind this change is as follows. Imagine for a > moment a function which takes several positional arguments, as > well as a keyword argument: > > def compare(a, b, key=None): > ... > > Now, suppose you wanted to have 'key' be a keyword-only argument. > Under the above syntax, you could accomplish this by adding a > varargs argument immediately before the keyword argument: > > def compare(a, b, *ignore, key=None): > ... > > Unfortunately, the 'ignore' argument will also suck up any > erroneous positional arguments that may have been supplied by the > caller. 
Given that we'd prefer any unwanted arguments to > raise an > error, we could do this: > > def compare(a, b, *ignore, key=None): > if ignore: # If ignore is not empty > raise TypeError > > As a convenient shortcut, we can simply omit the 'ignore' name, > meaning 'don't allow any positional arguments beyond this point'. > > > Function Calling Behavior > > The previous section describes the difference between the old > behavior and the new. However, it is also useful to have a > description of the new behavior that stands by itself, without > reference to the previous model. So this next section will > attempt to provide such a description. > > When a function is called, the input arguments are assigned to > formal parameters as follows: > > - For each formal parameter, there is a slot which will be used > to contain the value of the argument assigned to that > parameter. > > - Slots which have had values assigned to them are marked as > 'filled'. Slots which have no value assigned to them yet are > considered 'empty'. > > - Initially, all slots are marked as empty. > > - Positional arguments are assigned first, followed by keyword > arguments. > > - For each positional argument: > > o Attempt to bind the argument to the first unfilled > parameter slot. If the slot is not a vararg slot, then > mark the slot as 'filled'. > > o If the next unfilled slot is a vararg slot, and it does > not have a name, then it is an error. > > o Otherwise, if the next unfilled slot is a vararg slot then > all remaining non-keyword arguments are placed into the > vararg slot. > > - For each keyword argument: > > o If there is a parameter with the same name as the keyword, > then the argument value is assigned to that parameter > slot. > However, if the parameter slot is already filled, then > that > is an error. > > o Otherwise, if there is a 'keyword dictionary' argument, > the argument is added to the dictionary using the keyword > name as the dictionary key, unless there is already an > entry with that key, in which case it is an error. > > o Otherwise, if there is no keyword dictionary, and no > matching named parameter, then it is an error. > > - Finally: > > o If the vararg slot is not yet filled, assign an empty > tuple > as its value. > > o For each remaining empty slot: if there is a default value > for that slot, then fill the slot with the default value. > If there is no default value, then it is an error. > > In accordance with the current Python implementation, any errors > encountered will be signaled by raising TypeError. (If you want > something different, that's a subject for a different PEP.) > > > Backwards Compatibility > > The function calling behavior specified in this PEP is a superset > of the existing behavior - that is, it is expected that any > existing programs will continue to work. > > > Copyright > > This document has been placed in the public domain. 
> > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ > zpincus%40stanford.edu From g.brandl at gmx.net Sun Apr 30 19:47:35 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 30 Apr 2006 19:47:35 +0200 Subject: [Python-Dev] PEP 3102: Keyword-only arguments In-Reply-To: <6ED61C59-D93B-4473-955E-EA0A0E5C79DB@stanford.edu> References: <4453B025.3080100@acm.org> <6ED61C59-D93B-4473-955E-EA0A0E5C79DB@stanford.edu> Message-ID: <e32t7n$2mk$1@sea.gmane.org> Zachary Pincus wrote: > Some thoughts from a lurker, largely concerning syntax; discount as > you wish. > > First: >> Keyword-only arguments are not required to have a default value. >> Since Python requires that all arguments be bound to a value, >> and since the only way to bind a value to a keyword-only argument >> is via keyword, such arguments are therefore 'required keyword' >> arguments. Such arguments must be supplied by the caller, and >> they must be supplied via keyword. > > So what would this look like? > def foo(*args, kw1, kw2): > > That seems a bit odd, as my natural expectation wouldn't be to see > kw1 ands kw2 as required, no-default keyword args, but as misplaced > positional args. > > Perhaps this might be a little better? > def foo(*args, kw1=, kw2=): > I'm rather not sure. At least it makes it clear that kw1 and kw2 are > keyword arguments, and that they have no default values. Interesting, but perhaps a little too fancy. > Though, I'm kind of neutral on the whole bit -- in my mind "keyword > args" and "default-value args" are pretty conflated and I can't think > of any compelling reasons why they shouldn't be. It's visually handy > when looking at some code to see keywords and be able to say "ok, > those are the optional args for changing the handling of the main > args". I'm not sure where the big win with required keyword args is. Personally I think that required keyword args will not be the most used feature brought by this PEP. More important is that you can give a function as many positional args as you want, all sucked up by the *args, and still supply a "controlling" keyword arg. > For that matter, why not "default positional args"? I think everyone > will agree that seems a bit odd, but it isn't too much odder than > "required keyword args". (Not that I'm for the former! I'm just > pointing out that if the latter is OK, there's no huge reason why the > former wouldn't be, and that is in my mind a flaw.) > > Second: >> def compare(a, b, *, key=None): > This syntax seems a bit odd to me, as well. I always "understood" the > *args syntax by analogy to globbing -- the asterisk means to "take > all the rest", in some sense in both a shell glob and *args. > > In this syntax, the asterisk, though given a position in the comma- > separated list, doesn't mean "take the rest and put it in this > position." It means "stop taking things before this position", which > is a bit odd, in terms of items *in* an argument list. It continues to mean "take the rest", but because there is no name to put it into, it will raise an exception if a rest is present. For me, it seems consistent. > I grant that it makes sense as a derivation from "*ignore"-type > solutions, but as a standalone syntax it feels off. 
How about > something like: > def compare(a, b; key=None): > > The semicolon is sort of like a comma but more forceful; it ends a > phrase. This seems like a logical (and easily comprehended by > analogy) use here -- ending the "positional arguments" phrase. Guido already ruled it out because it's an "end of statement" marker and nothing else. Georg From martin at v.loewis.de Sun Apr 30 19:49:58 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 19:49:58 +0200 Subject: [Python-Dev] methods on the bytes object In-Reply-To: <ca471dc20604300822s42d33108nf40b3a669d2fcf02@mail.gmail.com> References: <20060429055551.672C.JCARLSON@uci.edu> <ca471dc20604291952s2d0050bcl8c446d37f577b7ef@mail.gmail.com> <20060430023808.673B.JCARLSON@uci.edu> <44548C17.9050807@v.loewis.de> <ca471dc20604300822s42d33108nf40b3a669d2fcf02@mail.gmail.com> Message-ID: <4454F8C6.1070401@v.loewis.de> Guido van Rossum wrote: > Well, yes, in most cases, but while attempting to write an I/O library > I already had the urge to collect "chunks" I've read in a list and > join them later, instead of concatenating repeatedly. I guess I should > suppress this urge, and plan to optimize extending a bytes arrays > instead, along the lines of how we optimize lists. That is certainly a frequent, although degenerated, use case of string.join (i.e. with an empty separator). Instead of introducing bytes.join, I think we should reconsider this: py> sum(["ab","cd"],"") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sum() can't sum strings [use ''.join(seq) instead] Why is it that sum requires numbers? > Still, I expect that having a bunch of string-ish methods on bytes > arrays would be convenient for certain types of data handling. Of > course, only those methods that don't care about character types would > be added, but that's a long list: startswith, endswith, index, rindex, > find, rfind, split, rsplit, join, count, replace, translate. The problem I see with these is that people will use them for text-ish operations, e.g. data.startswith("text".encode("ascii")) While that would be "correct", I see two problems: a) people will complain that they have to use an explicit .encode call, and demand that this should "just work", and b) people will refuse to rewrite their algorithms for character strings (which they likely should in most applications of, say, .startswith), and then complain that the bytes type is soooo limited, and they really want a full byte string type back. Regards, Martin From zpincus at stanford.edu Sun Apr 30 19:52:20 2006 From: zpincus at stanford.edu (Zachary Pincus) Date: Sun, 30 Apr 2006 10:52:20 -0700 Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: <4453AF51.6050605@acm.org> References: <4453AF51.6050605@acm.org> Message-ID: <2FCC26A1-D2C2-4A4A-9474-2908B684F699@stanford.edu> I'm not sure about introducing a special syntax for accessing dictionary entries, array elements and/or object attributes *within a string formatter*... much less an overloaded one that differs from how these elements are accessed in "regular python". > Compound names are a sequence of simple names seperated by > periods: > > "My name is {0.name} :-\{\}".format(dict(name='Fred')) > > Compound names can be used to access specific dictionary entries, > array elements, or object attributes. In the above example, the > '{0.name}' field refers to the dictionary entry 'name' within > positional argument 0. 
Barring ambiguity about whether .name would mean the "name" attribute or the "name" dictionary entry if both were defined, I'm not sure I really see the point. How is: d = {last:'foo', first:'bar'} "My last name is {0.last}, my first name is {0.first}.".format(d) really that big a win over: d = {last:'foo', first:'bar'} "My last name is {0}, my first name is {1}.".format(d['last'], d ['first']) Plus, the in-string syntax is limited -- e.g. what if I want to call a function on an attribute? Unless you want to re-implement all python syntax within the formatters, someone will always be able to level these sort of complaints. Better, IMO, to provide none of that than a restricted subset of the language -- especially if the syntax looks and works differently from real python. Zach Pincus Program in Biomedical Informatics and Department of Biochemistry Stanford University School of Medicine On Apr 29, 2006, at 11:24 AM, Talin wrote: > PEP: 3101 > Title: Advanced String Formatting > Version: $Revision$ > Last-Modified: $Date$ > Author: Talin <talin at acm.org> > Status: Draft > Type: Standards > Content-Type: text/plain > Created: 16-Apr-2006 > Python-Version: 3.0 > Post-History: > > > Abstract > > This PEP proposes a new system for built-in string formatting > operations, intended as a replacement for the existing '%' string > formatting operator. > > > Rationale > > Python currently provides two methods of string interpolation: > > - The '%' operator for strings. [1] > > - The string.Template module. [2] > > The scope of this PEP will be restricted to proposals for > built-in > string formatting operations (in other words, methods of the > built-in string type). > > The '%' operator is primarily limited by the fact that it is a > binary operator, and therefore can take at most two arguments. > One of those arguments is already dedicated to the format string, > leaving all other variables to be squeezed into the remaining > argument. The current practice is to use either a dictionary > or a > tuple as the second argument, but as many people have commented > [3], this lacks flexibility. The "all or nothing" approach > (meaning that one must choose between only positional arguments, > or only named arguments) is felt to be overly constraining. > > While there is some overlap between this proposal and > string.Template, it is felt that each serves a distinct need, > and that one does not obviate the other. In any case, > string.Template will not be discussed here. > > > Specification > > The specification will consist of 4 parts: > > - Specification of a set of methods to be added to the built-in > string class. > > - Specification of a new syntax for format strings. > > - Specification of a new set of class methods to control the > formatting and conversion of objects. > > - Specification of an API for user-defined formatting classes. > > > String Methods > > The build-in string class will gain a new method, 'format', > which takes takes an arbitrary number of positional and keyword > arguments: > > "The story of {0}, {1}, and {c}".format(a, b, c=d) > > Within a format string, each positional argument is identified > with a number, starting from zero, so in the above example, > 'a' is > argument 0 and 'b' is argument 1. Each keyword argument is > identified by its keyword name, so in the above example, 'c' is > used to refer to the third argument. > > The result of the format call is an object of the same type > (string or unicode) as the format string. 
> > > Format Strings > > Brace characters ('curly braces') are used to indicate a > replacement field within the string: > > "My name is {0}".format('Fred') > > The result of this is the string: > > "My name is Fred" > > Braces can be escaped using a backslash: > > "My name is {0} :-\{\}".format('Fred') > > Which would produce: > > "My name is Fred :-{}" > > The element within the braces is called a 'field'. Fields > consist > of a name, which can either be simple or compound, and an > optional > 'conversion specifier'. > > Simple names are either names or numbers. If numbers, they must > be valid decimal numbers; if names, they must be valid Python > identifiers. A number is used to identify a positional argument, > while a name is used to identify a keyword argument. > > Compound names are a sequence of simple names seperated by > periods: > > "My name is {0.name} :-\{\}".format(dict(name='Fred')) > > Compound names can be used to access specific dictionary entries, > array elements, or object attributes. In the above example, the > '{0.name}' field refers to the dictionary entry 'name' within > positional argument 0. > > Each field can also specify an optional set of 'conversion > specifiers'. Conversion specifiers follow the field name, with a > colon (':') character separating the two: > > "My name is {0:8}".format('Fred') > > The meaning and syntax of the conversion specifiers depends on > the > type of object that is being formatted, however many of the > built-in types will recognize a standard set of conversion > specifiers. > > The conversion specifier consists of a sequence of zero or more > characters, each of which can consist of any printable character > except for a non-escaped '}'. The format() method does not > attempt to intepret the conversion specifiers in any way; it > merely passes all of the characters between the first colon ':' > and the matching right brace ('}') to the various underlying > formatters (described later.) > > > Standard Conversion Specifiers > > For most built-in types, the conversion specifiers will be the > same or similar to the existing conversion specifiers used with > the '%' operator. Thus, instead of '%02.2x", you will say > '{0:2.2x}'. > > There are a few differences however: > > - The trailing letter is optional - you don't need to say '2.2d', > you can instead just say '2.2'. If the letter is omitted, the > value will be converted into its 'natural' form (that is, the > form that it take if str() or unicode() were called on it) > subject to the field length and precision specifiers (if > supplied). > > - Variable field width specifiers use a nested version of the {} > syntax, allowing the width specifier to be either a positional > or keyword argument: > > "{0:{1}.{2}d}".format(a, b, c) > > (Note: It might be easier to parse if these used a different > type of delimiter, such as parens - avoiding the need to create > a regex that handles the recursive case.) > > - The support for length modifiers (which are ignored by Python > anyway) is dropped. > > For non-built-in types, the conversion specifiers will be > specific > to that type. An example is the 'datetime' class, whose > conversion specifiers are identical to the arguments to the > strftime() function: > > "Today is: {0:%x}".format(datetime.now()) > > > Controlling Formatting > > A class that wishes to implement a custom interpretation of its > conversion specifiers can implement a __format__ method: > > class AST: > def __format__(self, specifiers): > ... 
> > The 'specifiers' argument will be either a string object or a > unicode object, depending on the type of the original format > string. The __format__ method should test the type of the > specifiers parameter to determine whether to return a string or > unicode object. It is the responsibility of the __format__ > method > to return an object of the proper type. > > string.format() will format each field using the following steps: > > 1) See if the value to be formatted has a __format__ method. If > it does, then call it. > > 2) Otherwise, check the internal formatter within string.format > that contains knowledge of certain builtin types. > > 3) Otherwise, call str() or unicode() as appropriate. > > > User-Defined Formatting Classes > > The code that interprets format strings can be called explicitly > from user code. This allows the creation of custom formatter > classes that can override the normal formatting rules. > > The string and unicode classes will have a class method called > 'cformat' that does all the actual work of formatting; The > format() method is just a wrapper that calls cformat. > > The parameters to the cformat function are: > > -- The format string (or unicode; the same function handles > both.) > -- A field format hook (see below) > -- A tuple containing the positional arguments > -- A dict containing the keyword arguments > > The cformat function will parse all of the fields in the format > string, and return a new string (or unicode) with all of the > fields replaced with their formatted values. > > For each field, the cformat function will attempt to call the > field format hook with the following arguments: > > field_hook(value, conversion, buffer) > > The 'value' field corresponds to the value being formatted, which > was retrieved from the arguments using the field name. (The > field_hook has no control over the selection of values, only > how they are formatted.) > > The 'conversion' argument is the conversion spec part of the > field, which will be either a string or unicode object, depending > on the type of the original format string. > > The 'buffer' argument is a Python array object, either a byte > array or unicode character array. The buffer object will contain > the partially constructed string; the field hook is free to > modify > the contents of this buffer if needed. > > The field_hook will be called once per field. The field_hook may > take one of two actions: > > 1) Return False, indicating that the field_hook will not > process this field and the default formatting should be > used. This decision should be based on the type of the > value object, and the contents of the conversion string. > > 2) Append the formatted field to the buffer, and return True. > > > Alternate Syntax > > Naturally, one of the most contentious issues is the syntax of > the > format strings, and in particular the markup conventions used to > indicate fields. > > Rather than attempting to exhaustively list all of the various > proposals, I will cover the ones that are most widely used > already. > > - Shell variable syntax: $name and $(name) (or in some variants, > ${name}). This is probably the oldest convention out there, > and > is used by Perl and many others. When used without the braces, > the length of the variable is determined by lexically scanning > until an invalid character is found. 
> > This scheme is generally used in cases where interpolation is > implicit - that is, in environments where any string can > contain > interpolation variables, and no special subsitution function > need be invoked. In such cases, it is important to prevent the > interpolation behavior from occuring accidentally, so the '$' > (which is otherwise a relatively uncommonly-used character) is > used to signal when the behavior should occur. > > It is the author's opinion, however, that in cases where the > formatting is explicitly invoked, that less care needs to be > taken to prevent accidental interpolation, in which case a > lighter and less unwieldy syntax can be used. > > - Printf and its cousins ('%'), including variations that add a > field index, so that fields can be interpolated out of order. > > - Other bracket-only variations. Various MUDs (Multi-User > Dungeons) such as MUSH have used brackets (e.g. [name]) to do > string interpolation. The Microsoft .Net libraries uses braces > ({}), and a syntax which is very similar to the one in this > proposal, although the syntax for conversion specifiers is > quite > different. [4] > > - Backquoting. This method has the benefit of minimal > syntactical > clutter, however it lacks many of the benefits of a function > call syntax (such as complex expression arguments, custom > formatters, etc.). > > - Other variations include Ruby's #{}, PHP's {$name}, and so > on. > > > Sample Implementation > > A rought prototype of the underlying 'cformat' function has been > coded in Python, however it needs much refinement before being > submitted. > > > Backwards Compatibility > > Backwards compatibility can be maintained by leaving the existing > mechanisms in place. The new system does not collide with any of > the method names of the existing string formatting techniques, so > both systems can co-exist until it comes time to deprecate the > older system. > > > References > > [1] Python Library Reference - String formating operations > http://docs.python.org/lib/typesseq-strings.html > > [2] Python Library References - Template strings > http://docs.python.org/lib/node109.html > > [3] [Python-3000] String formating operations in python 3k > http://mail.python.org/pipermail/python-3000/2006-April/ > 000285.html > > [4] Composite Formatting - [.Net Framework Developer's Guide] > > http://msdn.microsoft.com/library/en-us/cpguide/html/ > cpconcompositeformatting.asp?frame=true > > > Copyright > > This document has been placed in the public domain. > > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ > zpincus%40stanford.edu From janssen at parc.com Sun Apr 30 19:53:05 2006 From: janssen at parc.com (Bill Janssen) Date: Sun, 30 Apr 2006 10:53:05 PDT Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: Your message of "Sat, 29 Apr 2006 11:24:17 PDT." <4453AF51.6050605@acm.org> Message-ID: <06Apr30.105308pdt."58641"@synergy1.parc.xerox.com> > For most built-in types, the conversion specifiers will be the > same or similar to the existing conversion specifiers used with > the '%' operator. Thus, instead of '%02.2x", you will say > '{0:2.2x}'. Don't you mean, "{0:02.2x}"? 
Bill From g.brandl at gmx.net Sun Apr 30 20:00:49 2006 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 30 Apr 2006 20:00:49 +0200 Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: <2FCC26A1-D2C2-4A4A-9474-2908B684F699@stanford.edu> References: <4453AF51.6050605@acm.org> <2FCC26A1-D2C2-4A4A-9474-2908B684F699@stanford.edu> Message-ID: <e32u0h$5dn$1@sea.gmane.org> Zachary Pincus wrote: > I'm not sure about introducing a special syntax for accessing > dictionary entries, array elements and/or object attributes *within a > string formatter*... much less an overloaded one that differs from > how these elements are accessed in "regular python". Yes, I also think that's a bad idea. >> Compound names are a sequence of simple names seperated by >> periods: >> >> "My name is {0.name} :-\{\}".format(dict(name='Fred')) And these escapes are also a bad idea. As it is, the backslashes stay in the string, but it is not obvious to newcomers whether \{ is a general string escape. The "right" way to do it would be "\\{", but that's becoming rather longly. Why not use something like "{{"? >> Compound names can be used to access specific dictionary entries, >> array elements, or object attributes. In the above example, the >> '{0.name}' field refers to the dictionary entry 'name' within >> positional argument 0. > > Barring ambiguity about whether .name would mean the "name" attribute > or the "name" dictionary entry if both were defined, I'm not sure I > really see the point. How is: > d = {last:'foo', first:'bar'} > "My last name is {0.last}, my first name is {0.first}.".format(d) > > really that big a win over: > d = {last:'foo', first:'bar'} > "My last name is {0}, my first name is {1}.".format(d['last'], d > ['first']) Or even: d = {last:'foo', first:'bar'} "My last name is {last}, my first name is {first}.".format(**d) Georg From unknown_kev_cat at hotmail.com Sun Apr 30 20:05:45 2006 From: unknown_kev_cat at hotmail.com (Joe Smith) Date: Sun, 30 Apr 2006 14:05:45 -0400 Subject: [Python-Dev] PEP 3102: Keyword-only arguments References: <4453B025.3080100@acm.org> Message-ID: <e32u9t$676$1@sea.gmane.org> "Talin" <talin at acm.org> wrote in message news:4453B025.3080100 at acm.org... > Abstract > > This PEP proposes a change to the way that function arguments are > assigned to named parameter slots. In particular, it enables the > declaration of "keyword-only" arguments: arguments that can only > be supplied by keyword and which will never be automatically > filled in by a positional argument. > > > Rationale > > The current Python function-calling paradigm allows arguments to > be specified either by position or by keyword. An argument can be > filled in either explicitly by name, or implicitly by position. > > There are often cases where it is desirable for a function to take > a variable number of arguments. The Python language supports this > using the 'varargs' syntax ('*name'), which specifies that any > 'left over' arguments be passed into the varargs parameter as a > tuple. > > One limitation on this is that currently, all of the regular > argument slots must be filled before the vararg slot can be. > > This is not always desirable. One can easily envision a function > which takes a variable number of arguments, but also takes one > or more 'options' in the form of keyword arguments. Currently, > the only way to do this is to define both a varargs argument, > and a 'keywords' argument (**kwargs), and then manually extract > the desired keywords from the dictionary. 
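To make the quoted rationale concrete, here is a small sketch (the function name and the 'sep' option are made up for illustration): the first version is the workaround described above, pulling its options out of **kwargs by hand, while the commented-out second version shows how the same function would read if the proposed keyword-only syntax were accepted.

    # Current workaround: accept **opts and extract the keywords manually.
    def join_fields(*fields, **opts):
        sep = opts.pop('sep', ', ')
        if opts:
            raise TypeError('unexpected keyword arguments: %r' % opts.keys())
        return sep.join(str(f) for f in fields)

    print join_fields(1, 2, 3, sep=' | ')      # -> 1 | 2 | 3

    # Under the proposal, 'sep' would become a true keyword-only parameter:
    #
    #     def join_fields(*fields, sep=', '):
    #         return sep.join(str(f) for f in fields)

A misspelled option only raises a TypeError because of the manual check in the first version; under the proposal the interpreter would reject it for free.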
First of all, let me remark that the current Python semantics almost perfectly match those of VB6. Sure, there are a few syntax differences, but overall they are approximately equivalent. This is actually a good thing. The one thing that VB6 allows that Python does not is optional arguments without a default value. However, what really happens is that the compiler assigns a default value, so it really is only a tiny difference. The main proposal here adds an additional feature, and thus will break the correspondence with VB6, but not in a negative way, as VB6 could benefit from the same extension. So I would be +1. However, I'm not sure what the use case is for keyword-only arguments on functions that do *not* accept a variable number of arguments. Could you please provide an example use case? From jcarlson at uci.edu Sun Apr 30 20:30:05 2006 From: jcarlson at uci.edu (Josiah Carlson) Date: Sun, 30 Apr 2006 11:30:05 -0700 Subject: [Python-Dev] methods on the bytes object In-Reply-To: <44548C17.9050807@v.loewis.de> References: <20060430023808.673B.JCARLSON@uci.edu> <44548C17.9050807@v.loewis.de> Message-ID: <20060430111344.673E.JCARLSON@uci.edu> "Martin v. Löwis" <martin at v.loewis.de> wrote: > Josiah Carlson wrote: > > Specifically in the case of bytes.join(), the current common use-case of > > <literal>.join(...) would become something similar to > > bytes(<literal>).join(...), unless bytes objects got a syntax... Or > > maybe I'm missing something? > > I think what you are missing is that algorithms that currently operate > on byte strings should be reformulated to operate on character strings, > not reformulated to operate on bytes objects. By "character strings" can I assume you mean unicode strings which contain data, and not some new "character string" type? I know I must have missed some conversation. I was under the impression that in Py3k: Python 1.x and 2.x str -> mutable bytes object Python 2.x unicode -> str I was also under the impression that str.encode(...) -> bytes, bytes.decode(...) -> str, and that there would be some magical argument to pass to the file or open open(fn, 'rb', magical_parameter).read() -> bytes. I mention this because I do binary data handling, some ''.join(...) for IO buffers as Guido mentioned (because it is the fastest string concatenation available in Python 2.x), and from this particular conversation, it seems as though Python 3.x is going to lose some expressiveness and power. - Josiah From martin at v.loewis.de Sun Apr 30 20:52:02 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 20:52:02 +0200 Subject: [Python-Dev] methods on the bytes object In-Reply-To: <20060430111344.673E.JCARLSON@uci.edu> References: <20060430023808.673B.JCARLSON@uci.edu> <44548C17.9050807@v.loewis.de> <20060430111344.673E.JCARLSON@uci.edu> Message-ID: <44550752.1050907@v.loewis.de> Josiah Carlson wrote: >> I think what you are missing is that algorithms that currently operate >> on byte strings should be reformulated to operate on character strings, >> not reformulated to operate on bytes objects. > > By "character strings" can I assume you mean unicode strings which > contain data, and not some new "character string" type? I mean unicode strings, period. I can't imagine what "unicode strings which do not contain data" could be. > I know I must > have missed some conversation. I was under the impression that in Py3k: > > Python 1.x and 2.x str -> mutable bytes object No.
Python 1.x and 2.x str -> str, Python 2.x unicode -> str In addition, a bytes type is added, so that Python 1.x and 2.x str -> bytes The problem is that the current string type is used both to represent bytes and characters. Current applications of str need to be studied, and converted appropriately, depending on whether they use "str-as-bytes" or "str-as-characters". The "default", in some sense of that word, is that str applications are assumed to operate on character strings; this is achieved by making string literals objects of the character string type. > I was also under the impression that str.encode(...) -> bytes, > bytes.decode(...) -> str Correct. > and that there would be some magical argument > to pass to the file or open open(fn, 'rb', magical_parameter).read() -> > bytes. I think the precise details of that are still unclear. But yes, the plan is to have two file modes: one that returns character strings (type 'str') and one that returns type 'bytes'. > I mention this because I do binary data handling, some ''.join(...) for > IO buffers as Guido mentioned (because it is the fastest string > concatenation available in Python 2.x), and from this particular > conversation, it seems as though Python 3.x is going to lose > some expressiveness and power. You certainly need a "concatenate list of bytes into a single bytes". Apparently, Guido assumes that this can be done through bytes().join(...); I personally feel that this is over-generalization: if the only practical application of .join is the empty bytes object as separator, I think the method should be omitted. Perhaps bytes(...) or bytes.join(...) could work? Regards, Martin From martin at v.loewis.de Sun Apr 30 20:55:09 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 20:55:09 +0200 Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: <20060430131656.GA13868@panix.com> References: <4453AF51.6050605@acm.org> <20060430131656.GA13868@panix.com> Message-ID: <4455080D.7090405@v.loewis.de> Aahz wrote: > First of all, I recommend that you post this to comp.lang.python. This > is the kind of PEP where wide community involvement is essential to > success; be prepared for massive revision. Actually, *all* PEPs should be posted to c.l.p at some point; the PEP author is responsible for collecting all feedback, and either updating the specification, or at least, summarizing the discussion and the open issues (so that the same argument isn't made over and over again). Regards, Martin From pje at telecommunity.com Sun Apr 30 20:58:56 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 30 Apr 2006 14:58:56 -0400 Subject: [Python-Dev] More on contextlib - adding back a contextmanager decorator In-Reply-To: <ca471dc20604300953q112991baw698564024b0eec15@mail.gmail.co m> References: <4454BA62.5080704@iinet.net.au> <4454BA62.5080704@iinet.net.au> Message-ID: <5.1.1.6.0.20060430144919.01e6bb18@mail.telecommunity.com> At 09:53 AM 4/30/2006 -0700, Guido van Rossum wrote: >I have a counter-proposal: let's drop __context__. Nearly all use >cases have __context__ return self. In the remaining cases, would it >really be such a big deal to let the user make an explicit call to >some appropriately named method? The only example that I know of where >__context__ doesn't return self is the decimal module. 
So the decimal >users would have to type > > with mycontext.some_method() as ctx: # ctx is a clone of mycontext > ctx.prec += 2 > <BODY> > >The implementation of some_method() could be exactly what we currently >have as the __context__ method on the decimal.Context object. Its >return value is a decimal.WithStatementContext() instance, whose >__enter__() method returns a clone of the original context object >which is assigned to the variable in the with-statement (here 'ctx'). > >This even has an additional advantage -- some_method() could have >keyword parameters to set the precision and various other context >parameters, so we could write this: > > with mycontext.some_method(prec=mycontext.prec+2): > <BODY> > >Note that we can drop the variable too now (unless we have another >need to reference it). An API tweak for certain attributes that are >often incremented or decremented could reduce writing: > > with mycontext.some_method(prec_incr=2): > <BODY> But what's an appropriate name for some_method? Given that documentation is the sore spot that keeps us circling around this point, doesn't this just push the problem to finding a name to use in place of __context__? And not only for this use case, but for others? After all, for any library that has a notion of "the current X", it seems reasonable to want to be able to say "with some_X" to mean "use some_X as the current X for this block". And it thus seems to me that people will want to have something like: def using(obj): if hasattr(obj,'__context__'): obj = obj.__context__() return obj so they can do "with using(some_X)", because "with some_X.using()" or "with some_X.as_current()" is awkward. If you can solve the naming issue for these use cases (and I notice you punted on that issue by calling it "some_method"), then +1 on removing __context__. Otherwise, I'm -0; we're just fixing one documentation/explanation problem (that only people writing contexts will care about) by creating others (that will affect the people *using* contexts too). From pje at telecommunity.com Sun Apr 30 21:00:15 2006 From: pje at telecommunity.com (Phillip J. Eby) Date: Sun, 30 Apr 2006 15:00:15 -0400 Subject: [Python-Dev] Problem with inspect and PEP 302 In-Reply-To: <e322l2$t1f$1@sea.gmane.org> Message-ID: <5.1.1.6.0.20060430145924.01e79268@mail.telecommunity.com> At 12:13 PM 4/30/2006 +0200, Georg Brandl wrote: >Recently, the inspect module was updated to conform with PEP 302. > >Now this is broken: > > >>> import inspect > >>> inspect.stack() > >The traceback shows clearly what's going on. However, I don't know how >to resolve the problem. The problem as that '<string>' and '<stdin>' filenames were producing an infinite regress. I've checked in a fix. From martin at v.loewis.de Sun Apr 30 21:47:45 2006 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 30 Apr 2006 21:47:45 +0200 Subject: [Python-Dev] Tkinter lockups. 
In-Reply-To: <20060424011815.GB12580@unpythonic.net> References: <9e804ac0604231436q491d7289m97bb165dc42a5de@mail.gmail.com> <20060424011815.GB12580@unpythonic.net> Message-ID: <44551461.40603@v.loewis.de> Jeff Epler wrote: > However, on this system, I couldn't recreate the problem you reported > with either the "using _tkinter directly" instructions, or using this > "C" test program: > > #include <tcl.h> > #include <tk.h> > > int main(void) { > Tcl_Interp *trp; > unsetenv("DISPLAY"); > trp = Tcl_CreateInterp(); > printf("%d\n", Tk_Init(trp)); > printf("%d\n", Tk_Init(trp)); > return 0; > } The problem only occurs when Tcl and Tk were compiled with --enable-threads, and it occurs because Tk fails to unlock a mutex in a few error cases. The patch below fixes the problem. I'll report it to the Tcl people, and see whether I can work around in _tkinter. Regards, Martin -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: tk.diff Url: http://mail.python.org/pipermail/python-dev/attachments/20060430/55b272b8/attachment.diff From me+python-dev at modelnine.org Sun Apr 30 21:33:15 2006 From: me+python-dev at modelnine.org (Heiko Wundram) Date: Sun, 30 Apr 2006 21:33:15 +0200 Subject: [Python-Dev] socket module recvmsg/sendmsg Message-ID: <200604302133.15205.me+python-dev@modelnine.org> Hi all! I've implemented recvmsg and sendmsg for the socket module in my private Python tree for communication between two forked processes, which are essentially wrappers for proper handling of SCM_RIGHTS and SCM_CREDENTIALS Unix-Domain-Socket messages (which are the two types of messages that are defined on Linux). The main reason I need these two primitives is that I require (more or less transparent) file/socket descriptor exchange between two forked processes, where one process accepts a socket, and delegates processing of the socket connection to another process of a set of processes; this is much like a ForkingTCPServer, but with the Handler-process prestarted. As connection to the Unix-Domain-Socket is openly available, the receiving process needs to check the identity of the first process; this is done using a getsockopt(SO_PEERCRED) call, which is also handled more specifically by my socket extension to return a socket._ucred-type structure, which wraps the pid/uid/gid-structure returned by the corresponding getsockopt call, and also the socket message (SCM_CREDENTIALS) which passes or sets this information for the remote process. I'd love to see these two socket message primitives (of which the first, SCM_RIGHTS, is available on pretty much any Unix derivative) included in a Python distribution at some point in time, and as I've not got the time to push for an inclusion in the tree (and even less time to work on other Python patches myself) at the moment, I thought that I might just post here so that someone interested might pick up the work I've done so far and check the implementation for bugs, and at some stage these two functions might actually find their way into the Python core. Anyway, my private Python tree (which has some other patches which aren't of general interest, I'd think) is available at: http://dev.modelnine.org/hg/python and I can, if anyone is interested, post a current diff of socketmodule.* against 2.4.3 to the Python bug tracker at sourceforge. I did that some time ago (about half a year) when socket-passing code wasn't completely functioning yet, but at least at that point there didn't seem much interest in the patch. 
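For readers who have not used SCM_RIGHTS before, a minimal sketch of the descriptor-passing pattern being described follows; the sendmsg/recvmsg signatures used here are those of the interface that later ended up in the standard library rather than of this particular patch, so treat the details (helper names, one-byte payload, single descriptor) as illustrative only.

    import array
    import socket

    def send_fd(sock, fd):
        # One SCM_RIGHTS ancillary message carrying a single descriptor;
        # the one-byte payload exists only because some data must be sent.
        sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                               array.array("i", [fd]))])

    def recv_fd(sock):
        fds = array.array("i")
        msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
        for level, ctype, data in ancdata:
            if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
                # Keep only whole descriptors when unpacking.
                fds.frombytes(data[:fds.itemsize])
        return fds[0]

With a connected AF_UNIX socket pair, send_fd() in the accepting process and recv_fd() in the prestarted worker hand an open descriptor across the process boundary, which is the delegation scheme described above.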
The patch should apply pretty cleanly against the current HEAD too, at least it did the last time I checked. I'll add a small testsuite for both functions to my tree some time tomorrow. --- Heiko. From talin at acm.org Sun Apr 30 21:58:52 2006 From: talin at acm.org (Talin) Date: Sun, 30 Apr 2006 19:58:52 +0000 (UTC) Subject: [Python-Dev] PEP 3102: Keyword-only arguments References: <4453B025.3080100@acm.org> <d11dcfba0604291850q7b1eeac3y57b6793dd038ceeb@mail.gmail.com> <9e804ac0604300159i35a80414r85a7b2800b4ea34b@mail.gmail.com> Message-ID: <loom.20060430T215120-525@post.gmane.org> Thomas Wouters <thomas <at> python.org> writes: > Pfft, implementation is easy. I have the impression Talin wants to implement it himself, but even if he doesn't, I'm sure I'll have a free week somewhere in the next year and a half in which I can implement it :) It's not that hard a problem, it just requires a lot of reading of the AST and function-call code (if you haven't read it already.) If someone wants to implement this, feel free - I have no particular feelings of "ownership" over this idea. If someone can do it better than I can, then that's the implementation that should be chosen. One suggestion I would have would be to implement the two parts of the PEP (keyword-only arguments vs. the 'naked star' syntax) as two separate patches; While there seems to be a relatively wide-spread support for the former, the latter is still somewhat controversial and will require further discussion. -- Talin From talin at acm.org Sun Apr 30 22:14:42 2006 From: talin at acm.org (Talin) Date: Sun, 30 Apr 2006 20:14:42 +0000 (UTC) Subject: [Python-Dev] PEP 3102: Keyword-only arguments References: <4453B025.3080100@acm.org> <6ED61C59-D93B-4473-955E-EA0A0E5C79DB@stanford.edu> Message-ID: <loom.20060430T220248-132@post.gmane.org> Zachary Pincus <zpincus <at> stanford.edu> writes: > That seems a bit odd, as my natural expectation wouldn't be to see > kw1 ands kw2 as required, no-default keyword args, but as misplaced > positional args. > > Perhaps this might be a little better? > def foo(*args, kw1=, kw2=): > I'm rather not sure. At least it makes it clear that kw1 and kw2 are > keyword arguments, and that they have no default values. > > Though, I'm kind of neutral on the whole bit -- in my mind "keyword > args" and "default-value args" are pretty conflated and I can't think > of any compelling reasons why they shouldn't be. It's visually handy > when looking at some code to see keywords and be able to say "ok, > those are the optional args for changing the handling of the main > args". I'm not sure where the big win with required keyword args is. I think a lot of people conflate the two, because of the similarity in syntax, however they aren't really the same thing. In a function definition, any argument can be a keyword argument, and the presence of '=' means 'default value', not 'keyword'. I have to admit that the primary reason for including required keyword arguments in the PEP is because they fall out naturally from the implementation. If we remove this from the PEP, then the implementation becomes more complicated - because now you really are assigning a meaning to '=' other than 'default value'. >From a user standpoint, I admit the benefits are small, but I don't see that it hurts either. The only use case that I can think of is when you have a function that has some required and some optional keyword arguments, and there is some confusion about the ordering of such arguments. 
By disallowing the arguments to be positional, you eliminate the possibility of having an argument be assigned to the incorrect parameter slot. > For that matter, why not "default positional args"? I think everyone > will agree that seems a bit odd, but it isn't too much odder than > "required keyword args". (Not that I'm for the former! I'm just > pointing out that if the latter is OK, there's no huge reason why the > former wouldn't be, and that is in my mind a flaw.) > > Second: > > def compare(a, b, *, key=None): > This syntax seems a bit odd to me, as well. I always "understood" the > *args syntax by analogy to globbing -- the asterisk means to "take > all the rest", in some sense in both a shell glob and *args. > > In this syntax, the asterisk, though given a position in the comma- > separated list, doesn't mean "take the rest and put it in this > position." It means "stop taking things before this position", which > is a bit odd, in terms of items *in* an argument list. > > I grant that it makes sense as a derivation from "*ignore"-type > solutions, but as a standalone syntax it feels off. How about > something like: > def compare(a, b; key=None): I wanted the semicolon as well, but was overruled. The current proposal is merely a summary of the output of the discussions so far. -- Talin From talin at acm.org Sun Apr 30 22:33:54 2006 From: talin at acm.org (Talin) Date: Sun, 30 Apr 2006 13:33:54 -0700 Subject: [Python-Dev] PEP 3101: Advanced String Formatting In-Reply-To: <2FCC26A1-D2C2-4A4A-9474-2908B684F699@stanford.edu> References: <4453AF51.6050605@acm.org> <2FCC26A1-D2C2-4A4A-9474-2908B684F699@stanford.edu> Message-ID: <44551F32.2020607@acm.org> Zachary Pincus wrote: > I'm not sure about introducing a special syntax for accessing > dictionary entries, array elements and/or object attributes *within a > string formatter*... much less an overloaded one that differs from how > these elements are accessed in "regular python". > >> Compound names are a sequence of simple names seperated by >> periods: >> >> "My name is {0.name} :-\{\}".format(dict(name='Fred')) >> >> Compound names can be used to access specific dictionary entries, >> array elements, or object attributes. In the above example, the >> '{0.name}' field refers to the dictionary entry 'name' within >> positional argument 0. > > > Barring ambiguity about whether .name would mean the "name" attribute > or the "name" dictionary entry if both were defined, I'm not sure I > really see the point. How is: > d = {last:'foo', first:'bar'} > "My last name is {0.last}, my first name is {0.first}.".format(d) > > really that big a win over: > d = {last:'foo', first:'bar'} > "My last name is {0}, my first name is {1}.".format(d['last'], d > ['first']) At one point I had intended to abandon the compound-name syntax, until I realized that it had one beneficial side-effect, which is that it offers a way around the 'dict-copying' problem. There are a lot of cases where you want to pass an entire dict as the format args using the **kwargs syntax. One common use pattern is for debugging code, where you want to print out a bunch of variables that are in the local scope: print "Source file: {file}, line: {line}, column: {col}"\ .format( **locals() ) The problem with this is one of efficiency - the interpreter handles ** by copying the entire dictionary and merging it with any keyword arguments. Under most sitations this is fine; However if the dictionary is particularly large, it might be a problem. 
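A toy stand-in may make the efficiency point concrete. Since format() is itself only proposed here, the helper below simulates the lookup (its name, and the fact that it only handles the '{0.name}' case against a mapping, are purely for illustration): the mapping is passed once as positional argument 0 and each field indexes into it in place, so no copy of locals() is ever made.

    import re

    def toy_format(template, arg0):
        # Resolve '{0.name}' by indexing into arg0 directly -- the mapping
        # is never copied, unlike with the **locals() spelling above.
        def repl(match):
            return str(arg0[match.group(1)])
        return re.sub(r'\{0\.(\w+)\}', repl, template)

    frame = {'file': 'spam.py', 'line': 3, 'col': 17}
    print toy_format("Source file: {0.file}, line: {0.line}, column: {0.col}", frame)
    # -> Source file: spam.py, line: 3, column: 17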
So the intent of the compound name syntax is to allow something very similar: print "Source file: {0.file}, line: {0.line}, column: {0.col}"\ .format( locals() ) Now, its true that you could also do this by passing in the 3 parameters as individual arguments; However, there have been some strong proponents of being able to pass in a single dict, and rather than restating their points I'll let them argue their own positions (so as not to accidentally mis-state them.) > Plus, the in-string syntax is limited -- e.g. what if I want to call a > function on an attribute? Unless you want to re-implement all python > syntax within the formatters, someone will always be able to level > these sort of complaints. Better, IMO, to provide none of that than a > restricted subset of the language -- especially if the syntax looks and > works differently from real python. The in-string syntax is limited deliberately for security reasons. Allowing arbitrary executable code within a string is supported by a number of other scripting languages, and we've seen a good number of exploits as a result. I chose to support only __getitem__ and __getattr__ because I felt that they would be relatively safe; usually (but not always) those functions are written in a way that has no side effects. -- Talin