Notice of intent: rich comparisons
Now that 1.5.2 is finalized, I'll be starting to prepare a patch for the rich comparisons. Guido, how do you envision python-dev working? Do you want to hand out a TODO list for 1.6? --david
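"Rich comparisons" here means replacing the single __cmp__ hook with per-operator methods (__lt__, __eq__, and so on) that may return arbitrary objects, which is what array packages like NumPy need for element-wise results. A minimal sketch of the kind of class this enables, assuming the per-operator method names the design eventually used; the Vector class itself is purely illustrative:

    class Vector:
        """Toy sequence type whose comparisons work element-wise."""
        def __init__(self, data):
            self.data = list(data)
        def __lt__(self, other):
            # A rich comparison may return any object -- here a Vector of
            # booleans -- instead of the single -1/0/1 that __cmp__ allows.
            return Vector([x < y for x, y in zip(self.data, other.data)])
        def __eq__(self, other):
            return Vector([x == y for x, y in zip(self.data, other.data)])
        def __repr__(self):
            return "Vector(%r)" % self.data

    print(Vector([1, 5, 3]) < Vector([2, 4, 3]))   # Vector([True, False, False])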
Now that 1.5.2 is finalized, I'll be starting to prepare a patch for the rich comparisons.
Cool!
Guido, how do you envision python-dev working?
I thought we'd just crack Monty Python jokes while pretending to work :-)
Do you want to hand out a TODO list for 1.6?
Well, I'm not saying that this is the definitive list, but certainly there are a bunch of things that definitely need to be incorporated. Here's an excerpt of a list that I keep for myself, in no particular order, with brief annotations (I have a meeting in 15 minutes :):

Redesign thread/init APIs
    See the recent discussion in the thread-sig. There should be some kind of micro-init that initializes essential components (including the thread lock!) but doesn't create an interpreter or a threadstate object. There should also be a notion of a default interpreter.

Break up into VM, parser, import, mainloop, and other components?
    The porting efforts to seriously small machines (Win/CE, Pilot, etc.) all seem to need a Python VM that doesn't automatically pull in the parser and interactive main loop. There are other reasons as well to isolate various parts of the functionality (including all numeric data types except ints). This could include: restructuring of the parser so codeop.py can be simplified; making the interactive main loop a script.

String methods
    Barry has his patches ready.

Unicode
    What is the status of /F's latest attempts?

Rich comparisons
    This is Dave's plan.

Coercions
    Marc-Andre Lemburg has some good suggestions here: http://starship.python.net/~lemburg/coercion-0.6.zip

Import revamp
    It should be much easier to add hooks to allow importing from zip files, from a URL, etc., or (different) hooks to allow looking for different extensions (e.g. to automatically generate ILU stubs). This should help the small platforms too, since they often don't have a filesystem, so their imports will have to come from some other place.

ANSI C
    I'm tired of supporting K&R C. All code should be converted to using prototypes. The Py_PROTO macros should go. We can add const-correctness to all code. Too bad for platforms without decent compilers (let them get GCC).

Buffer object API extensions
    Marc-Andre Lemburg has a patch.

Distutil
    Should we start integrating some of the results of the distutil SIG? (What's the status? I haven't been paying attention.)

Misc
    Jeremy Hylton's urllib2.py
    Greg Stein's new httplib.py (or Jeremy's?)
    Andrew Kuchling's new PCRE release?
    The IPv6 patches?

Gotta run!

--Guido van Rossum (home page: http://www.python.org/~guido/)
On 21 April 1999, Guido van Rossum said:
Distutil
Should we start integrating some of the results of the distutil SIG? (What's the status? I haven't been paying attention.)
It's in a state where other people can look at the code, but (apparently) not yet good enough for floods of patches to come in. Right now, Distutils has *just* enough marbles to "build" and install itself on Unix systems, and someone has contributed a patch so it works on NT. No support yet for compiling extensions, which of course is the important thing.

        Greg

--
Greg Ward - software developer                   gward@cnri.reston.va.us
Corporation for National Research Initiatives    1895 Preston White Drive
voice: +1-703-620-8990                           Reston, Virginia, USA  20191-5434
fax: +1-703-620-0913
On Wed, 21 Apr 1999, Guido van Rossum wrote:
Guido, how do you envision python-dev working?
I thought we'd just crack Monty Python jokes while pretending to work :-)
That'd be better than what I've read on the perl5-porters list in the last couple of days (which I'm reading because of the ad hoopla). They seem to spend their time arguing about whether Perl programs are really programs or scripts. Viciously, too. =) More seriously, some of the discussions they're having are worth keeping an eye on, just as a form of legal industrial spying.
Import revamp
There was an interesting bit from Greg, Jack, and MarkH on this topic recently in the distutil-sig. Maybe one of them can bring the rest of us up to date on the effort. --david
Guido wrote:
String methods
Barry has his patches ready.
Unicode
What is the status of /F's latest attempts?
most of the code is written and tested; the API still needs some cleaning up. I also have a few patches and other contributions in my inbox that I gotta look into... btw, the unicode module can be compiled for 8-bit characters too. we could merge it with the string module (one place to fix bugs...)
Import revamp
It should be much easier to add hooks to allow importing from zip files, from a URL, etc., or (different) hooks to allow looking for different extensions (e.g. to automatically generate ILU stubs). This should help the small platforms too, since they often don't have a filesystem, so their imports will have to come from some other place.
something similar to greg's imputil.py should definitely be in the core...
ANSI C
I'm tired of supporting K&R C. All code should be converted to using prototypes. The Py_PROTO macros should go. We can add const-correctness to all code. Too bad for platforms without decent compilers (let them get GCC).
or ansi2knr.c (available from http://www.cs.wisc.edu/~ghost/). otoh, it only converts function headers, so I'm not sure it's worth the effort... </F>
[Guido and /F]
Unicode
What is the status of /F's latest attempts?
most of the code is written and tested; the API still needs some cleaning up. I also have a few patches and other contributions in my inbox that I gotta look into...
I'm one of those contributors :-) I have made changes to the Win32 extensions that allow it to use /F's type instead of its own built-in Unicode type. Further, the Windows CE port of these extensions actually enables this - so we have some code actively using it.

However, this still doesn't give me anything more than I had before. We need at least _some_ level of integration of this type into Python. The key areas to me are:

* PyArg_ParseTuple and Py_BuildValue - seems OK if we use the "u" format character analogous to "s". We haven't thought of a "z" equivalent. This is fairly easy.

* Some way for these functions to auto-convert. E.g., when the user passes an 8-bit string object, but the C side of the world wants a Unicode string. This is harder, as an extra memory buffer for the conversion is needed.

This second problem is, to me, the next stumbling block. When we sort these out, I feel we would have the start of a good platform to start experimenting.

Mark.
Guido van Rossum writes:
ANSI C
I'm tired of supporting K&R C. All code should be converted to using prototypes. The Py_PROTO macros should go. We can add const-correctness to all code. Too bad for platforms without decent compilers (let them get GCC).
Maybe I'm just lucky, but I've written nothing but ANSI C for the last 8 years and have never had a problem compiling it on any machine. I don't see this as being a huge issue.
Misc
Depending on how deep I want to dig, I may have some patches to ParseTuple() that would make life easier for some of us extension-writing fans. As it is right now, my experimental version of Swig is using its own version of ParseTuple (for various reasons). Also, out of curiosity, is anyone making heavy use of the CObject type right now?

Cheers,

Dave
David Beazley wrote:
Guido van Rossum writes:
ANSI C
I'm tired of supporting K&R C. All code should be converted to using prototypes. The Py_PROTO macros should go. We can add const-correctness to all code. Too bad for platforms without decent compilers (let them get GCC).
Maybe I'm just lucky, but I've written nothing but ANSI C for the last 8 years and have never had a problem compiling it on any machine. I don't see this as being a huge issue.
I've had the same experience with things like prototypes. The recent introduction of indented preprocessor instructions caused me significant pain on some platforms. Fortunately, I haven't had to deal with those platforms recently; still, the elegance of indented preprocessor instructions wasn't worth the pain they caused.

As far as choice of compilers goes, sometimes gcc isn't an option. I've had cases where I *had* to use the native compiler in order to link to third-party libraries.

I support moving to ANSI C, for example wrt prototypes, but let's still be somewhat sensitive to portability and not abuse bad compilers without good reason.
Misc
Depending on how deep I want to dig, I may have some patches to ParseTuple() that would make life easier for some of us extension-writing fans. As it is right now, my experimental version of Swig is using its own version of ParseTuple (for various reasons). Also, out of curiosity, is anyone making heavy use of the CObject type right now?
Yes.

Jim

--
Jim Fulton           mailto:jim@digicool.com   Python Powered!
Technical Director   (888) 344-4332            http://www.python.org
Digital Creations    http://www.digicool.com   http://www.zope.org

Under US Code Title 47, Sec.227(b)(1)(C), Sec.227(a)(2)(B) This email address may not be added to any commercial mail list with out my permission. Violation of my privacy with advertising or SPAM will result in a suit for a MINIMUM of $500 damages/incident, $1500 for repeats.
David Beazley wrote:
Misc
Depending on how deep I want to dig, I may have some patches to ParseTuple() that would make life easier for some of us extension-writing fans. As it is right now, my experimental version of Swig is using its own version of ParseTuple (for various reasons). Also, out of curiosity, is anyone making heavy use of the CObject type right now?
You mean the one in Objects/cobject.c? Sure, all my type extensions export their C API that way... and it works just great (no more linker problems, nice error messages when extensions are not found, etc.).

--
Marc-Andre Lemburg                               Y2000: 254 days left
---------------------------------------------------------------------
: Python Pages >>> http://starship.skyport.net/~lemburg/ :
---------------------------------------------------------
Guido van Rossum wrote:
Break up into VM, parser, import, mainloop, and other components?
The porting efforts to seriously small machines (Win/CE, Pilot, etc.) all seem to need a Python VM that doesn't automatically pull in the parser and interactive main loop. There are other reasons as well to isolate various parts of the functionality (including all numeric data types except ints).
This could include: restructuring of the parser so codeop.py can be simplified; making the interactive main loop a script.
There's some good work out there by Skip Montanaro to revamp the VM into a combined register/stack machine. He calls it rattlesnake. More info can be found in the list archive (it's currently sleeping): http://www.egroups.com/folders/rattlesnake

I would like to see the VM/compiler made pluggable so that experiments like rattlesnake become easier to implement (as extension modules rather than patches to the core interpreter).
Coercions
Marc-Andre Lemburg has some good suggestions here: http://starship.python.net/~lemburg/coercion-0.6.zip
I will continue to work on these patches now that 1.5.2 is out. There is a (more or less) detailed explanation of the patch set at: http://starship.skyport.net/~lemburg/CoercionProposal.html

Instead of using the PY_NEWSTYLENUMBER approach I'll turn to the new type flags that were introduced in 1.5.2. The Py_NotImplemented singleton return value will stay, because performance tests have shown that this method does not cause any significant hit, whereas the raise-and-catch-an-exception method does introduce a significant slow-down.
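The performance point is easy to see at the Python level. The sketch below is purely illustrative (it is not taken from the patch); it shows the sentinel-return style that Py_NotImplemented supports, where a method declines quietly instead of raising and catching an exception on every mixed-type operation:

    class Metres:
        def __init__(self, value):
            self.value = value
        def __add__(self, other):
            if not isinstance(other, Metres):
                # Declining via a sentinel is cheap; raising and catching a
                # TypeError here for every mixed-type operation is not.
                return NotImplemented
            return Metres(self.value + other.value)
        __radd__ = __add__

    print((Metres(2) + Metres(3)).value)   # 5
    try:
        Metres(2) + "three"                # no method accepts this pairing
    except TypeError:
        print("the interpreter turns NotImplemented into a TypeError")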
Import revamp
Greg's imputil.py (it's in the distutils package) should provide a good start.
Buffer object API extensions
I'll put the complete patch up on starship later this week... here are the basic prototypes:

    DL_IMPORT(int) PyObject_AsCharBuffer Py_PROTO((PyObject *obj,
                                                   const char **buffer,
                                                   int *buffer_len));
    /* Takes an arbitrary object which must support the (character, single
       segment) buffer interface and returns a pointer to a read-only memory
       location usable as character based input for subsequent processing.
       buffer and buffer_len are only set in case no error occurs. Otherwise,
       -1 is returned and an exception set. */

    DL_IMPORT(int) PyObject_AsReadBuffer Py_PROTO((PyObject *obj,
                                                   const void **buffer,
                                                   int *buffer_len));
    /* Same as PyObject_AsCharBuffer() except that this API expects the
       (readable, single segment) buffer interface and returns a pointer to a
       read-only memory location which can contain arbitrary data. buffer and
       buffer_len are only set in case no error occurs. Otherwise, -1 is
       returned and an exception set. */

    DL_IMPORT(int) PyObject_AsWriteBuffer Py_PROTO((PyObject *obj,
                                                    void **buffer,
                                                    int *buffer_len));
    /* Takes an arbitrary object which must support the (writeable, single
       segment) buffer interface and returns a pointer to a writeable memory
       location in buffer of size buffer_len. buffer and buffer_len are only
       set in case no error occurs. Otherwise, -1 is returned and an
       exception set. */
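A Python-level analogue may make the read/write distinction concrete. The sketch below uses today's bytearray and memoryview objects as loose stand-ins for the buffer interfaces the C APIs above deal with; it is only an illustration, not part of the patch:

    data = bytearray(b"hello")           # object exposing a writeable buffer
    readonly = memoryview(bytes(data))   # read-only view, like AsReadBuffer
    writable = memoryview(data)          # writeable view, like AsWriteBuffer

    writable[0:1] = b"H"
    print(bytes(data))                   # b'Hello'

    try:
        readonly[0:1] = b"x"
    except TypeError:
        print("read-only buffer rejects writes")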
Distutil
Should we start integrating some of the results of the distutil SIG? (What's the status? I haven't been paying attention.)
With all the different modules and packages available for Python slowly starting to introduce dependencies, I think the main objective should be introducing a well-organized installation info system, e.g. a registry where modules and packages can query and save version and dependency information. Someone on the main list mentioned that JPython has something like this...
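What such a registry might look like is sketched below; the function names and the information kept are invented for illustration and are not a proposal for a concrete API:

    REGISTRY = {}   # shared table of installed packages

    def register(name, version, requires=()):
        """Record a package's version and dependencies at install time."""
        REGISTRY[name] = {"version": version, "requires": tuple(requires)}

    def satisfied(name, minimum):
        """Ask whether an installed package meets a minimum version."""
        info = REGISTRY.get(name)
        return info is not None and info["version"] >= minimum

    register("PIL", (1, 0), requires=["zlib"])
    print(satisfied("PIL", (0, 9)))      # True
    print(satisfied("NumPy", (1, 0)))    # False: nothing registered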
Misc
Jeremy Hylton's urllib2.py
Greg Stein's new httplib.py (or Jeremy's?)
Andrew Kuchling's new PCRE release?
The IPv6 patches?
These are on my wish list, so I'll simply add them here:

· APIs to facilitate creation of extension classes (using Python classes and C functions as methods). The main open question here is where and how to store C data in the instances. This could also provide a way to migrate to all-Python classes for some future version.

· Restructure the calling mechanism used in ceval.c to make it clearer, more flexible and faster (of course, :-). Also, inline calling of C functions/methods (this produces a noticeable speedup). I have a patch for the restructuring and an old one (against 1.5) for the C function call inlining.

· A "fastpath" hook made available through the sys module. This should hook into the module/package loader and should be used whenever set to find a module (reverting to the standard lookup method in case it can't find it). I have an old patch that does this. It uses a simple Python function to redirect the lookup to a precalculated dictionary of installed modules/packages. This results in faster startup of the Python interpreter (by saving a few hundred stat() calls). (A small sketch of the idea follows this message.)

--
Marc-Andre Lemburg                               Y2000: 254 days left
---------------------------------------------------------------------
: Python Pages >>> http://starship.skyport.net/~lemburg/ :
---------------------------------------------------------
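The fastpath sketch referred to above: the mechanism (one directory scan up front, then plain dictionary lookups) is the point; the names and the hook itself are invented for illustration and are not the actual patch:

    import os
    import sys

    def build_fastpath_table():
        """Scan sys.path once and map module names to file locations."""
        table = {}
        for directory in sys.path:
            if not os.path.isdir(directory):
                continue
            for entry in os.listdir(directory):
                if entry.endswith(".py"):
                    table.setdefault(entry[:-3], os.path.join(directory, entry))
        return table

    FASTPATH = build_fastpath_table()

    def fastpath_lookup(name):
        # Return a known location, or None to fall back to the normal
        # (stat()-heavy) sys.path search.
        return FASTPATH.get(name)

    print(fastpath_lookup("os"))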
mal> There's some good work out there by Skip Montanaro to revamp the VM
mal> into a combined register/stack machine. He calls it rattlesnake.
mal> More info can be found in the list archive (it's currently
mal> sleeping):

Thanks to Marc-Andre for mentioning this. Rattlesnake is indeed sleeping at the moment. If anyone is interested in the idea I'd be happy to awaken it. The main observation was that much of the virtual machine's activity is pushing/popping objects to/from the stack. By treating a chunk of storage as a register file much of that churn can be avoided.

Skip Montanaro | Mojam: "Uniting the World of Music" http://www.mojam.com/
skip@mojam.com | Musi-Cal: http://www.musi-cal.com/
518-372-5583
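A toy illustration of that observation (the instruction formats below are invented for the example and are not rattlesnake's): evaluating c = a + b on a stack machine needs two pushes, two pops and a store, while a register form names its operands directly:

    # Stack form: LOAD pushes, ADD pops two and pushes one, STORE pops.
    stack_code = [("LOAD", "a"), ("LOAD", "b"), ("ADD",), ("STORE", "c")]
    # Register form: one instruction, operands addressed by name.
    register_code = [("ADD", "c", "a", "b")]

    def run_stack(code, env):
        stack = []
        for op in code:
            if op[0] == "LOAD":
                stack.append(env[op[1]])
            elif op[0] == "ADD":
                right, left = stack.pop(), stack.pop()
                stack.append(left + right)
            elif op[0] == "STORE":
                env[op[1]] = stack.pop()

    def run_register(code, env):
        for op in code:
            if op[0] == "ADD":
                env[op[1]] = env[op[2]] + env[op[3]]

    env1 = {"a": 2, "b": 3}
    env2 = dict(env1)
    run_stack(stack_code, env1)
    run_register(register_code, env2)
    print(env1["c"], env2["c"])   # 5 5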
Guido van Rossum wrote:
... Import revamp
It should be much easier to add hooks to allow importing from zip files, from a URL, etc., or (different) hooks to allow looking for different extensions (e.g. to automatically generate ILU stubs). This should help the small platforms too, since they often don't have a filesystem, so their imports will have to come from some other place.
I would recommend that Python's import mechanism be changed to use some of the concepts from my imputil.py module and from a couple of my posts to the distutils-sig: http://www.python.org/pipermail/distutils-sig/1999-January/000142.html http://www.python.org/pipermail/distutils-sig/1998-December/000077.html
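A stripped-down sketch of the general idea (the class and method names below are invented; they are not imputil.py's actual interface): each importer in a chain is asked in turn whether it can supply a module, so zip files, URLs or stub generators become just another importer in the chain:

    import types

    class DictImporter:
        """Pretends a dictionary of source strings is a place to import from."""
        def __init__(self, sources):
            self.sources = sources
        def get_source(self, name):
            return self.sources.get(name)   # None means "not mine"

    def import_via_chain(name, importers):
        for importer in importers:
            source = importer.get_source(name)
            if source is not None:
                module = types.ModuleType(name)
                exec(source, module.__dict__)
                return module
        raise ImportError(name)

    chain = [DictImporter({"hello": "x = 42\n"})]
    mod = import_via_chain("hello", chain)
    print(mod.x)   # 42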
ANSI C
I'm tired of supporting K&R C. All code should be converted to using prototypes. The Py_PROTO macros should go. We can add const-correctness to all code. Too bad for platforms without decent compilers (let them get GCC).
Or those platforms can stick with Python 1.5.2
Misc
Jeremy Hylton's urllib2.py
Greg Stein's new httplib.py (or Jeremy's?)
Mine is a drop-in replacement for the current httplib.py. httplib.HTTP is compatible with the current version and does HTTP/1.0. httplib.HTTPConnection is similar to the other (but (IMO) cleans up a few elements of the API); it does HTTP/1.1, including persistent connections. This module follows the design of the original httplib.py much more closely.

Jeremy's module uses a very different design approach (no value judgement here; it's just different); also, it simply copies over the old HTTP class, while mine derives the compatible version from HTTPConnection. I'm not sure how much testing Jeremy's has had, but I use my module for all Python-based http work now. Some number of other people have downloaded and used it, too.

I would recommend my new module, given the two options.

I'll eventually have a davlib.py to add to the distribution, but it needs more work. I'll contribute that later.

---

I'd also like to see some cleanup of the build process. For example, (Guido:) I sent in a different way to handle the signal stuff (shifting files around and stuff) as opposed to the "hassignal" thing.

About the build process: it may be nice to move to a single makefile. Here is an interesting paper about this: http://www.canb.auug.org.au/~millerp/rmch/recu-make-cons-harm.html

Is it reasonable to split out the Demo, Doc, Misc, and Tools subdirs since those never really end up as part of an installed Python? e.g. into a python-extras distribution? (which would allow doc and tools updates at arbitrary times)

Does it make sense to remove platform-specific modules? (e.g. the SGI modules)

While distutils doesn't have anything *yet*, there are changes that can be made in the build process to improve the distutils process. I know that Greg Ward wants distutils to work on previous versions of Python, but (IMO) things would be simpler all around if distutils-based packages were simply designed for 1.6 systems as a basic dependency. However, that choice is quite separate from the decision to generate distutils support during the build.

Oh: there were some things floating around at one point about using Python itself within the build process (e.g. for modules). Is any of that work useful?

Cheers,
-g

--
Greg Stein, http://www.lyra.org/
Greg Stein writes:
Is it reasonable to split out the Demo, Doc, Misc, and Tools subdirs since those never really end up as part of an installed Python? e.g. into a python-extras distribution? (which would allow doc and tools updates at arbitrary times)
The Doc/ stuff is already in a separate distribution; this has been in place for about a year (if I recall correctly). I try to make doc releases shortly after Python releases (I think the 1.5.2 doc release is the longest I've waited past the Python release so far, and it may end up being next week), and I have made interim releases as the documentation has grown.

I certainly think that doing something similar for the Demo/ and Tools/ directories would be reasonable; they could share a single distribution.

-Fred

--
Fred L. Drake, Jr.  <fdrake@acm.org>
Corporation for National Research Initiatives
Fred> I certainly think that doing something similar for the Demo/ and
Fred> Tools/ directories would be reasonable; they could share a single
Fred> distribution.

We need to make sure none of the tools are needed in the install process. h2py comes to mind.

Skip Montanaro | Mojam: "Uniting the World of Music" http://www.mojam.com/
skip@mojam.com | Musi-Cal: http://www.musi-cal.com/
518-372-5583
"GS" == Greg Stein <gstein@lyra.org> writes:
Greg Stein's new httplib.py (or Jeremy's?)
GS> Mine is a drop-in replacement for the current httplib.py.
GS> httplib.HTTP is compatible with the current version and does
GS> HTTP/1.0. httplib.HTTPConnection is similar to the other (but (IMO)
GS> cleans up a few elements of the API); it does HTTP/1.1, including
GS> persistent connections. This module follows the design of the
GS> original httplib.py much more closely. Jeremy's module uses a very
GS> different design approach (no value judgement here; it's just
GS> different); also, it simply copies over the old HTTP class, while
GS> mine derives the compatible version from HTTPConnection. I'm not
GS> sure how much testing Jeremy's has had, but I use my module for all
GS> Python-based http work now. Some number of other people have
GS> downloaded and used it, too.

GS> I would recommend my new module, given the two options.

I think I agree. I stopped working on my version quite some time ago, before I made enough progress to make it useful. Thus, it makes a lot of sense to use Greg's working code.

I'd be curious to see how to use HTTPConnection to do persistent connections and pipelined requests. It wasn't entirely clear from the code what I would need to do to keep the connection open. Do you just re-use the HTTPConnection object -- call putrequest again after getreply?

I like the revised API that Greg's code uses. The request method on HTTPConnection objects cleans the API up in exactly the right way. It has always seemed unnecessary to have many separate calls to add headers and then a call to endheaders after putrequest. Packaging them all together in one method will simplify client code a lot.

GS> I'll eventually have a davlib.py to add to the distribution, but
GS> it needs more work. I'll contribute that later.

Cool!

Jeremy
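Reusing the connection looks roughly like the sketch below, written against the request()/getresponse() interface as it later appeared in the standard library (http.client in today's Python); the details of Greg's 1999 module may differ, so treat this as an assumption about the API rather than a quote from it:

    from http.client import HTTPConnection

    conn = HTTPConnection("www.python.org")
    for path in ("/", "/dev/"):
        conn.request("GET", path)       # request line, headers and body in one call
        response = conn.getresponse()
        body = response.read()          # drain the reply before reusing the socket
        print(path, response.status, len(body))
    conn.close()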
participants (12)

- David Ascher
- David Beazley
- Fred L. Drake
- Fredrik Lundh
- Greg Stein
- Greg Ward
- Guido van Rossum
- Jeremy Hylton
- Jim Fulton
- M.-A. Lemburg
- Mark Hammond
- skip@mojam.com