Earlier this month, Tim P. wrote:
> Semi-unfortunately, the author of that [bzip2] has
>
> no idea if it actually works on 95/98/ME/NT/2000/XP
>
> and in the docs for "3.8 Making a Windows DLL"
>
> I haven't tried any of this stuff myself, but it all looks
> plausible.
>
> That means it will require some real work to build and test this
> stuff on 6 flavors of Windows. Not a showstopper, but does raise
> the bar for getting into the PLabs Windows distro.
Sorry if I'm way off base here, but does the underlying bzip2 package
have to be in a DLL, or can't that be built as a static library, which
gets linked into the .pyd, which *is* a DLL? In either case, it doesn't
seem like it would be very difficult to create whatever flavor library
is needed. The code for bzip2 seems to be very portably written.
The python-bz2 code, on the other hand, needed a little bit of tweaking
to make it compile with Microsoft's compiler.
I'll be happy to help in whatever way would be useful in dealing with
the "raised bar," as the prospect of having Python support on all
platforms for bz2 compression (and tarfiles) is very appealing.
--
Bob Kline
mailto:bkline@rksystems.com
http://www.rksystems.com
I would like to check in a patch for http://www.python.org/sf/576711, which
adds the _ssl module to Windows builds.
The only slightly controversial thing is how the build process operates. I
have quoted the relevant part of PCbuild/readme.txt below. test_socket_ssl
passes once this is built, and at least 2 other people have reported build
success.
Are there any objections to the scheme? Should I check it in?
Mark.
_ssl
Python wrapper for the secure sockets library.
Get the latest source code for OpenSSL from
http://www.openssl.org
Unpack into the "dist" directory, retaining the folder name from
the archive - for example, the latest stable OpenSSL will install as
dist/openssl-0.9.6g
You can (theoretically) use any version of OpenSSL you like - the
build process will automatically select the latest version.
You must also install ActivePerl from
http://www.activestate.com/Products/ActivePerl/
as this is used by the OpenSSL build process. Complain to them <wink>
The MSVC project simply invokes PCbuild/build_ssl.py to perform
the build. This Python script locates your OpenSSL sources, builds
them, and then invokes a simple makefile to build the final .pyd.
build_ssl.py attempts to catch the most common errors (such as not
being able to find OpenSSL sources, or not being able to find a Perl
that works with OpenSSL) and give a reasonable error message.
If you have a problem that doesn't seem to be handled correctly
(e.g., you know you have ActivePerl but we can't find it), please take
a peek at build_ssl.py and suggest patches. Note that build_ssl.py
can also be run directly from the command line.
build_ssl.py/MSVC isn't clever enough to clean OpenSSL - you must do
this by hand.
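
One note on the "automatically select the latest version" step: it can
be as simple as sorting the unpacked dist/openssl-* directories on a
parsed version tuple. A minimal sketch of that idea (an illustration
only, not the actual build_ssl.py logic):

    import glob, os

    # Illustration only -- not the actual build_ssl.py code.  Find all
    # unpacked OpenSSL trees under dist/ and pick the newest one, so
    # that "any version you like" just works.
    candidates = glob.glob(os.path.join("dist", "openssl-*"))
    if not candidates:
        raise SystemExit("no OpenSSL sources found under dist/")

    def version_key(path):
        # "openssl-0.9.6g" -> [(0, ""), (9, ""), (6, "g")]: compare
        # numerically first, then on the trailing patch letter, so
        # 0.9.6g beats 0.9.6b and 0.10.0 beats 0.9.6g.
        ver = os.path.basename(path)[len("openssl-"):]
        key = []
        for field in ver.split("."):
            digits = ""
            while field and field[0].isdigit():
                digits += field[0]
                field = field[1:]
            key.append((int(digits or "0"), field))
        return key

    best = max(candidates, key=version_key)
    print("building against", best)

The real script also has to locate a Perl that works with OpenSSL,
which is where most of its error handling lives.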
>>>>> "TP" == Tim Peters <tim.one(a)comcast.net> writes:
TP> I'd rather find a way to avoid __reduce__ entirely, though!
Me, too!
TP> The Python implementation of these things didn't need it, and in
TP> the date and datetime cases it's creating bigger pickles than it
TP> should -- __getstate__ and __setstate__ already did all that was
TP> necessary, and no more than that. Supplying an argument tuple
TP> for __reduce__'s benefit loses either way: I either put the real
TP> date/datetime arguments there, but then the pickle is of a big
TP> tuple rather than of a tiny string. Or I put a dummy argument
TP> tuple there and also include the tiny string for __setstate__,
TP> but these constructors require at least 3 arguments so that the
TP> "dummy argument tuple" consumes substantial space of its own.
TP> So, as it stands, ignoring the new-style-class administrative
TP> pickle bloat in the Python implementation, the *guts* of the
TP> pickles produced by the Python implementation are substantially
TP> smaller than those produced by the C implementation.
The __reduce__() approach seems to be overly complex, underdocumented,
or both. It's a real shame that something simple like __getstate__()
and __setstate__() can't be made to work for new-style classes.
If I recall correctly, the problem is that there is no way to
distinguish a user-defined new-style class from a builtin type from a
user-defined subclass of a builtin type. As a result, there's no way
for pickle to decide if it should be looking for __getstate__() or
invoking the complicated machinery that allows subclasses of builtin
types to be pickleable. Another victim of unification.
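
As a rough illustration of the size difference Tim describes (using
pure-Python classes, where both hooks do work; the class names here
are invented for the example): a __reduce__ that supplies a real
argument tuple pickles the whole tuple, while __getstate__ and
__setstate__ can pack the state into a tiny string:

    import pickle

    class WithReduce(object):
        def __init__(self, year, month, day):
            self.year, self.month, self.day = year, month, day
        def __reduce__(self):
            # The pickle must record the class plus a full argument
            # tuple.
            return (self.__class__, (self.year, self.month, self.day))

    class WithState(object):
        def __init__(self, year=1, month=1, day=1):
            self.year, self.month, self.day = year, month, day
        def __getstate__(self):
            # Pack the state into one compact string instead.
            return "%04d%02d%02d" % (self.year, self.month, self.day)
        def __setstate__(self, state):
            self.year = int(state[:4])
            self.month = int(state[4:6])
            self.day = int(state[6:8])

    # The __reduce__ pickle carries a 3-tuple of ints; the
    # __getstate__ pickle carries a single 8-byte string.
    print(len(pickle.dumps(WithReduce(2002, 12, 2))))
    print(len(pickle.dumps(WithState(2002, 12, 2))))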
> I figured timedelta *was* "a class", although now I guess that, in
> this context, it's not "a class" but "a type". That's what you get
> when you unify the concepts <wink>.
Jeremy
tim_one(a)users.sourceforge.net writes:
> + /* XXX A further broken attempt to get pickling to work.
> + * XXX This avoids the problem above, but dies instead with
> + * XXX PicklingError: Can't pickle <type 'timedelta'>: it's not
> + * XXX found as __builtin__.timedelta
You need to arrange for a module name to appear in timedelta's
tp_name:
Index: obj_delta.c
===================================================================
RCS file: /cvsroot/python/python/nondist/sandbox/datetime/obj_delta.c,v
retrieving revision 1.15
diff -c -r1.15 obj_delta.c
*** obj_delta.c 2 Dec 2002 17:31:21 -0000 1.15
--- obj_delta.c 2 Dec 2002 17:37:55 -0000
***************
*** 673,679 ****
  static PyTypeObject PyDateTime_DeltaType = {
          PyObject_HEAD_INIT(NULL)
          0,                                      /* ob_size */
!         "timedelta",                            /* tp_name */
          sizeof(PyDateTime_Delta),               /* tp_basicsize */
          0,                                      /* tp_itemsize */
          0,                                      /* tp_dealloc */
--- 673,679 ----
  static PyTypeObject PyDateTime_DeltaType = {
          PyObject_HEAD_INIT(NULL)
          0,                                      /* ob_size */
!         "datetime.timedelta",                   /* tp_name */
          sizeof(PyDateTime_Delta),               /* tp_basicsize */
          0,                                      /* tp_itemsize */
          0,                                      /* tp_dealloc */
seems a good bet.
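
For context, the module-qualified tp_name matters because pickle stores
a class by name and re-imports it at load time, roughly like this (a
simplified sketch, not pickle's actual implementation):

    # Simplified sketch of the lookup pickle performs for a global
    # reference such as "datetime.timedelta".  With a bare tp_name of
    # "timedelta", pickle guessed the module was __builtin__ and the
    # lookup failed -- hence the PicklingError quoted above.
    def find_class(module_name, attr_name):
        module = __import__(module_name, None, None, [attr_name])
        return getattr(module, attr_name)

    print(find_class("datetime", "timedelta"))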
Cheers,
M.
--
I'm not sure that the ability to create routing diagrams
similar to pretzels with mad cow disease is actually a
marketable skill. -- Steve Levin
-- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html
When running test_bsddb3 on Mac OSX, using BSDDB 4.0.14, I get
tracebacks like these:
Exception in thread writer 0:
Traceback (most recent call last):
File "/Users/guido/python/src/Lib/threading.py", line 416, in __bootstrap
self.run()
File "/Users/guido/python/src/Lib/threading.py", line 404, in run
apply(self.__target, self.__args, self.__kwargs)
File "/Users/guido/python/src/Lib/bsddb/test/test_thread.py", line 115, in writerThread
dbutils.DeadlockWrap(d.put, key, self.makeData(key), max_retries=12)
File "/Users/guido/python/src/Lib/bsddb/dbutils.py", line 61, in DeadlockWrap
except _db.DBLockDeadlockError:
NameError: global name '_db' is not defined
and these:
Exception in thread reader 8:
Traceback (most recent call last):
File "/Users/guido/python/src/Lib/threading.py", line 416, in __bootstrap
self.run()
File "/Users/guido/python/src/Lib/threading.py", line 404, in run
apply(self.__target, self.__args, self.__kwargs)
File "/Users/guido/python/src/Lib/bsddb/test/test_thread.py", line 140, in readerThread
c = d.cursor()
DBNoMemoryError: (12, 'Cannot allocate memory -- Lock table is out of available locks')
Any clues???
I don't get these on Linux, although there I get a failure too:
test test_bsddb3 failed -- errors occurred; run in verbose mode for details
Frankly, verbose mode was too verbose to bother with. :-(
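
The first traceback looks like a plain missing import: dbutils.py names
_db in its except clause without ever binding it. A minimal sketch of
the likely fix, assuming the C extension is reachable through the
package (the actual spelling depends on the package layout):

    # Sketch of the likely fix in Lib/bsddb/dbutils.py: bind the
    # low-level module to the name the except clause uses.
    # (Assumption: the extension is importable from the package as
    # "db"; adjust to the real layout.)
    from bsddb import db as _db

The second traceback suggests the test environment's lock table is
simply too small; Berkeley DB lets you raise the limit with
DBEnv.set_lk_max_locks() before the environment is opened.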
--Guido van Rossum (home page: http://www.python.org/~guido/)
Martin:
> Jack Jansen <Jack.Jansen(a)oratrix.com> writes:
>
>> How about taking a completely different angle on this matter, and
>> looking at PyArg_Parse itself? If we can keep PyArg_Parse 100%
>> backward compatible (which would mean that its "i" format would take
>> any IntObject or LongObject between -2**31 and 2**32-1) and introduce a
>> new (preferred) way to parse arguments that not only does the right
>> thing by being expressive enough to make a difference between
>> "currency integers" and "bitmap integers", but also cleans up the
>> incredible amount of cruft that PyArg_Parse has accumulated over the
>> years?
>
> I had a similar idea, so I'd encourage you to spell out your proposal
> in more detail, or even in an implementation.
> My idea was to provide a ParseTuple wrapper, [...]
> Those of you needing to support older Python releases could
>
> #define PyArg_ParseTupleLenient PyArg_ParseTuple
>
> in your distribution, or provide other appropriate wrappers.
Unfortunately this wouldn't work if the distributions are binary
distributions.
But, aside from that, I think I would want to go much further than this
_if_ I put time in redesigning PyArg_Parse. I would first like to take
inventory of all the problems there are with PyArg_Parse, and then see
whether we can design something that will solve most or all of these
issues, without being overly complex in everyday use.
And before I embark on that journey I would first like to have a group
of people willing to put effort into this, plus the go-ahead of Guido
(there's little point in designing a new mechanism if there is no
chance of it being adopted as the general case, especially if this new
mechanism may need a new PyMethodDef flag or some such thing).
As a kickoff, here are some of my gripes about PyArg_Parse.
1. The format chars are arcane and follow no discernible logic. There
is no pattern to the signed/unsigned type specifiers; some modifiers
are suffixes (s#), some are distinct format chars (s versus z), and
some are prefixes (e). Everyone knows a basic set of five or six and
has to look the rest up. Some types have special shortcuts without a
clear rationale (String and Unicode are the only types to have an
"O!"-with-typename shortcut, in the form of "S" and "U").
2. Conversion information is interspersed with the argument list, for
instance with the O!, O& or es formats. This makes it very difficult
to represent or build an argument-parsing format in Python (worsened
because some of the C objects in the argument list, such as the O&
routine pointers, have no Python equivalent; see the sketch after this
list). And representing an argument-list parser in Python is something
you need if you want to do dynamic wrapping of any kind (calldll,
PyObjC, etc).
3. There is no way to create new, temporary objects during argument
parsing, because there is no corresponding "release" call and no way to
make the caller release new objects. Having temporary objects would
make conversion a lot easier. Unicode and strings are the first types
that come to mind, but there are probably others.
4. PyArg_ParseTupleAndKeywords makes the situation even worse. Each
argument now has *three* different "index positions": the real index
in the keyword list, a modified one in the format string (ignoring all
non-alphabetic chars and "e"), and a third one in the argument list
(ignoring all extraneous arguments corresponding to es or O& or
what-have-you).
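
To make gripe 2 concrete, here is a hypothetical sketch of what a
Python-representable argument spec might look like, with comments
marking where the current scheme defeats it:

    # Hypothetical sketch for gripe 2 (not a proposal).  The C call
    #     PyArg_ParseTuple(args, "sO!O&", &name,
    #                      &PyList_Type, &seq, my_converter, &out)
    # mixes conversion data into the argument list itself, so no
    # Python data structure can fully describe it:
    spec = [
        ("name", "s"),           # plain format char: representable
        ("items", "O!", list),   # "O!" needs a type object: still OK
        ("target", "O&", None),  # "O&" needs a C converter function
                                 # pointer -- no Python equivalent
    ]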
--
- Jack Jansen <Jack.Jansen(a)oratrix.com>
http://www.cwi.nl/~jack -
- If I can't dance I don't want to be part of your revolution -- Emma
Goldman -
Jack Jansen <Jack.Jansen(a)cwi.nl> writes:
> I'm +0 on this. The reason I'm not +1 is that 2MB is just as
> arbitrary as 1MB,
Not quite; it's also the stack size passed to pthreads IIRC. So after
ulimit -s 2048 all threads get the same stack size (I think).
Cheers,
M.
--
42. You can measure a programmer's perspective by noting his
attitude on the continuing vitality of FORTRAN.
-- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html
On Friday, Nov 29, 2002, at 21:47 Europe/Amsterdam,
jvr(a)users.sourceforge.net wrote:
> Update of /cvsroot/python/python/dist/src/Python
> In directory sc8-pr-cvs1:/tmp/cvs-serv20813/Python
>
> Modified Files:
> import.c
> Log Message:
> Slightly improved version of patch #642578: "Expose
> PyImport_FrozenModules
> in imp". This adds two functions to the imp module: get_frozenmodules()
> and set_frozenmodules().
Something that's been bothering me about frozen modules in the
classical sense (i.e. those that are stored in C static data
structures) is that the memory used by them is gone without any chance
at recovery. For big frozen Python programs that are to be run on small
machines this is a waste of precious memory.
With modules "frozen" with set_frozenmodules you could conceivably free
the data again after it has been imported (similar to what
MacPython-OS9 does with modules "frozen" in "PYC " resources).
Would that be worth the added complexity?
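
Usage might then look something like the following sketch (it assumes
get_frozenmodules() returns a sequence of (name, code) pairs, which
may not match the actual patch, and "somefrozenmodule" is a
placeholder name):

    import imp

    # Sketch: after a frozen module has been imported, drop its entry
    # from the frozen-modules table so the now-redundant code object
    # can be reclaimed.  The (name, code) pair representation is an
    # assumption; patch #642578 may store something else.
    import somefrozenmodule  # placeholder for a module frozen earlier
    remaining = [(name, code) for (name, code) in imp.get_frozenmodules()
                 if name != "somefrozenmodule"]
    imp.set_frozenmodules(remaining)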
--
- Jack Jansen <Jack.Jansen(a)oratrix.com>
http://www.cwi.nl/~jack -
- If I can't dance I don't want to be part of your revolution -- Emma
Goldman -