Hi.
[Mark Hammond]
> The point isn't about my suffering as such. The point is more that
> python-dev owns a tiny amount of the code out there, and I don't believe we
> should put Python's users through this.
>
> Sure - I would be happy to "upgrade" all the win32all code, no problem. I
> am also happy to live in the bleeding edge and take some pain that will
> cause.
>
> The issue is simply the user base, and giving Python a reputation of not
> being able to painlessly upgrade even dot revisions.
I agree with all this.
[As I imagined, explicit syntax did not catch on and would require
a lot of discussion.]
[GvR]
> > Another way is to use special rules
> > (similar to those for class defs), e.g. having
> >
> > <frag>
> > y=3
> > def f():
> >     exec "y=2"
> >     def g():
> >         return y
> >     return g()
> >
> > print f()
> > </frag>
> >
> > # print 3.
> >
> > Is that confusing for users? Maybe they will more naturally expect 2
> > as the outcome (given nested scopes).
>
> This seems the best compromise to me. It will lead to the least
> broken code, because this is the behavior that we had before nested
> scopes! It is also quite easy to implement given the current
> implementation, I believe.
>
> Maybe we could introduce a warning rather than an error for this
> situation though, because even if this behavior is clearly documented,
> it will still be confusing to some, so it is better if we outlaw it in
> some future version.
>
Yes, this would be easy to implement, but more confusing situations can arise:
<frag>
y=3
def f():
    y=9
    exec "y=2"
    def g():
        return y
    return y, g()

print f()
</frag>
What should this print? Unlike class-def scopes, the situation has no
canonical solution: under the old two-scope rules the exec updates f's
locals while g still sees the global y, but with nested scopes almost
any combination can be argued for. Or consider:
<frag>
def f():
    from foo import *
    def g():
        return y
    return g()

print f()
</frag>
[Mark Hammond]
> > This probably won't be a very popular suggestion, but how about pulling
> > nested scopes (I assume they are at the root of the problem)
> > until this can be solved cleanly?
>
> Agreed. While I think nested scopes are kinda cool, I have lived without
> them, and really without missing them, for years. At the moment the cure
> appears worse then the symptoms in at least a few cases. If nothing else,
> it compromises the elegant simplicity of Python that drew me here in the
> first place!
>
> Assuming that people really _do_ want this feature, IMO the bar should be
> raised so there are _zero_ backward compatibility issues.
I won't say anything about pulling nested scopes (I don't think my opinion
can change things in this respect), but I must insist that without explicit
syntax, IMO, raising the bar either has too high an implementation cost
(both performance and complexity) or creates confusion.
[Andrew Kuchling]
> >Assuming that people really _do_ want this feature, IMO the bar should be
> >raised so there are _zero_ backward compatibility issues.
>
> Even at the cost of additional implementation complexity? At the cost
> of having to learn "scopes are nested, unless you do these two things
> in which case they're not"?
>
> Let's not waffle. If nested scopes are worth doing, they're worth
> breaking code. Either leave exec and from..import illegal, or back
> out nested scopes, or think of some better solution, but let's not
> introduce complicated backward compatibility hacks.
IMO breaking code would be OK if we issued warnings today and implemented
nested scopes with errors tomorrow. But this is simply a statement of
principle and of the impression it creates.
IMO 'import *' in an inner scope should end up being an error;
I'm not sure about 'exec'.
We will need a final BDFL statement.
regards, Samuele Pedroni.
The 2001 O'Reilly Open Source Convention was, as usual, a very stimulating
event and a forum for a lot of valuable high-level conversations between
the principal developers of many open-source projects.
Many of you know that I maintain friendly and relatively close
relations with a number of senior Perl hackers, including both Larry
Wall himself and others like Chip Salzenberg, Randal Schwartz, Tom
Christiansen, Adam Turoff, and more recently Simon Cozens (who lurks on
this list these days). At OSCon I believe I got a pretty good picture
of what the leaders of the Perl community are thinking and planning
these days. They have definitely come out of the slump they were in a
year ago -- there's a much-renewed sense of energy over there.
I think their plans offer both the Perl and Python communities some
large strategic opportunities. Specifically, I'm urging the Python
community's leadership to seriously explore the possibility of helping
make the Parrot hoax into a working reality. I have discussed this
with Guido by phone, and though he is skeptical about such an
implementation being actually possible, he also thinks the idea has
tremendous potential and says he is willing to support it in public.
The Perl people have blocked out an architecture for Perl 6 that
envisages a new bytecode level, designed and implemented from
scratch. They're very serious about this; I participated in some
discussions of the bytecode design (and, incidentally, argued that
the bytecode should emulate a stack rather than a register machine
because the cost/speed disparities that justify register architectures
in hardware don't exist in a software VM).
The Perl people are receptive to -- indeed, some of them are actively
pushing -- the idea that their new bytecode should not be Perl-specific.
Dan Sugalski, the current lead for the bytecode interpreter project, has
named it Parrot.
At the Perl 6 talk I attended, Chip Salzenberg speculated in public about
possibly supporting a common runtime for Perl, Python, Ruby, and Intercal(!).
One of the things that makes this an unprecedented opportunity is that
the design of Perl 6 is not yet set in stone -- and Larry has already
shown a willingness to move it in a Pythonward direction. Syntactically,
Perl 5's -> syntax is going away to be replaced by a Python-like dot
with implicit dereferencing (and Larry said in public this was
Python's influence, not Java's). The languages have of course converged
in other ways recently -- Python's new lexical scoping actually brings
it closer to Perl "use strict" semantics.
I believe the way is open for Python's leading technical people to be
involved as co-equals in the design and implementation of the Parrot
bytecode interpreter. I have even detected some willingness to use
Python's existing bytecode as a design model for Parrot, and perhaps
even portions of the Python interpreter codebase!
One bold but possibly workable proposal would be to offer Dan and the
Parrot project the Python bytecode interpreter as a base for the Parrot
code, and then be willing to incorporate whatever (probably relatively
minor) extensions are needed to support Perl 6's primitives.
Following my conversation with Guido, I've put doing an architectural
comparison of the existing Python and Perl bytecodes at the top of my
priority list. I'm flying to Taipei tomorrow and will have a lot of
hours on airplanes with my laptop to do this.
Committing a common runtime with Perl would mean relinquishing
exclusive design control of our bytecode level, but the Perl people
seem themselves willing to give up *their* exclusive control to make
this work. It is rather remarkable how respectful of Python they have
become, and I can't emphasize enough that I think they are truly ready
for us to come to the project as equal partners.
(One important place where I think everybody understands the Python
side of the force would clearly win out in a final Parrot design is in
the extension-and-embedding facilities. Perl XS is acknowledged to be
a nasty mess. My guess is the Perl guys would drop it like a hot rock
for our stuff -- that would be as clear a win for them as co-opting
Perl-style regexps was for us.)
I think the benefits of a successful unification at the bytecode
level, together with Larry's willingness to Pythonify the upper level
of Perl 6 a bit, could be vast -- both for the Python community in
particular and for scripting-language users in general.
1. Mixed-language programming in Perl and Python could become almost
seamless, with all that implies for both languages getting the use of
each other's libraries.
2. The prospects for getting decent Python compilation to native code would
improve if both the Python and Perl communities were strongly motivated
to solve the bytecode-compilation problem.
3. More generally, the fact remains that Perl's user/developer base is
still much larger than ours. Successful unification would co-opt a
lot of that energy for Python. Because the brain drain between Perl
and Python is pretty much unidirectional in Python's favor (a fact
even the top Perl hackers ruefully acknowledge), I don't think we need
worry about being subsumed in that larger community either.
I think there is a wonderful opportunity here for the Python and Perl
developers to lead the open-source world. If we can do a good Parrot
design together, I think it will be hard for the smaller scripting
language communities to resist its pull. Ultimately, the common
Parrot runtime could become the open-source community's answer -- a
very competitive answer -- to the common runtime Microsoft is
pushing for .NET.
I think trying to make Parrot work would be worth some serious effort.
--
<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>
The Bible is not my book, and Christianity is not my religion. I could never
give assent to the long, complicated statements of Christian dogma.
-- Abraham Lincoln
> Following itojun's proposal, I have now added an autoconf test for
> snprintf, and use sprintf if it is not available.
In PySocket_getaddrinfo, would it make sense to increase the allocation
of pbuf from 10 characters to, say, 30 characters, in case
sprintf(pbuf, "%ld", PyInt_AsLong(pobj));
gets run on a 64-bit machine?
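For a quick sanity check of the worst case (sketched in Python just to
size the buffer):

    print len(str(-2L**63))   # prints 20: 19 digits plus the sign

so with the trailing NUL that's 21 characters -- 10 is indeed too
small, and 30 leaves ample margin.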
Alex.
Hello
Is there a reason the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS
don't allow an argument specifying what variable to save the state to? I
needed this myself so I wrote the following:
/* Like Py_BEGIN/END_ALLOW_THREADS, but saving the thread state into a
   caller-supplied variable instead of a hidden block-local one. */
#ifdef WITH_THREAD
# define MY_BEGIN_ALLOW_THREADS(st) \
    { st = PyEval_SaveThread(); }
# define MY_END_ALLOW_THREADS(st) \
    { PyEval_RestoreThread(st); st = NULL; }
#else
# define MY_BEGIN_ALLOW_THREADS(st)
# define MY_END_ALLOW_THREADS(st) { st = NULL; }
#endif
It works just fine but has one drawback: Whenever Py_BEGIN_ALLOW_THREADS
changes, I have to change my macros too.
Wouldn't it be reasonable to supply two sets of macros: one that allows
exactly this, and one that does what Py_BEGIN_ALLOW_THREADS currently
does?
Martin Sjögren
--
Martin Sjögren
martin(a)strakt.com ICQ : 41245059
Phone: +46 (0)31 405242 Cell: +46 (0)739 169191
GPG key: http://www.strakt.com/~martin/gpg.html
Hi guys.
Sorry i've been fairly quiet recently -- at least life isn't dull.
I wanted to put in a few words for cgitb.py for your consideration.
I think you all saw it at IPC 9 -- if you missed the presentation,
there are examples at http://www.lfw.org/python to check out.
What i'm proposing is that we toss cgitb.py into the standard library
(pretty small at about 100 lines, since all the heavy lifting is in
pydoc and inspect). Then we can add this to site.py:
if os.environ.has_key("GATEWAY_INTERFACE"):
    import sys, cgitb
    sys.excepthook = cgitb.excepthook
I think this is pretty safe, since GATEWAY_INTERFACE is guaranteed
to exist under the CGI specification and should never appear in any
other context. cgitb.py is written in paranoid fashion -- if anything
goes wrong during generation of the HTML traceback, sys.stderr still
goes to the browser; and if for some reason the page gets dumped to
a shell somewhere, the original traceback is still visible in a comment
at the end of the page.
The upside is that we *automagically* get pretty tracebacks for all
the Python CGI scripts out there, with zero effort from the CGI script
writers. I think this is a really strong hook for people getting
started with Python.
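(Even scripts that can't count on the site.py hook could opt in with
two lines, using nothing but the hook named above:

    import sys, cgitb
    sys.excepthook = cgitb.excepthook

and get the same pretty tracebacks.)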
No more "internal server error" messages followed by the annoying
task of inserting "print 'Content-Type: text/html\n\n<pre>'" into
all your scripts! As for me, i've probably done this hundreds of
times now, and would love to stop doing it.
I anticipate a possible security concern (as this shows bits of your
source code to strangers when problems happen). So i have tried to
address that by providing a SECRET flag in cgitb that causes the
tracebacks to get written to files instead of the Web browser.
Opinions and suggestions are welcomed! (I'm looking at the good
stuff that the WebWare people have done with it, and i plan to
merge in their improvements. For the HTML-heads out there in
particular, i'm looking for your thoughts on the reset() routine.)
-- ?!ng
Martin has uploaded a patch which modifies the Python API level
number depending on the setting of the compile time option
for internal Unicode width (UCS-2/UCS-4):
https://sourceforge.net/tracker/?func=detail&aid=445717&group_id=5470&atid=…
I am not sure whether this is the right way to approach this
problem, though, since it affects all extensions -- not only
ones using Unicode.
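For context: the builds differ only in the internal Py_UNICODE width,
so only extensions that touch Py_UNICODE at the C level are affected.
At the Python level the difference should be visible as something like

    import sys
    print sys.maxunicode   # 65535 (0xFFFF) on narrow builds,
                           # 1114111 (0x10FFFF) on wide builds

assuming the sys.maxunicode attribute from the UCS-4 patches.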
If at all possible, I'd prefer some other means to
handle this situation (extension developers are certainly not
going to start shipping binaries for narrow and wide Python
versions if their extension does not happen to use Unicode).
Any ideas ?
Thanks,
--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
______________________________________________________________________
Consulting & Company: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/
Just to let you know and to initiate some cross-platform
testing:
While working on the warning patch for modsupport.c,
I've added two new APIs which hopefully make it easier for Python
to switch to buffer overflow safe [v]snprintf() APIs for error
reporting et al.
The two new APIs are PyOS_snprintf() and
PyOS_vsnprintf(); they work just like the standard ones in many
C libs. On platforms which have snprintf(), the native APIs are used;
on all others, an emulation of snprintf() tries to do its best.
Please try them out on your platform. If all goes well, I think
we should replace all sprintf() (without the n in the name)
with these new safer APIs.
Thanks,
--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
______________________________________________________________________
Company & Consulting: http://www.egenix.com/
Python Software: http://www.lemburg.com/python/
PEP: 2XX
Title: Adding a Decimal type to Python
Version: $Revision:$
Author: mclay(a)nist.gov <mclay(a)nist.gov>
Status: Draft
Type: ??
Created: 25-Jul-2001
Python-Version: 2.2
Introduction
This PEP describes the addition of a decimal number type to Python.
Rationale
The original Python numerical model included int, float, and long.
By popular request the imaginary type was added to improve support
for engineering and scientific applications. The addition of a
decimal number type to Python will improve support for business
applications as well as improve the utility of Python as a teaching
language.
The number types currently used in Python are encoded as base two
binary numbers. The base 2 arithmetic used by binary numbers closely
approximates the decimal number system, and for many applications the
differences in the calculations are unimportant. The decimal number
type encodes numbers as decimal digits and uses base 10 arithmetic.
This is the number system taught to the general public, and it is the
system used by businesses when making financial calculations.
For financial and accounting applications the difference between
binary and decimal types is significant. Consequently the computer
languages used for business application development, such as COBOL,
use decimal types.
The decimal number type meets the expectations of non-computer
scientists when making calculations. For these users the rounding
errors that occur when using binary numbers are a source of confusion
and irritation.
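The kind of surprise at issue is easy to demonstrate with today's
binary floats:

    # 0.1 has no exact base-2 representation, so repeated addition
    # drifts away from the exact decimal result.
    total = 0.0
    for i in range(10):
        total = total + 0.1
    print total == 1.0     # prints 0 (false)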
Implementation
The tokenizer will be modified to recognize number literals with
a 'd' suffix, and a decimal() function will be added to __builtins__.
A decimal number can be used to represent both integers and floating
point numbers, and decimal numbers can also be displayed using
scientific notation. Examples of decimal numbers include:
1234d
-1234d
1234.56d
-1234.56d
1234.56e2d
-1234.56e-2d
The type returned by either a decimal floating point or a decimal
integer is the same:
>>> type(12.2d)
<type 'decimal'>
>>> type(12d)
<type 'decimal'>
>>> type(-12d+12d)
<type 'decimal'>
>>> type(12d+12.0d)
<type 'decimal'>
This proposal will also add an optional 'b' suffix to the
representation of binary float type literals and binary int type
literals.
>>> float(12b)
12.0
>>> type(12.2b)
<type 'float'>
>>> type(float(12b))
<type 'float'>
>>> type(12b)
<type 'int'>
The decimal() conversion function added to __builtins__ will support
conversion of strings and binary types to decimal.
>>> type(decimal("12d"))
<type 'decimal'>
>>> type(decimal("12"))
<type 'decimal'>
>>> type(decimal(12b))
<type 'decimal'>
>>> type(decimal(12.0b))
<type 'decimal'>
>>> type(decimal(123456789123L))
<type 'decimal'>
The conversion functions int() and float() in the __builtin__ module
will support conversion of decimal numbers to the binary number
types.
>>> type(int(12d))
<type 'int'>
>>> type(float(12.0d))
<type 'float'>
Expressions that mix integers with decimals will automatically convert
the integer to decimal, and the result will be a decimal number.
>>> type(12d + 4b)
<type 'decimal'>
>>> type(12b + 4d)
<type 'decimal'>
>>> type(12d + len('abc'))
<type 'decimal'>
>>> 3d/4b
0.75
Expressions that mix binary floats with decimals introduce the
possibility of unexpected results because the two number types use
different internal representations for the same numerical value. The
severity of this problem depends on the application domain. For
applications that normally use binary numbers the error may not be
important, and the conversion should be done silently. For newbie
programmers a warning should be issued so the newbie will be able to
locate the source of a discrepancy between the expected results and
the results that were achieved. For financial applications the mixing
of decimal numbers with binary floats should raise an exception.
To accommodate the three possible usage models, Python interpreter
command line options will be used to set the level for warning and
error messages. The three levels are:

promiscuous mode, -f or --promiscuous
safe mode, -s or --safe
pedantic mode, -p or --pedantic
The default will be the safe setting. In safe mode,
mixing decimal and binary floats in a calculation will trigger a warning
message.
>>> type(12.3d + 12.2b)
Warning: the calculation mixes decimal numbers with binary floats
<type 'decimal'>
In promiscuous mode warnings will be turned off.
>>> type(12.3d + 12.2b)
<type 'decimal'>
In pedantic mode the warnings issued in safe mode will be turned into exceptions.
>>> type(12.3d + 12.2b)
Traceback (innermost last):
File "<stdin>", line 1, in ?
TypeError: the calculation mixes decimal numbers with binary floats
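One plausible implementation route (just a sketch; the warning
category name below is hypothetical) is to issue an ordinary Python
warning for mixed arithmetic and let the chosen mode set the filter:

    import warnings

    class DecimalMixWarning(UserWarning):    # hypothetical category
        pass

    # safe mode (default): report mixed decimal/binary arithmetic
    warnings.filterwarnings("default", category=DecimalMixWarning)
    # promiscuous mode: silence the warning entirely
    warnings.filterwarnings("ignore", category=DecimalMixWarning)
    # pedantic mode: escalate the warning into an exception
    warnings.filterwarnings("error", category=DecimalMixWarning)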
Semantics of Decimal Numbers
??
Samuele Pedroni <pedroni(a)inf.ethz.ch> writes:
> ...
> > > >
> > > > Does codeop currently work in Jython? The solution should continue to
> > > > work in Jython then.
> > > We have our interface compatible version of codeop that works.
> >
> > Would implementing the new interfaces I sketched out for codeop.py be
> > possible in Jython? That's the bit I care about, not so much the
> > interface to __builtin__.compile.
> Yes, it's possible.
Good; hopefully we can get somewhere then.
> > > > Does Jython support the same flag bit values as
> > > > CPython? If not, Paul Prescod's suggestion to use keyword arguments
> > > > becomes very relevant.
> > > we support a subset of the co_flags; CO_NESTED, e.g., is there with the same
> > > value.
> > >
> > > But the embedding API is very different; my implementation of nested
> > > scopes does not define any Py_CF... flags. We have an internal CompilerFlags
> > > object, but it is more similar to PyFutureFeatures ...
> >
> > Is this object exposed to Python code at all?
> Not publicly, but in Jython the separating line is a bit different,
> because public Java classes are always accessible from Jython,
> even most of the internals. That does not mean that every use of them
> is welcome and supported.
Ah, of course. I'd forgotten how cool Jython was in some ways.
> > One approach would be
> > PyObject-izing PyFutureFlags and making *that* the fourth argument to
> > compile...
> >
> > class Compiler:
> >     def __init__(self):
> >         self.ff = ff.new() # or whatever
> >     def __call__(self, source, filename, start_symbol):
> >         code = compile(source, filename, start_symbol, self.ff)
> >         self.ff.merge(code.co_flags)
> >         return code
> I see; "internally" we already have a compiler_flags function
> that does the same as:
> > code = compile(source, filename, start_symbol, self.ff)
> > self.ff.merge(code.co_flags)
>
> where self.ff is a CompilerFlags object.
>
> I can re-arrange things for any interface,
Well, I don't want to make more work for you - I imagine Guido's doing
enough of that for two!
> I was only trying to explain our approach and situation and a
> possible way to avoid duplicating some internal code in Python.
Can you point me to the code in CVS that implements this sort of
thing? I don't really know Java but I can probably muddle through to
some extent. We might as well have CPython copy Jython for once...
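For the CPython side, here is a self-contained sketch of the same idea
(assuming the four-argument compile() discussed above plus its
dont_inherit argument, and the __future__ module's metadata; the class
name just mirrors the sketch quoted earlier):

    import __future__

    class Compiler:
        def __init__(self):
            self.flags = 0
        def __call__(self, source, filename, start_symbol):
            # dont_inherit=1: only our accumulated flags apply
            code = compile(source, filename, start_symbol,
                           self.flags, 1)
            # merge any future statements the source itself contained,
            # so they stay in effect for later inputs
            for name in __future__.all_feature_names:
                flag = getattr(__future__, name).compiler_flag
                if code.co_flags & flag:
                    self.flags = self.flags | flag
            return code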
Cheers,
M.
--
On the other hand, the following areas are subject to boycott
in reaction to the rampant impurity of design or execution, as
determined after a period of study, in no particular order:
... http://www.naggum.no/profile.html