Just for the record, I never called them bills just because they contain a
dollar sign! I was thinking of a short word which describes an expression
whose evaluation changes depending on the local circumstances. Like a legal
bill, not a dollar bill.
The name macro doesn't really work, as it's only an expression; it's not a
full macro system.
The idea was never to start passing complex expressions around the place.
It was just to allow a function to take an expression containing about one
or two names, and evaluate the expression internally, so this...
func(lambda foo: foo > 0)
...becomes this...
func($foo > 0)
Other uses are there, and might be helpful from time to time, but it's
mainly about cleaning up APIs. I personally work with user facing Python
APIs all day, and users are generally programming interactively, so perhaps
my take is atypical.
What the hell is a thunk anyway? It's a horrible name.
On the jQuery like syntax, $(foo), that might work and might do away with
the issue of expressions containing bills always evaluating to bills. I
took that too far. Good point.
Starting new thread because this bike has a different shape and color.
Yesterday I was thinking that just making the keyword lambda assignable
like True, False, and None, would be enough. But the issue with that is
lambda isn't a name bound to an actual object or type. That was the seed for
this idea. How to get lambda like functionality into some sort of object
that would be easy to use and explain.
This morning I thought we could have in a function's definition something,
like "*" and "**", to take an expression. Similar to Nick's idea with =:,
but more general.
The idea is to have "***" used in a def mean "take any call expression
and don't evaluate it until *** is used on it", i.e. the same rules as "*":
used in a def it packs, and used outside a def it unpacks. So "***" in a
def stores the call expression at call time, and "***" used later
evaluates it.
A function call that captures an expression may be tricky to do. Here's one
approach that requires some sugar when a function defined with "***" is
called:
class TriStar:
    def __init__(self, expr):
        """expr is a callable that takes no arguments."""
        self.expr = expr
    # ***obj --> result
(Any other suggestions for how to do this would be good.)
And at call time....
fn(...) --> fn(TriStar(expr=lambda:...))
So presuming we can do something like the above, the first case is ...
def star_fn(***expr): return ***expr
... = star_fn(...)
Which is a function that just returns whatever its input is, and is even
more general than using *args, **kwds.
The call signature stored in expr isn't evaluated until it's returned with
***expr. So the evaluation is delayed, or lazy, but it's still explicit
and very easy to read.
This returns a lambda-like function.
def star_lambda(***expr): return expr
And is used this way...
result = star_lambda(a * b + c) # captures expression.
actual_result = ***result # *** resolves "result" here!
The resolution is done with ***name, rather than name().
That's actually very good because it can pass through callable tests. So
you can safely pass callable objects around without them getting called at
the wrong time or place.
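Since "***" isn't real syntax, here is a rough sketch of the intended
behaviour in today's Python, with a method call standing in for the ***
resolution (the TriStar class name comes from the desugaring above; the
resolve method is invented for illustration):

```python
class TriStar:
    """Wraps a delayed expression, standing in for the proposed *** packing."""
    def __init__(self, expr):
        # expr is a zero-argument callable holding the captured expression
        self.expr = expr

    def resolve(self):
        # stands in for the proposed ***obj resolution
        return self.expr()

def star_lambda(expr):
    # returns the captured expression without evaluating it
    return expr

a, b, c = 2, 3, 4
result = star_lambda(TriStar(expr=lambda: a * b + c))  # captures expression
actual_result = result.resolve()  # resolves "result" here
```

The point of the proposal is that the wrapping and the .resolve() call
would be spelled by the compiler, not by the user.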
We can shorten the name because star_lambda is just a function.
L = star_lambda
To me this is an exceptionally clean solution. Easy to use, and not too
hard to explain. Seems a lot more like a python solution to me as well.
Hoping it doesn't get shot down too quickly,
The tentatively proposed idea here is using dollar signed expressions to
define 'bills'. A bill object is essentially an expression which can be
evaluated any number of times, potentially in different scopes.
The following expression [a bill literal] would be pointless, but would
define a bill that always evaluates to 1.
a = $1
Some better examples...
* assign a bill to `a` so that `a` will evaluate to the value of the name
`foo` any time that `a` is evaluated, in the scope of that evaluation
a = $foo
* as above, but always plus one
a = $foo + 1
* make `a` a bill that evaluates to the value of the name `foo` at the time
that `a` is evaluated, in that scope, plus the value of `bar` **at the time
and in the scope of the assignment to `a`**
a = $foo + bar
Note. Similarly to mixing floats with ints, any expression that contains a
bill evaluates to a bill, so if `a` is a bill, `b=a+1` makes `b` a bill
too. Passing a bill to eval should be the obvious way to get the value.
The point? It allows functions to accept bills to use internally. The
function would specify any names the bill can reference in the function's
API, like keywords.
def f(b):  # the bill arg `b` can reference `item`
    for item in something:
        if eval(b): return True

f($item < 0)
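Absent real $ syntax, the nearest spelling today is passing a pre-compiled
expression object and evaluating it against each loop variable (the names
f, bill, and the sample list are invented for illustration):

```python
def f(b, something):
    # the bill arg `b` can reference `item`
    for item in something:
        if eval(b, {"item": item}):
            return True
    return False

# stands in for f($item < 0)
bill = compile("item < 0", "<bill>", "eval")
f(bill, [3, 1, -2])  # → True
```

The difference the proposal makes is that the bill would be written as a
plain expression at the call site, checked by the compiler, rather than
hidden inside a string.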
You could also use a function call, for example `$foo()` would evaluate to
a bill that evaluates to a call to `foo` in the scope and at the time of
any evaluation of the bill.
I've no idea if this is even possible in Python, and have no hope of
implementing it, but thought I'd share :)
Python3.3 Decimal Library v0.3 is Released here:
pdeclib.py is the decimal library, pilib.py is the PI library.
pdeclib.py provides scientific and transcendental functions
for the C Accelerated Decimal module written by Stefan Krah. The
library is open source, GPLv3, comprised of two py files.
My idea for python is to do two things really, 1) make floating point
decimal the default floating point type in python4.x, and 2) make
these functions ( pdeclib.py ) or equiv available in python4.x by default.
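For context, the C-accelerated decimal module is already the default
implementation of the stdlib decimal module in 3.3+, and arbitrary-precision
arithmetic works today; a quick sketch:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits
root2 = Decimal(2).sqrt()
# the leading digits of sqrt(2) at this precision
assert str(root2).startswith("1.41421356237309")
```

What the stdlib lacks, and what pdeclib aims to supply, are the scientific
and transcendental functions on top of this.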
Thank you for your consideration.
Mark H. Harris
On Tue, Mar 4, 2014 at 8:51 AM, Simon Kennedy <sffjunkie(a)gmail.com> wrote:
> On Monday, 3 March 2014 18:55:17 UTC, Ziad Sawalha wrote:
>> Thanks, Guido.
>> I'll follow up with updates to common tools as I come across them (ex.
>> pep257: https://github.com/GreenSteam/pep257/pull/64).
> The footnote's still in the PEP text.
--Guido van Rossum (python.org/~guido)
PEP-257 includes this recommendation:
“The BDFL recommends inserting a blank line between the last paragraph in a multi-line
docstring and its closing quotes, placing the closing quotes on a line by themselves. This way,
Emacs' fill-paragraph command can be used on it.”
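Concretely, the layout the PEP currently recommends looks like this (the
function here is invented for illustration):

```python
def frobnicate(widget):
    """Frobnicate the widget in place.

    A longer description would go here, possibly spanning
    several paragraphs.

    """
```

The blank line before the closing quotes is the part under discussion.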
I believe emacs no longer has this limitation. "If you do fill-paragraph in emacs in Python mode
within a docstring, emacs already ignores the closing triple-quote. In fact, the most recent version
of emacs supports several different docstring formatting styles and gives you the ability to switch
between them.” - quoting Kevin L. Mitchell who is more familiar with emacs than I am.
I’m considering removing that recommendation and updating some of the examples in PEP-257,
but I’d like some thoughts from this group before I submit the patch. Any thoughts or references to
conversations that may have already been had on this topic?
The most quoted reason for staying with python 2 is that some required
library is not available to support python 3. All it takes is one legacy
library module in python 2 to keep an entire project in python 2.
Many significant projects (e.g web2py) continue in python 2 for this
reason. Even projects that produce code that can work under either python 2
or python 3, are themselves trapped into only using python 2 features.
This is really holding back python!
You cannot just import python2 code into the python 3 interpreter, as it is
not compatible.
But could this be solved by a new approach: treat python 2 as another
language, and instead of importing the code directly, handle python 2 code
with an extension module which acts as a bridge between versions?
I mean, a python 3 program can access C/C++ code (which is not python 3),
so why can't it access python 2 code by treating it like C/C++?
This would require an extension C module called from python 3. How to do
this is well known. This extension C module would itself call the required
module library(s) from python2. Calling python from C is also a well known
technique. Python can call C, C can call python, so python3 calls C, which
calls python2.
So in python3
import <python2-module>  # this usually won't work because of language changes
becomes
<python2-module> = python2.import('<python2-module>')
The resultant imported module would appear as a C extension to python3,
and in effect be a wrapper for the python2 module, using the python2
extension to execute the code.
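The wrapper idea can be prototyped without any C at all by running the
second interpreter as a subprocess and marshalling arguments as JSON
(remote_call and this argument protocol are invented for illustration; the
usage comment assumes a python2 binary on the path, and any real bridge
would keep the child process alive rather than respawn it per call):

```python
import json
import subprocess
import sys

def remote_call(interpreter, module, func, *args):
    """Run module.func(*args) inside `interpreter` and return the result.

    Arguments and the return value must be JSON-serialisable; this plays
    the role the proposed python2.import() wrapper would fill.
    """
    prog = (
        "import importlib, json, sys\n"
        "mod = importlib.import_module(sys.argv[1])\n"
        "args = json.loads(sys.argv[2])\n"
        "print(json.dumps(getattr(mod, sys.argv[3])(*args)))\n"
    )
    out = subprocess.check_output(
        [interpreter, "-c", prog, module, json.dumps(list(args)), func])
    return json.loads(out)

# e.g. remote_call("python2", "math", "floor", 3.7)
```

The real proposal would hide this behind an import-like spelling, but the
overhead profile (process boundary, serialised arguments) is the same kind
of cost discussed below.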
This would mean programs using python2 imports have the overhead of both
interpreters running in memory, and calls across the python3/python2
boundary are 'wrapped' as C calls that then activate the required code
within python 2. In reality quite a small overhead on modern computers.
Something like setting python2.sys.path (within the 'python2' module)
or similar would be needed for the separate import space running inside
the python2 boundary.
Callbacks back across the python3/python2 boundary back into python3 are a
further complication but several possible solutions exist to deliver this.
Not many library modules require callback functions to the library module
itself, so this is not the central issue. There is some work for the person
coding the python 3 app in terms of a different syntax for legacy module
imports and potentially setting python 2 environment values (like
python2.sys.path), but not a lot, and the python3 code is the new program
being worked on, as opposed to the legacy library, which can remain untouched.
I think this type of approach could change the recommendation on 'python 2
or python 3' to a simple 'get the latest available to you and here is how
to use legacy libraries if you have the need'.
Thoughts or criticisms?
Stephen J. Turnbull wrote:
> Vernon D. Cole writes:
>> I cannot compile a Python extension module with any Microsoft compiler
>> I can obtain.
> Your pain is understood, but it's not simple to address it.
FWIW, I'm working on making the compiler easily obtainable. The VS 2008 link that was posted is unofficial, and could theoretically disappear at any time (I'm not in control of that), but the Windows SDK for Windows 7 and .NET 3.5 SP1 (http://www.microsoft.com/en-us/download/details.aspx?id=3138) should be around for as long as Windows 7 is supported. The correct compiler (VC9) is included in this SDK, but unfortunately does not install the vcvarsall.bat file that distutils expects. (Though it's pretty simple to add one that will switch on %1 and call the correct vcvars(86|64|...).bat.)
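As a sketch of that shim, a minimal vcvarsall.bat switching on %1 might
look like this (the vcvars file names and relative paths are assumptions
that would need checking against the actual SDK install):

```bat
@echo off
rem Hypothetical shim: distutils invokes "vcvarsall.bat x86" or "vcvarsall.bat amd64"
if /i "%1"=="x86"   call "%~dp0bin\vcvars32.bat"
if /i "%1"=="amd64" call "%~dp0bin\vcvars64.bat"
```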
The SDK needed for Python 3.3 and 3.4 (VC10) is even worse - there are many files missing. I'm hoping we'll be able to set up some sort of downloadable package/tool that will fix this. While we'd obviously love to move CPython onto our latest compilers, it's simply not possible (for good reason). Python 3.4 is presumably locked to VC10, but hopefully 3.5 will be able to use whichever version is current when that decision is made.
> The basic problem is that the ABI changes. Therefore it's going to require
> a complete new set of *all* C extensions for Windows, and the duplication
> of download links for all those extensions from quite a few different vendors
> is likely to confuse a lot of users.
Specifically, the CRT changes. The CRT is an interesting mess of data structures that are exposed in header files, which means while you can have multiple CRTs loaded, they cannot touch each other's data structures at all or things will go bad/crash, and there's no nice way to set it up to avoid this (my colleague who currently owns MSVCRT suggested a not-very-nice way to do it, but I don't think it's going to be reliable enough). Python's stable ABI helps, but does not solve this problem.
The file APIs are the worst culprits. The layout of FILE* objects can and does change between CRT versions, and file descriptors are simply indices into an array of these objects that is exposed through macros rather than function calls. As a result, you cannot mix either FILE pointers or file descriptors between CRTs. The only safe option is to build with the matching CRT, and for MSVCRT, this means with the matching compiler. It's unfortunate, and the responsible teams are well aware of the limitation, but it's history at this point, so we have no choice but to work with it.