On Tuesday, December 27, 2011 10:53:56 PM UTC+3, RunThePun wrote:
>
> On Tue, Dec 27, 2011 at 7:01 PM, anatoly techtonik <tech...(a)gmail.com>wrote:
>
>> As you may know, the python-ideas list is open only to subscribers.
>> This is inconvenient, because:
>> 1. it requires a three-step subscription process
>> 2. it is impossible to post a reply to an existing thread/idea
>>
>> There is a web interface in Google Groups at
>> https://groups.google.com/forum/#!forum/python-ideas that
>> can solve the problems above and provide some more nifty features such as
>> embedded search. But there is another problem: messages posted
>> through the group don't end up on the list, because the list requires
>> subscription. I've already tried to find a solution, but ran out of time,
>> so I summarized the proposal at
>> http://wiki.python.org/moin/MailmanWithGoogleGroups
>>
>> I may or may not be able to publish the outcomes of my research, so it
>> would be nice to get some help in investigating the problem and publishing
>> a solution on the aforementioned wiki page. Thanks.
>>
>>
>>
>
> Concerning the search problem, I've used Google queries such as:
>
> list comprehensions site:
> http://mail.python.org/pipermail/python-ideas/
>
> I agree that having a "nosey" or "star" feature like issue trackers have
> could be nice, though I'm not sure Google Groups is the most modern
> infrastructure to solve all our problems. I remember hearing that open
> Google Groups get a lot of spam, for example.
>
Over the last six months I have found only 3 spam messages sent from
https://groups.google.com/forum/#!forum/python-ideas and for some reason I
think that my mailbox filter would be smart enough to put them into the
appropriate folder even if they came from Mailman.
Maybe Mailman can be improved?
> e.g. would it help if the PyPI login cookie allowed you to post on
> Mailman? If Mailman allowed starring threads?
>
Certainly. But the threads should be stacked in a different order than just
by month, because some threads can span several months.
Since this is a problem which occurs very often, I'd like to hear your
opinion as to whether something like this might find a place in the
Python stdlib (the signal module, maybe?).
Please read the discussion included in this recipe:
http://code.activestate.com/recipes/577997/
It should provide a description of the problem and the general use case.
Thanks in advance for your comments,
--- Giampaolo
http://code.google.com/p/pyftpdlib/
http://code.google.com/p/psutil/
Pythons:
What are your thoughts on the concept of a `defaultattrgetter`? It
would be to operator.attrgetter what getattr(foo, x, default) is to
getattr(foo, x). I don't like that attrgetter requires the attribute to
exist, or else that the getter be wrapped in a try/except, given how
defaultdict solves roughly the same problem for dictionary keys.
The semantics could be something such as:
from operator import defaultattrgetter
_x = defaultattrgetter({'x': 0})
_y = defaultattrgetter({'y': 1})
- or -
_xy = defaultattrgetter({'x': 0, 'y': 1})
One use case I am thinking of is functions that may be decorated with
attributes x and/or y. Obviously a Python implementation of
defaultattrgetter would be trivial, but one of the benefits of the
operator functions is their speed. It also seems like it would
fit in well with the rest of the operator module.
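For concreteness, a minimal pure-Python sketch of those semantics
(illustrative only; a real version would presumably be implemented in C
like the rest of operator):

def defaultattrgetter(defaults):
    """Like operator.attrgetter, but with a per-attribute fallback.

    `defaults` maps attribute names to fallback values, e.g. {'x': 0}.
    Returns a single value for one attribute, a tuple for several
    (in the order the dict yields them).
    """
    items = tuple(defaults.items())

    def getter(obj):
        values = tuple(getattr(obj, name, default)
                       for name, default in items)
        return values[0] if len(values) == 1 else values

    return getter

def f():
    pass

_x = defaultattrgetter({'x': 0})
print(_x(f))  # -> 0, since plain functions have no 'x' attribute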
Generally speaking, would anyone else have a use for this?
- John
I believe it would be a good idea, in instances where a collection of a
single type is known to be returned, to return a subclass
with type information and type-specific methods "mixed in". You could
provide member methods as collection methods that operate in a
vectorized manner, returning a new collection or iterator with the results,
much like the mathematical functions in NumPy. This would also give
people a reliable method to make functions operate on both scalar and
vector values. I believe this could be implemented, without needing
subclasses for everything under the sun, with a generic collection "type
contract" mix-in. If a developer wanted to provide additional type-specific
collection/iterator methods, they would of course need to subclass that.
To avoid handcuffing people with types (which is definitely un-pythonic)
and to maintain backwards compatibility, the standard collection
mutation methods could be hooked so that if an object of an
incorrect type is added, a warning is issued and the collection
gracefully degrades by removing the mixed-in type information and
methods. Additionally, a method could be provided that lets the user
"terminate the contract", causing the collection to degrade without a
warning.
I have several motivations for this:
-- Performing a series of operations using comprehensions or map
tends to be highly verbose in an uninformative way. Compare the
current method with what would be possible using "typed" collections:
L2 = [X(e) for e in L1]
L3 = [Y(e) for e in L2]
vs
L2 = X(L1) # assuming X has been updated to work in both vector/scalar
L3 = Y(L2) # context...
L2 = [Z(Y(X(e))) for e in L1]
vs
L2 = Z(Y(X(L1)))
L2 = [e.X().Y().Z() for e in L1]
vs
L2 = L1.X().Y().Z() # assuming vectorized versions of member methods
#are folded into the collection via the mixin.
-- Because collections are type agnostic, it is not possible to place
methods on them that are type specific. This leads to a lot of cases
where Python forces you to read inside out, or where the syntax gets
very disjointed in general. A good example of this is:
"\n".join(l.capitalize() for l in my_string.split("\n"))
which could reduce to something far more readable, such as:
my_string.split("\n").capitalize().join_items("\n")
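Something close to that chaining style can already be prototyped with a
forwarding subclass; a rough sketch (the name VectorList and its behavior
are invented here, purely to illustrate the idea):

class VectorList(list):
    """A list that forwards unknown method calls to its elements,
    collecting the results in a new VectorList."""

    def __getattr__(self, name):
        def vectorized(*args, **kwargs):
            return VectorList(getattr(item, name)(*args, **kwargs)
                              for item in self)
        return vectorized

lines = VectorList("first line\nsecond line".split("\n"))
print("\n".join(lines.capitalize()))  # First line / Second line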
Besides the benefits to basic language usability (in my opinion) there
are tool and development benefits:
-- The additional type information would simplify static analysis and
provide cues for optimization (I'm looking at pypy here; their list
strategies play to this perfectly)
-- The warning on "violating the contract" without first terminating
it would be a helpful tool in catching and debugging errors.
I have some thoughts on syntax and specifics that I think would work well;
however, I wanted to solicit more feedback before I go too far down that path.
Nathan
Hi all,
Recently I have been looking at the module "cmd", and I found something confusing: the function named "columnize". Why do we need a nested loop such as
"for nrows ..
for col ..
for row.."
? I think we could use an easier method: for example, first find the longest string in the list, and use its length as the standard column width to format the list.
OK, maybe I am overlooking something, so please give me some hints.
---
thanks
tom
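(For reference, the simpler fixed-width approach described above might look
roughly like the sketch below; cmd's columnize instead searches for the
smallest number of rows whose individual column widths still fit the
display, which packs the output tighter than one uniform width.)

def columnize_simple(strings, displaywidth=80):
    """Naive layout: make every column as wide as the longest string."""
    if not strings:
        return
    colwidth = max(len(s) for s in strings) + 2  # padding between columns
    percol = max(1, displaywidth // colwidth)    # entries per output row
    for i in range(0, len(strings), percol):
        print("".join(s.ljust(colwidth) for s in strings[i:i + percol]))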
>> L2 = [X(e) for e in L1]
>> L3 = [Y(e) for e in L2]
>> vs
>> L2 = X(L1) # assuming X has been updated to work in both vector/scalar
>> L3 = Y(L2) # context...
>
> L = ['a', 'bc', ['ada', 'a']]
> What is len(L)? 3 or [1, 2, 2] or [1, 2, [3, 1]]?
>
>> L2 = [Z(Y(X(e))) for e in L1]
>> vs
>> L2 = Z(Y(X(L1)))
>>
>> L2 = [e.X().Y().Z() for e in L1]
>> vs
>> L2 = L1.X().Y().Z() # assuming vectorized versions of member methods
>> #are folded into the collection via the mixin.
>
> What is L.count('a')? 1 or [1, 0, 1] or [1, 0, [2, 1]]?
A fair concern; if the vectorized version of the child method were
given the same name as the child method, I agree that this could
result in ambiguity.
There are multiple ways that member methods could be made available on
the collection, including a proxy attribute, renaming, etc.
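For instance, a proxy attribute would keep the collection's own methods
unambiguous; a hypothetical sketch (the names each/EachList are invented):

class _Each:
    """Proxy that maps a method call over the elements of a list."""
    def __init__(self, items):
        self._items = items
    def __getattr__(self, name):
        def call(*args, **kwargs):
            return [getattr(item, name)(*args, **kwargs)
                    for item in self._items]
        return call

class EachList(list):
    @property
    def each(self):
        return _Each(self)

L = EachList(['a', 'bc', 'a'])
print(L.count('a'))       # 2 -- the ordinary list method
print(L.each.count('a'))  # [1, 0, 1] -- mapped over the elements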
...
>> I find
>> using operator & functools _far_ clearer in intent than using lambda,
>>
>> _and it works right now_, which was the point I was trying to make
>> here.
>
>
> I find using list comprehensions and generator expressions even
> clearer.
In general I think comprehensions are superior to map because they are
more intuitive. Based on my experience reading code in the community
I think that is well supported.
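A trivial illustration of the two styles being compared (my own example,
not taken from the thread above):

from functools import partial
from operator import add

nums = [1, 2, 3]
# operator & functools style:
incremented = list(map(partial(add, 1), nums))
# comprehension style:
incremented = [n + 1 for n in nums]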
Sadly, even though I have issues with the way they are used in many
cases, lambdas are superior to a lot of other options because they are
simple and cover many use cases. When used inline with simple
expressions they provide a lot of mileage.
Hello,
while I was implementing a connection pool, I noticed a pitfall of our
beloved with statements:

with pool.get_connection() as conn:
    conn.execute(...)
conn.execute(...)  # the connection has been returned to the pool and does
                   # not belong to the user!

A proposed solution:

with protected_context(pool.get_connection()) as conn:
    conn.execute(...)
conn.execute(...)  # raises OutOfContextError()

with protected_context(file("/tmp/bla.txt", "w")) as f:
    f.write("blo")
f.write("blu")  # raises OutOfContextError()

The solution is basically to proxy all methods to the real object until the
context ends, and then the proxy expires.
What do you think about adding it to contextlib?
Thanks, Alon Horev
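A rough sketch of how such an expiring proxy could work (hypothetical;
neither protected_context nor OutOfContextError exists in contextlib, and a
real version would also need to proxy dunder methods):

import contextlib

class OutOfContextError(Exception):
    pass

class _ExpiringProxy:
    """Forwards attribute access to a target until expired."""
    def __init__(self, target):
        object.__setattr__(self, '_target', target)
        object.__setattr__(self, '_expired', False)

    def __getattr__(self, name):
        if object.__getattribute__(self, '_expired'):
            raise OutOfContextError(name)
        return getattr(object.__getattribute__(self, '_target'), name)

@contextlib.contextmanager
def protected_context(cm):
    with cm as obj:
        proxy = _ExpiringProxy(obj)
        try:
            yield proxy
        finally:
            object.__setattr__(proxy, '_expired', True)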
Although I love Python, there are some aspects of the language design which
are disappointing and which can even lead to problems in some cases.
A classic example is a mutable default argument having the potential to
produce unexpected side effects, as a consequence of the non-intuitive
rule that default values are evaluated only once, at function definition time.
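(The classic demonstration of that pitfall:)

def append_to(item, bucket=[]):   # the default list is created once,
    bucket.append(item)           # when the def statement runs
    return bucket

print(append_to(1))  # [1]
print(append_to(2))  # [1, 2] -- the same list is reused across calls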
Another awkward 'feature' is the requirement for a trailing comma in
singleton tuples, due, I believe, to the use of expression parentheses
rather than (say) special brackets like chevrons.
Something that I personally wish for is the ability to declare variable
types 'up front', but that facility is missing from Python.
This is an important issue, so I propose that the Python tutorial be
updated to highlight such problems. I would be willing to write a draft
section myself but obviously it would need to be reviewed.
I am not sure if this is the appropriate place to make such a comment but
it seems to be a good starting point. Any advice on making a more formal
proposal would be welcome.
Cheers,
Richard Prosser
PS Is it too late to fix such warts in version 3?
Twice recently I've found myself wanting to write the following code:
def fn(a_file=None):
    responsible_for_closing = False
    if a_file is None:
        a_file = open(a_default_location)
        responsible_for_closing = True
    do_stuff(a_file)
    if responsible_for_closing:
        a_file.close()
which can be written slightly shorter I know, but it's still a tiny bit
messy and repetitive. What I'd prefer to write is something more like:
def fn(a_file=None):
    with contextlib.maybe(a_file, open, a_default_location) as a_file:
        do_stuff(a_file)
where `maybe` takes an object and conditionally runs a context manager
if a check fails. Implementation would be:
@contextlib.contextmanager
def maybe(got, contextfactory, *args, checkif=bool, **kwargs):
    if checkif(got):
        yield got
    else:
        with contextfactory(*args, **kwargs) as got:
            yield got
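(A quick illustration of both paths through that implementation;
io.StringIO is just a stand-in for any object usable as a context manager:)

import io

buf = io.StringIO()
with maybe(buf, io.StringIO) as f:
    f.write("hello")      # buf was provided, so it is used as-is
print(buf.closed)         # False -- the caller still owns buf

with maybe(None, io.StringIO) as f:
    f.write("hello")      # nothing provided, so the factory runs
print(f.closed)           # True -- the context manager closed it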
It's hard to gauge utility for such simple functions (though contextlib
already has closing(), so I figured it'd be worth asking at least).
Would this be useful to others? Or perhaps I'm completely missing
something, and you've got suggestions for a better API where an
argument is fetched if not provided, but fetching it preferably
requires running a context manager.
Hi,
in IDL/GDL, if I have an array
x = [1,2,3]
I can simply construct a new array by
y = [0,x]
which would give
[0,1,2,3]
I know you can handle this case with
x[0:0] = [0]
but obviously more complicated cases are conceivable, and common for me, in fact:
x=[1,2,3]
y=[5,6,7]
z=[0,x,4,y,8]
result
[0,1,2,3,4,5,6,7,8]
or maybe even
z=[0,x[0:2],5]
and so forth.
On the other hand, if you have a function, you can pass its list of
arguments on to another function:
def f(*args):
    g(*args)
which expands the list args. So, would it be possible - or even
reasonable - to have something similar for lists and arrays, maybe
dictionaries, in comprehensions as well?
x = [1,2,3]
y = [0,*x]
to obtain [0,1,2,3]
or similarly for tuples
x = (1,2,3)
y = (0,*x)
to obtain (0,1,2,3)
(for dictionaries, which are not sorted, the update method seems fine)
Or is there a similarly compact and obvious way of doing this that I
did not yet discover?
-Alexander
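(For reference, the closest current spellings seem to be concatenation or
itertools.chain; a quick illustration:)

from itertools import chain

x = [1, 2, 3]
y = [5, 6, 7]

# Concatenation works, but gets noisy:
z = [0] + x + [4] + y + [8]           # [0, 1, 2, 3, 4, 5, 6, 7, 8]

# itertools.chain avoids the intermediate lists:
z = list(chain([0], x, [4], y, [8]))  # same result
t = tuple(chain((0,), (1, 2, 3)))     # works for tuples too: (0, 1, 2, 3)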