As an alternative to the recently-proposed "except expression", I suggested this in its thread. I was recommended to post this separately because, while related, it is different enough from the original idea.
The general idea is to extend the context manager protocol so that it can produce a return value, as an alternative to simply allowing or suppressing the propagation of an exception, and to introduce an inline form of "with".
The motivation behind this is, in all honesty, that I find the except-expressions suggested in PEP 463 too verbose to inline, and that context managers have a much broader reach. The motivation from the aforementioned PEP (inline evaluation to a "default" expression upon catching an exception) also applies here.
It could look a bit like this:
contexted_expr with context_manager as c
Once more, the rationale from PEP 463 also applies here, in that we can shift "easy defaulting" from being a potential concern for the callee to being always available to the caller through new syntax. Likewise, in-lining the use of a context manager is currently possible, but only by manually encapsulating what would be the context-ed code in lambdas or eval.
A full example could be as such:
class Default(object):
    """Context manager that returns a given default value
    when an exception is caught."""

    def __init__(self, value, *exception_types):
        self.value = value
        self.exception_types = exception_types or BaseException

    def __enter__(self):
        pass

    def __exit__(self, typ, val, tb):
        if typ and issubclass(typ, self.exception_types):
            return True, self.value
lst = [1, 2]

# found is assigned 2
found = lst[1] with Default(0, IndexError)

# not_found is assigned 0
not_found = lst[2] with Default(0, IndexError)
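For comparison, the defaulting behaviour can be approximated today only by wrapping the context-ed expression in a lambda, as noted above. A minimal sketch of that workaround (the helper name with_default is purely illustrative, not part of the proposal):

```python
def with_default(thunk, default, *exception_types):
    # Encapsulate the would-be context-ed expression in a callable
    # and catch the exception manually; defaults to BaseException
    # when no exception types are given, mirroring the class above.
    try:
        return thunk()
    except exception_types or (BaseException,):
        return default

lst = [1, 2]
found = with_default(lambda: lst[1], 0, IndexError)      # 2
not_found = with_default(lambda: lst[2], 0, IndexError)  # 0
```

The lambda is exactly the verbosity the inline "with" would remove.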
The different interpretation of __exit__'s return value is probably something that needs to be discussed. In this form, it is potentially backwards-incompatible, so a preferable alternative would be a different special method, perhaps:
def __return__(self, typ, val, tb, ret):
    if typ is None:
        return False, ret * 3
    elif issubclass(typ, IndexError):
        return True, 10
The alternatively-named special method would take priority over __exit__ and take over its augmented role:

- If no exception has occurred, typ, val and tb are given None, as with the regular __exit__ incarnation, but ret, or whatever the fourth positional parameter is named, is supplied with what "expr" evaluated to in "expr with cmgr".

- When an exception has occurred or propagated, typ, val and tb are set to the appropriate values (since exceptions now keep their traceback as an attribute, maybe only supply the exception object?), and ret is given None.

- If the return value of this special method is None, the exception or return value is propagated as-is. If it is a sequence, the first element is used like the return value of __exit__ would be, and the second element is used to (re-)place the return value of the whole with expression.

When multiple context managers are being chained, the return value/exception is forwarded much like it is with Twisted's Deferreds.
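The proposed semantics can be simulated today by passing the context-ed expression as a thunk; __return__ here is the hypothetical method from the proposal, not an existing protocol, and with_expr is an illustrative name:

```python
def with_expr(thunk, cmgr):
    # Rough simulation of the proposed "expr with cmgr" evaluation,
    # calling the hypothetical __return__(typ, val, tb, ret).
    cmgr.__enter__()
    try:
        ret = thunk()
    except BaseException as exc:
        outcome = cmgr.__return__(type(exc), exc, exc.__traceback__, None)
        if outcome is None:
            raise                 # propagate the exception as-is
        suppress, value = outcome
        if not suppress:
            raise
        return value              # exception suppressed, value substituted
    else:
        outcome = cmgr.__return__(None, None, None, ret)
        if outcome is None:
            return ret            # return value propagated as-is
        _, value = outcome
        return value              # return value (re-)placed

class Demo(object):
    # Implements the __return__ example from above.
    def __enter__(self):
        pass
    def __return__(self, typ, val, tb, ret):
        if typ is None:
            return False, ret * 3
        elif issubclass(typ, IndexError):
            return True, 10
```

With this, with_expr(lambda: [1, 2][1], Demo()) triples the looked-up value, while an IndexError is replaced by 10 and any other exception propagates.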
In the use case of providing a default value, if the default value is the product of an expensive operation, an alternative context manager can be designed to compute the value only when needed, for instance:
fa = factorials[n] with SetDefault(factorials, n, lambda: math.factorial(n))
Other examples using existing context managers:
contents = f.read() with open('file') as f
with open('file') as f:
    contents = f.read()
d = Decimal(1) / Decimal(7) with Context(prec=5)
with Context(prec=5):
    d = Decimal(1) / Decimal(7)
I think that's all I can think of so far. Sorry if this is a little too detailed to start with; feel free to rethink any part of what I posted here, no offense taken.
-yk

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/
On Feb 20, 2014, at 10:54, אלעזר <elazarg@gmail.com> wrote:
What I'd personally like to see is a combined for-with, something like
x = [x.split() for x with in open(thisfile)]
and
for x with in open(somefile): print(x)
This has two prepositions in a row, attempting to share the same object. That isn't readable in English, or any other human language. Trying to parse it makes my brain hurt.

Also, it obviously only works for objects which are iterable, and are also context managers whose context is self--which basically means file-like objects only. This is why we have the "as" clause in with statements (and in the proposed with expression). If you reorganize it to make more sense:

x = [x.split() for x in f with open(thisfile) as f]

... then it's exactly the same thing I proposed last year, which was shot down for good reasons.
A with expression might help too,
--- Elazar
On 20 February 2014 20:54, Andrew Barnert <abarnert@yahoo.com> wrote:
On Feb 20, 2014, at 10:50, Yann Kaiser <kaiser.yann@gmail.com> wrote:
In the use case of providing a default value, if the default value is the product of an expensive operation, an alternative context manager can be designed to compute the value only when needed, for instance:
fa = factorials[n] with SetDefault(factorials, n, lambda: math.factorial(n))
This one is less useful. That SetDefault has to repeat the factorials and n references, and it makes that fact explicit to the reader, and writes the same thing in two different ways.
But, more importantly, a simple "setdefault" function that did the same thing as your SetDefault class without the context management would be easier to write, and both nicer and easier to use:
fa = setdefault(factorials, n, lambda: math.factorial(n))
Agreed. I was asked in the other thread for a way to defer evaluation of the default value, which can simply be done by swapping the context manager. It boils down to:

x = func() with ProduceDefault(lambda: expensive_func(), SomeException)

I still can't come up with an example that doesn't ask for simply caching the expensive calculation, or where the cheap version's domain can't be checked beforehand. It may be possible that the "ProduceDefault" use case simply does not exist, but ultimately it is something that can be user-implemented at the Python programmer's whim. Maybe someone will come up with a tangible use case for that particular aspect, but there are other aspects to an inline "with", as we've both noted.

-yk
The first one could be accomplished like:

x = [line.split() for line in f] with open(thisfile) as f

It would keep f opened while the listcomp is being evaluated. This makes me think however of a likely accident:

x = (line.split() for line in f) with open('name') as f
next(x)  # ValueError: I/O operation on closed file.

This does mirror this mistake though:

with open('name') as f:
    return (line.split() for line in f)

When what was meant was:

with open('name') as f:
    for line in f:
        yield line.split()

(Which is, unfortunately, a __del__ in disguise, which is frowned upon.)

This does raise the question of whether we need a "for..in..with" construct for purposes other than setting a new "highest keyword density" record :-)
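The "accident" can already be reproduced with today's statement form, since the genexp is only evaluated after the file has been closed. A minimal, runnable demonstration (file contents are illustrative):

```python
import os
import tempfile

def make_gen(path):
    # The file closes when the with block exits, but the genexp
    # has not read anything yet at that point.
    with open(path) as f:
        return (line.split() for line in f)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write('a b\nc d\n')

g = make_gen(path)
try:
    next(g)
    failed = False
except ValueError:
    failed = True  # I/O operation on closed file
os.remove(path)
```

The inline "with" would make this mistake one expression shorter to write, which is exactly the concern raised here.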
20.02.2014 22:33, Yann Kaiser wrote:
(Which is, unfortunately, a __del__ in disguise, which is frowned upon.)
Why 'unfortunately'?

Cheers.
*j
On Feb 20, 2014, at 13:46, Jan Kaliszewski <zuo@chopin.edu.pl> wrote:
20.02.2014 22:33, Yann Kaiser wrote:
(Which is, unfortunately, a __del__ in disguise, which is frowned upon.)
Why 'unfortunately'?
Relying on __del__ to clean up your files (and other expensive resources) is a bad idea if you only use CPython, and a terrible idea if your code might ever be run on other implementations: it means you don't get deterministic cleanup. And writing something using a with statement--which was explicitly designed to solve that problem--that still relies on __del__ would be dangerously misleading. However, as I pointed out in my other message, this is not really a __del__ in disguise, because if you use this in cases where the returned iterator is fully consumed in normal circumstances (e.g., a chain of iterator transformations that feeds into a final listcomp or writerows or executemany or the like), you _do_ get deterministic cleanup.
Cheers. *j
On 20.02.2014 at 23:07, Andrew Barnert wrote:
On Feb 20, 2014, at 13:46, Jan Kaliszewski <zuo@chopin.edu.pl> wrote:
Why 'unfortunately'?
Relying on __del__ to clean up your files (and other expensive resources) is a bad idea if you only use CPython, and a terrible idea if your code might ever be run on other implementations: it means you don't get deterministic cleanup.
Yup, I see the point. At least, starting with CPython 3.4, the risk of creating an uncollectable reference cycle no longer exists (PEP 442). (That, of course, does not invalidate the concern about deterministic finalization of resources.)
And writing something using a with statement--which was explicitly designed to solve that problem--that still relies on __del__ would be dangerously misleading.
Unless the programmer knows what (s)he does. :)
However, as I pointed out in my other message, this is not really a __del__ in disguise, because if you use this in cases where the returned iterator is fully consumed in normal circumstances (e.g., a chain of iterator transformations that feeds into a final listcomp or writerows or executemany or the like), you _do_ get deterministic cleanup.
Indeed. Regards. *j
On Feb 20, 2014, at 15:10, Jan Kaliszewski <zuo@chopin.edu.pl> wrote:
On 20.02.2014 at 23:07, Andrew Barnert wrote:
On Feb 20, 2014, at 13:46, Jan Kaliszewski <zuo@chopin.edu.pl> wrote:
Why 'unfortunately'?
Relying on __del__ to clean up your files (and other expensive resources) is a bad idea if you only use CPython, and a terrible idea if your code might ever be run on other implementations: it means you don't get deterministic cleanup.
Yup, I see the point.
At least, starting with CPython 3.4, the risk of creating uncollectable reference cycle no longer exists (PEP 442).
(That, of course, does not invalidate the concern about deterministic finalization of resources).
And writing something using a with statement--which was explicitly designed to solve that problem--that still relies on __del__ would be dangerously misleading.
Unless the programmer knows what (s)he does. :)
It doesn't matter what the programmer knows, it matters what the reader knows. And the reader may be the same programmer 6 months after he forgot what he was doing, or a completely different person. So misleading code is always dangerous. Sure, sometimes the best answer is to write it anyway and comment it to explain away the confusion, but that's never an ideal situation.
However, as I pointed out in my other message, this is not really a __del__ in disguise, because if you use this in cases where the returned iterator is fully consumed in normal circumstances (e.g., a chain of iterator transformations that feeds into a final listcomp or writerows or executemany or the like), you _do_ get deterministic cleanup.
Indeed.
Regards. *j
On Feb 20, 2014, at 13:33, Yann Kaiser <kaiser.yann@gmail.com> wrote:
The first one could be accomplished like:
x = [line.split() for line in f] with open(thisfile) as f
Yes, if we have a fully general with expression, we don't need a with clause in comprehensions.
It would keep f opened while the listcomp is being evaluated. This makes me think however of a likely accident:
x = (line.split() for line in f) with open('name') as f
next(x)  # ValueError: I/O operation on closed file.
This does mirror this mistake though:
with open('name') as f:
    return (line.split() for line in f)
When what was meant was:
with open('name') as f:
    for line in f:
        yield line.split()
Or just replace the return with yield from. This is pretty much the paradigm case of when you have to be a generator rather than just return an iterator, which is one of the two major reasons for yield from.
(Which is, unfortunately, a __del__ in disguise, which is frowned upon.)
No it's not. The file will be closed as soon as the iterator is fully consumed, even if it's not deleted until much later. Of course it does mean that if you abandon the iterator in the middle you're leaking the generator and therefore the file until deletion. But there are many use cases where that doesn't come up.
This does raise the question of if we need a "for..in..with" construct for purposes other than setting a new "highest keyword density" record :-)
As was pointed out to me when I suggested such a construct last year, the only cases where this helps are the two cases where you don't need it. Either it's a statement, so you just use statement with, or you can simulate it perfectly with a two-line function (which I believe is available as more_itertools.with_iter if you don't want to write it yourself). These are the two cases you just covered in this email. Anyone who wants to argue for this idea should read the previous thread first. (I would provide a link, but searching from my phone is painful.)
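The two-line function mentioned here can be sketched as follows (essentially what more_itertools.with_iter provides; the in-memory StringIO usage is illustrative):

```python
import io

def with_iter(context_manager):
    # Keep the context entered while the iterable is consumed,
    # and leave it (closing a file, for instance) once the
    # iterator is exhausted.
    with context_manager as iterable:
        yield from iterable

# usage sketch with an in-memory "file":
buf = io.StringIO('a b\nc d\n')
lines = [line.split() for line in with_iter(buf)]
```

Fully consuming the iterator here closes buf deterministically, which is the case Andrew says the helper covers.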
On Fri, Feb 21, 2014 at 9:01 AM, Andrew Barnert <abarnert@yahoo.com> wrote:
No it's not. The file will be closed as soon as the iterator is fully consumed, even if it's not deleted until much later.
Of course it does mean that if you abandon the iterator in the middle you're leaking the generator and therefore the file until deletion. But there are many use cases where that doesn't come up.
ISTM that could be dealt with by explicitly ending the generator.

with open('name') as f:
    for line in f:
        if (yield line.split()):
            break

Then you just send it a nonzero value and there you are, out of your difficulty at once!

ChrisA
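Chris's pattern, made runnable as a sketch (the function name, file contents and path handling are illustrative):

```python
import os
import tempfile

def read_lines(path):
    # The consumer can end the generator early by send()ing a
    # truthy value; the with statement then closes the file
    # deterministically instead of relying on __del__.
    with open(path) as f:
        for line in f:
            if (yield line.rstrip('\n')):
                break

# usage sketch:
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as f:
    f.write('one\ntwo\nthree\n')

g = read_lines(path)
first = next(g)      # 'one'
try:
    g.send(True)     # ask the generator to stop; the file closes here
except StopIteration:
    pass
os.remove(path)
```

The send(True) makes the break execute, so the generator returns and the with block runs its exit immediately.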
On Feb 20, 2014, at 20:47, Chris Angelico <rosuav@gmail.com> wrote:
On Fri, Feb 21, 2014 at 9:01 AM, Andrew Barnert <abarnert@yahoo.com> wrote:
No it's not. The file will be closed as soon as the iterator is fully consumed, even if it's not deleted until much later.
Of course it does mean that if you abandon the iterator in the middle you're leaking the generator and therefore the file until deletion. But there are many use cases where that doesn't come up.
ISTM that could be dealt with by explicitly ending the generator.
with open('name') as f:
    for line in f:
        if (yield line.split()):
            break
Then you just send it a nonzero value and there you are, out of your difficulty at once!
Good point! But there are still plenty of cases where you're feeding an iterator to some function that may just abandon it mid-way through. In that case, you cannot use the with-iter trick. Only when you know the consumer will consume the whole thing--or, as you point out, when you can make the consumer cooperate with you (which is easy if you're writing the consumer code, not so much otherwise)--is it helpful. I have a lot of scripts that use this; it's just important to know the limitations on it.
On Feb 20, 2014, at 10:50, Yann Kaiser <kaiser.yann@gmail.com> wrote:
As an alternative to the recently-proposed "except expression", I suggested this in its thread. I was recommended to post this separately, as, while it is related, it is different enough from the original idea.
The general idea is to extend the context manager protocol so that it can produce a return value in alternative to simply letting or denying an exception from propagating, and introduce an inline form of "with".
The motivation behind is, in all honesty, that I find the suggested except-expressions from PEP 463 to be too verbose to inline, and that context managers have a much broader reach. The motivation from the aforementioned PEP(inline evaluation to a "default" expression upon catching an exception) also applies here.
It could look a bit like this:
contexted_expr with context_manager as c
Once more, the rationale from PEP 463 also applies here, in that we can shift "easy defaulting" from being a potential concern for the callee to being always available to the caller through new syntax. Likewise, currently in-lining the use of a context manager can be done, but only through manually encapsulating what would be the context-ed code through lambdas or eval.
A full example could be as such:
class Default(object):
    """Context manager that returns a given default value
    when an exception is caught."""

    def __init__(self, value, *exception_types):
        self.value = value
        self.exception_types = exception_types or BaseException

    def __enter__(self):
        pass

    def __exit__(self, typ, val, tb):
        if typ and issubclass(typ, self.exception_types):
            return True, self.value

lst = [1, 2]

# found is assigned 2
found = lst[1] with Default(0, IndexError)

# not_found is assigned 0
not_found = lst[2] with Default(0, IndexError)
This case is a nice example.
In the use case of providing a default value, if the default value is the product of an expensive operation, an alternative context manager can be designed to compute the value only when needed, for instance:
fa = factorials[n] with SetDefault(factorials, n, lambda: math.factorial(n))
This one is less useful. That SetDefault has to repeat the factorials and n references, and it makes that fact explicit to the reader, and writes the same thing in two different ways.

But, more importantly, a simple "setdefault" function that did the same thing as your SetDefault class without the context management would be easier to write, and both nicer and easier to use:

fa = setdefault(factorials, n, lambda: math.factorial(n))
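The simple function Andrew describes can be sketched like this (the name and lazy-factory signature follow the example above; this is a standalone helper, not the dict.setdefault builtin):

```python
import math

def setdefault(d, key, default_factory):
    # Like dict.setdefault, but the default is computed lazily,
    # only when the key is actually missing.
    try:
        return d[key]
    except KeyError:
        value = d[key] = default_factory()
        return value

factorials = {}
fa = setdefault(factorials, 5, lambda: math.factorial(5))  # computes 120
fa = setdefault(factorials, 5, lambda: math.factorial(5))  # cached lookup
```

Because the factory is a callable, the expensive computation is skipped entirely on a cache hit, which is the whole point of the lazy variant.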
Other examples using existing context managers:
contents = f.read() with open('file') as f
This probably buys you more in a context where a statement doesn't work just as well, like a function parameter, or a lambda callback:

self.read = Button('Read', command=lambda: dostuff(f) with open(path) as f)

But either way, it's something I've wanted before.
On 20 February 2014 18:50, Yann Kaiser <kaiser.yann@gmail.com> wrote:
...
some uses I've thought of:

# current:
try:
    with open('foo.txt', 'r') as f:
        thing = f.read()
except IOError:
    thing = 'default'

# except-expression:
with open('foo.txt', 'r') as f:
    thing = f.read() except IOError: 'default'

# with-expression:
thing = f.read() with open('foo.txt', 'r') as f, default('default', IOError)

#########

# current:
try:
    thing = operation_may_fail()
except:
    thing = 42

# except-expression:
thing = operation_may_fail() except Exception: 42

# with-expression:
thing = operation_may_fail() with computed_default(lambda: 42)

#########

# current:
# assuming one of these functions takes a very long time, and the other
# blocks a thread you don't want to be blocking for long
with some_lock:
    x = do_unsafe_thing()
with another_lock:
    y = other_thing()
frobnicate(x, y)

# except-expression:
# (no change)

# with-expression:
frobnicate((do_unsafe_thing() with some_lock),
           (other_thing() with another_lock))

#########

# current:
try:
    os.unlink('/sbin/init')
except OSError:
    pass
# -- or --
with suppress(OSError):
    os.unlink('/sbin/init')

# except-expression:
os.unlink('/sbin/init') except OSError: None
# (PEP 463 says you should use a with statement)

# with-expression:
os.unlink('/sbin/init') with suppress(OSError)
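For the record, the statement form of the last example works today: contextlib.suppress was added in Python 3.4. A minimal sketch (the path is illustrative):

```python
import os
from contextlib import suppress

# Only the inline "with-expression" spelling would be new syntax;
# the statement form below is valid Python 3.4+.
with suppress(FileNotFoundError):
    os.unlink('/tmp/hypothetical-missing-file')  # path is illustrative
```

The computed_default and default context managers above, by contrast, only make sense under the proposed return-value protocol.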
participants (7)
- Andrew Barnert
- Chris Angelico
- Ed Kellett
- Ethan Furman
- Jan Kaliszewski
- Yann Kaiser
- אלעזר