Re: [Python-ideas] Operator as first class citizens -- like in scala -- or yet another new operator?
On Mon, Jun 3, 2019 at 8:01 PM Andrew Barnert <abarnert@yahoo.com> wrote:
I think it was a mistake to even mention assignment here (much less arbitrary operators). Most of the resistance you’re facing is because of that, and I think you’re missing a fundamental reason behind that resistance.
Overloading variable assignment makes no sense in Python. In a language with lvalue variable semantics like Scala or C++, a variable is a thing with type, address, and identity, and values live in variables. So assignment affects a variable, and variables have types that can overload how that works. In a language with name-binding variable semantics like Python, it’s the value that has type, address, and identity, and it lives somewhere unspecified on the heap; a variable is nothing more than a name that can be bound to a value in a namespace. So assignment doesn’t affect variables, and, even if it did, they don’t have types that could overload how that works. Overloading assignment isn’t impossible just because, it’s impossible because it makes no sense.
But the fact that in Scala, and Verilog, you send values to signals with an assignment-like operator on an lvalue for the signal doesn’t mean it actually is assignment. What you’re doing is an operation on the signal object itself, not on the signal variable. So neither = nor := would have been a good fit in the first place, and the fact that the variable is just a name rather than an lvalue isn’t a limitation at all, and there’s really no good reason not to have a “send a value to a signal” operator other than the (already very high) bar to new operators.
Acknowledged. I do love the dynamic nature of Python, and as many of you may have experienced, you end up developing a type system sooner or later; newer Python with type hints makes that even easier to do. That being said, I fully agree that we should not mess with the = or := assignment operators; that was only taken as an example. And I fully agree with you that I probably should not even have used the term "assignment operator". What I need is an operator that does not collide with any of the existing number/matrix operators.
If you just explained the operation without reference to assignment, then the questions would be about why << isn’t a good enough fit, how widely the new operator would be used, whether there might be uses in other domains (Erlang-style channels?), etc. At that point, PEP 465 is a much better parallel.
Because << is really one of the operators I need for signals ... it even translates very straightforwardly into an actual hardware implementation. I looked at every single operator I could have reused ... the most promising one being @=, but then I realized I really want my signal matrices to keep that operator for the sake of readability; it is far more beautiful to read.
As a side note, if you haven’t already looked at other expression tree libraries like SymPy and SQLAnywhere and their parallels in languages like Scala and C++, it’s worth doing so. For many things (like most desirable uses of SymPy) the lack of variable assignment is no problem; for a few things (like implicit futures, or replacing namespaces with dataflow), it’s an insurmountable problem. I think your problem is more like the former.
I do have experience with ORMs (Object-Relational Mappers) like Django's, and I just looked at SymPy; it is a great project. And indeed I agree with you that this problem is easily worked around: I can instead use signal.next = something, or signal.assign(something), which is doable and many people do it, and there are many different ways of doing so. But then the choice of "signal.next/assign" becomes arbitrary ... e.g. you could equally use signal._next, signal._assign, and so on and so forth. For me these are all implementation details, and it is killing me that there is no way to hide them. Every other major HDL in the world uses an operator for this (VHDL/Verilog use <=, Chisel uses :=). And this is not actually forcing an HDL way of doing things on Python; it represents a common category of problems in Python: there is no operator support to mean "drop a value on me" (where "me" is already defined somewhere else and has a type and identity). In HDL implementation alone I can already see two significant use cases:

* Signal assignment: signal <== signal << 2 (vs. signal.next = signal << 2 or signal.assign(signal << 2)). It also allows chaining like signal_a <== signal_b <== signal_c (vs. signal_a.next = signal_b; signal_b.next = signal_c).

* Module integration/connection and pipeline construction: module_a <== module_b <== module_c (vs. module_a.connect(module_b); module_b.connect(module_c)).

And yet again let me acknowledge: I understand all of these problems can be solved with a function call instead, but it is not as pretty as an operator. That is the whole point of PEP 465: make it look good, because readability matters. So my question is: does this look good to the Python community? (Not caring much about hardware design is probably a separate matter.) I think it looks much better, and a well-defined concept can be established around <== in a particular domain/project.
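To make the comparison concrete, here is a minimal sketch of the property-based style described above; the Signal class, its write-only next setter, and the tick() commit step are hypothetical names for illustration, not taken from any released library:

```python
class Signal:
    """Hypothetical HDL-style signal: holds a current value plus a
    pending value that takes effect on the next simulation tick."""

    def __init__(self, value=0):
        self._value = value
        self._next = None

    @property
    def next(self):
        raise AttributeError("signal.next is write-only")

    @next.setter
    def next(self, value):
        self._next = value        # schedule; don't overwrite immediately

    def tick(self):
        """Commit the scheduled value (one simulation delta cycle)."""
        if self._next is not None:
            self._value = self._next
            self._next = None

    def __lshift__(self, n):      # signals behave like numbers: << stays a shift
        return self._value << n


s = Signal(1)
s.next = s << 2    # schedule: s becomes 4 on the next tick
s.tick()
```

This is exactly the "signal.next = ..." spelling being compared against the proposed "signal <== ..." operator.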
Also, I can imagine a reason why << isn’t acceptable: it’s probably not that uncommon to need actual bitshift operators. But if you could build something that uses << anyway, and works perfectly in today’s Python for examples that don’t need shift, and then show that off—and show an example where not having bitshift is a problem—you’d be in an even better position. (See the PEP 465 section on numerical libraries that used * for matmul. They clearly work, and are useful for many things, but not having * for elementwise multiplication significantly limits a wide range of useful programs, which is why @ was needed for numpy.)
Yep, that is my plan too. But instead of using <<, I have already modified CPython to support <== and ==> operators, and I will release a fully open-sourced, powered-by-Python HDL simulator with all the major features you'd expect in hardware design (including dumping waveforms to VCD files, automatically building the hierarchy of module instance and signal instance names, etc.). The code is already working, but I want to make sure a few corner cases are well covered, and it will be released along with a PEP. I understand that after the assignment-expression (:=) PEP 572, people are unhappy with anything even slightly related to assignment overloading. I actually don't understand why people objected to PEP 572 so strongly; its practical use is clear, and it makes code shorter and easier to read. Sometimes it seems people's attachment to one idea makes them deny the very existence of problems, problems that have been solved well in other programming languages and are well understood by the majority of programmers.
On Tue, Jun 4, 2019 at 8:07 PM Yanghao Hua <yanghao.py@gmail.com> wrote:
I understand that after the assignment-expression (:=) PEP 572, people are unhappy with anything even slightly related to assignment overloading. I actually don't understand why people objected to PEP 572 so strongly; its practical use is clear, and it makes code shorter and easier to read. Sometimes it seems people's attachment to one idea makes them deny the very existence of problems, problems that have been solved well in other programming languages and are well understood by the majority of programmers.
Assignment overloading and PEP 572 are completely orthogonal. The := operator makes assignment available in an expression context, rather than only as a statement, but this is nothing to do with allowing the target operator to redefine assignment. ChrisA
I'd like to get rid of all the signal and HDL stuff (whatever that means) in this thread. I think what the original poster really wants is an "assign in place" operator: basically, something like += or *= but without the arithmetic.

When you think of it this way, it's not an unreasonable request. There would be at least one major use of this operator within CPython, for lists. With this proposal, the awkward syntax (there are 219 instances of this in the CPython sources)

L[:] = new_list

would become

L <== new_list

The implementation would be completely analogous to the existing in-place arithmetic operators. For example, A <== B would become equivalent to A = type(A).__iassign__(A, B).
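The proposed semantics can already be emulated in today's Python by borrowing an existing augmented operator; a sketch using <<= (i.e. __ilshift__, chosen arbitrarily as a stand-in for the hypothetical __iassign__):

```python
class InPlaceList(list):
    """Hypothetical stand-in: repurpose <<= to mean 'assign in place',
    the role the proposal gives to <== / __iassign__."""

    def __ilshift__(self, other):
        self[:] = other    # replace the contents, keep the identity
        return self        # augmented ops rebind the name to the result


L = InPlaceList([1, 2, 3])
alias = L
L <<= [9, 8]     # reads like the proposed L <== [9, 8]
# alias sees the change, because the object itself was mutated
```

Since __ilshift__ returns self, the name L stays bound to the same object, which is exactly the "in place" behavior being proposed.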
On Tue, Jun 04, 2019 at 12:47:30PM +0200, Jeroen Demeyer wrote:
When you think of it this way, it's not an unreasonable request. There would be at least one major use of this operator within CPython, for lists. With this proposal, the awkward syntax (there are 219 instances of this in the CPython sources)
L[:] = new_list
What is so awkward about slice assignment? It is an obvious generalisation of item assignment to slices of more than one index, with the start and end positions being optional. If you can use ``L[index] = value`` then slice assignment just follows from that.
would become
L <== new_list
Creating new syntax to make it easy to do things which are currently impossible or difficult is worth considering; creating new syntax just because some people don't like the colour of the bike-shed just creates language churn for its own sake. Introducing <== to give alternate syntax to slice assignment is, I think, a non-starter.
The implementation would be completely analogous to the existing in-place arithmetic operators. For example A <== B would become equivalent to A = type(A).__iassign__(A, B).
As far as I can tell, there is no difference between your proposal and the OP's proposal except you have changed the name of the dunder from __arrow__ to __iassign__.

__iassign__ is inappropriate because there is no __assign__ dunder:

x += y    __iadd__    is related to   x + y        __add__
x -= y    __isub__    is related to   x - y        __sub__
x *= y    __imul__    is related to   x * y        __mul__
# etc
x <== y   __iassign__ is related to   x <what?> y  __assign__ ?

and it is not a form of *augmented assignment*, it's just a method call.

-- Steven
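The pattern described above can be checked directly: for lists, x += y calls __iadd__, which mutates in place and returns the same object, while immutable types fall back to __add__ and rebind the name:

```python
x = [1, 2]
alias = x
x += [3]                  # roughly: x = type(x).__iadd__(x, [3])
assert x is alias         # list.__iadd__ returns self: same object
assert alias == [1, 2, 3]

# Immutable types have no __iadd__, so += falls back to __add__:
n = 5
m = n
n += 1
assert m == 5 and n == 6  # a new int was bound to n; m is untouched
```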
On 2019-06-04 13:29, Steven D'Aprano wrote:
As far as I can tell, there is no difference between your proposal and the OP's proposal except you have changed the name of the dunder from __arrow__ to __iassign__.
I never claimed that there was a difference. I just tried to clarify what the original poster asked and put it in a wider context, because the original post was way too much focused on hardware stuff.
On Tue, 4 Jun 2019 at 12:47, Jeroen Demeyer <J.Demeyer@ugent.be> wrote:
On 2019-06-04 13:29, Steven D'Aprano wrote:
As far as I can tell, there is no difference between your proposal and the OP's proposal except you have changed the name of the dunder from __arrow__ to __iassign__.
I never claimed that there was a difference. I just tried to clarify what the original poster asked and put it in a wider context, because the original post was way too much focused on hardware stuff.
... and I'll confirm that for me at least, rephrasing the request this way did help me understand the proposal better. (I'm neutral on whether it's a good idea, though) Paul
I agree this needs to be reframed, but I suggest that "assignment in place" isn't the most useful mental model. Instead, something like "generically apply a value to another" (dunder apply) or "update an object with another" (dunder update) might have a prayer of making sense. Perhaps there are other situations where having a *generic* operator meant to communicate the concept of sending an object into another makes sense. A few come to mind:

my_dict.update
my_gen.send
my_list.append
my_list.extend
my_stream.write

It's worth considering whether there would be benefit in providing a generic operator with nice syntax, intended for general use and meaning whatever the user wants it to mean, and whether some of the existing functionality above might benefit from prettier syntax as well.
On 2019-06-04 14:34, Ricky Teachey wrote:
"update an object with another" (dunder update)
Yes, that's essentially what I meant. To me, "assign an object in place" and "update an object with another" mean the same thing.
A few come to mind:
my_dict.update
This is PEP 584, where += is used
my_gen.send
Sure, this makes sense to me!
my_list.append
I disagree because this keeps the contents of the old list.
my_list.extend
This should just be a generalization of the += operator.
my_stream.write
I'm not convinced. If you have an operator for writing, you expect an operator for reading too. But then, the analogy with += breaks down for me.
OK, agreed on .update and .extend. Two operators (+= and <==) doing the same thing is dumb. And for .append I agree: "this thing is the same, just add this thing" is a little at odds with "update this thing when I send this other thing into it".
my_gen.send
Sure, this makes sense to me!
I have to admit, I *kind of* love this one.
If you have an operator for writing, you expect an
operator for reading too. But then, the analogy with += breaks down for me.
For reading, can't you just switch the order?

fstream <== x  # write into existing stream
x <== fstream  # read into existing object (i.e., this is NOT an assignment type of action; this might be confusing)

Worth pointing out that fstream is a type of iterator, so might writing just be thought of as a specific case of sending into a generator? Similarly, reading is yielding from the generator. Of course you'd have a battle to the death over whether the read should be Unicode-based (reading lines) or reading a byte at a time.

On the original topic: I was wondering the other day whether the HDL thing could most conveniently be implemented using a generator-like class that allows things to be sent into it. But the .send syntax doesn't help with the problem the OP was trying to solve, namely the ugly syntax. I'm beginning to wonder if associating an operator with .send makes a lot of sense. I think I would definitely find code more readable with that kind of syntax.
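The operator-paired-with-.send idea can be sketched with an operator Python already has; the SendChannel wrapper and the choice of <<= here are hypothetical, standing in for the proposed <==:

```python
def accumulator():
    """A generator that sums whatever is sent into it."""
    total = 0
    while True:
        value = yield total
        total += value


class SendChannel:
    """Hypothetical wrapper: '<<=' forwards to the generator's .send()."""

    def __init__(self, gen):
        self.gen = gen
        self.last = next(gen)   # prime the generator

    def __ilshift__(self, value):
        self.last = self.gen.send(value)
        return self             # keep the same channel object bound


ch = SendChannel(accumulator())
ch <<= 10    # reads like the proposed ch <== 10
ch <<= 32
```

After the two sends, ch.last holds the running total, so the operator spelling and ch.gen.send(...) are interchangeable.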
On 6/4/2019 6:47 AM, Jeroen Demeyer wrote:
I'd like to get rid of all the signal and HDL stuff (whatever that means) in this thread, so I think what the original poster really wants is an "assign in place" operator. Basically, something like += or *= but without the arithmetic.
I believe that what he wanted, at least initially, was not an in place mutation, which is nonsensical for ints, but a delayed binding. -- Terry Jan Reedy
Jeroen Demeyer writes:
When you think of it this way, it's not an unreasonable request. There would be at least one major use of this operator within CPython, for lists. With this proposal, the awkward syntax (there are 219 instances of this in the CPython sources)
L[:] = new_list
I'd rather not replace it. It's a perfectly Pythonic syntax, although it's a Python-specific idiom. It's an opportunity to see slice assignment in operation, which I would guess is relatively unusual in the general case. Steve
On Tue, Jun 4, 2019 at 12:47 PM Jeroen Demeyer <J.Demeyer@ugent.be> wrote:
I'd like to get rid of all the signal and HDL stuff (whatever that means) in this thread, so I think what the original poster really wants is an "assign in place" operator. Basically, something like += or *= but without the arithmetic.
When you think of it this way, it's not an unreasonable request. There would be at least one major use of this operator within CPython, for lists. With this proposal, the awkward syntax (there are 219 instances of this in the CPython sources)
L[:] = new_list
would become
L <== new_list
The part I like is that with <==, basically all kinds of unnecessary details are hidden from users. For example, L[:] appearing on the right-hand side means a copy (not a reference) of L, but when it appears on the left-hand side, it behaves like an in-place copy. Aren't these two mentally contradicting each other?
Think of it more like indexing a range. Say you have:

L[:] = M[:]

Which is the same as:

L[0:len(L)] = M[0:len(M)]

Which mentally you can think of like:

L[0], L[1], ... L[len(L)-1] = M[0], M[1], ... M[len(M)-1]

Slicing is just indexing that represents more than one element, and if you think about it like that, slice assignment makes much more sense.

-- Ryan
https://refi64.com/

On Jun 5, 2019, 2:56 AM -0500, Yanghao Hua <yanghao.py@gmail.com>, wrote:
On Tue, Jun 4, 2019 at 12:47 PM Jeroen Demeyer <J.Demeyer@ugent.be> wrote:
I'd like to get rid of all the signal and HDL stuff (whatever that means) in this thread, so I think what the original poster really wants is an "assign in place" operator. Basically, something like += or *= but without the arithmetic.
When you think of it this way, it's not an unreasonable request. There would be at least one major use of this operator within CPython, for lists. With this proposal, the awkward syntax (there are 219 instances of this in the CPython sources)
L[:] = new_list
would become
L <== new_list
The part I like is that with <==, basically all kinds of unnecessary details are hidden from users. For example, L[:] appearing on the right-hand side means a copy (not a reference) of L, but when it appears on the left-hand side, it behaves like an in-place copy. Aren't these two mentally contradicting each other?
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-leave@python.org
Code of Conduct: http://python.org/psf/codeofconduct/
On Thu, Jun 6, 2019 at 6:31 AM Ryan Gonzalez <rymg19@gmail.com> wrote:
Think of it more like indexing a range. Say you have:
L[:] = M[:]
Which is the same as:
L[0:len(L)] = M[0:len(M)]
Which mentally you can think of like:
L[0], L[1], ... L[len(L)-1] = M[0], M[1], ... M[len(M)-1]
Slicing is just indexing that represents more than one element, and if you think about it like that slice assignment makes much more sense.
This definitely makes sense in its own right. The problem is, when it moves to the right-hand side, it means exactly the opposite: instead of indexing the full range of L, it now represents a copy of L (X = L[:]). So in L[:] = thing vs. thing = L[:], L[:] shows up exactly the same in both assignments, but means completely different things.
On Thu, Jun 6, 2019 at 5:40 PM Yanghao Hua <yanghao.py@gmail.com> wrote:
On Thu, Jun 6, 2019 at 6:31 AM Ryan Gonzalez <rymg19@gmail.com> wrote:
Think of it more like indexing a range. Say you have:
L[:] = M[:]
Which is the same as:
L[0:len(L)] = M[0:len(M)]
Which mentally you can think of like:
L[0], L[1], ... L[len(L)-1] = M[0], M[1], ... M[len(M)-1]
Slicing is just indexing that represents more than one element, and if you think about it like that slice assignment makes much more sense.
This definitely makes sense in its own right. The problem is, when it moves to the right-hand side, it means exactly the opposite: instead of indexing the full range of L, it now represents a copy of L (X = L[:]).
Come again? It's still indexing the full range of L - that's not "exactly the opposite"!
So in L[:] = thing vs. thing = L[:], L[:] shows up exactly the same in both assignments, but means completely different things.
They have a difference for the built-in list type in that slicing a list returns a new list with references to the same objects, thus "x = x[:]" is going to give you an equivalent but distinct list. That's an important point in some contexts, but it's by no means "completely different". ChrisA
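The distinction Chris draws is easy to verify: a slice on the right makes a new (equal but distinct) list, while a slice target mutates the original in place:

```python
L = [1, 2, 3]
copy = L[:]            # RHS slice: a new list, same element references
assert copy == L and copy is not L

alias = L
L[:] = [7, 8]          # LHS slice: contents replaced, identity kept
assert alias is L and alias == [7, 8]
assert copy == [1, 2, 3]   # the earlier copy is unaffected
```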
On Thu, Jun 6, 2019 at 9:48 AM Chris Angelico <rosuav@gmail.com> wrote:
They have a difference for the built-in list type in that slicing a list returns a new list with references to the same objects, thus "x = x[:]" is going to give you an equivalent but distinct list. That's an important point in some contexts, but it's by no means "completely different".
Alright Chris, I would rephrase it like this: they are different because one of them represents the original list (when on the left-hand side), and the other represents a new list (when on the right-hand side). Whether or not they are "completely different" is a subjective matter and I won't insist on it; for me, they are really different. I think the trick used here is to sacrifice a little bit of low-level consistency: since normally nobody would use list slicing (which always means a copy on the right-hand side) on the left-hand side, it is redefined there to mean in-place modification. Descriptors and left-hand-side slicing are probably the only two cases where an assignment (=) can actually change things in place (you see, this really is an exceptional case for =). And it becomes more confusing if L[:] := thing is used: it is now an expression, which makes you think about the "right-hand side" (e.g. what about z = (L[:] := [7, 8, 9, 10])?). L[:] := thing currently does not work in Python 3.8a3; it throws "SyntaxError: cannot use named assignment with subscript".
I'm not sure if you saw my reply earlier: https://mail.python.org/archives/list/python-ideas@python.org/thread/B7QPHTQ... I proposed some alternative syntax already supported.
On Thu, Jun 6, 2019 at 11:10 AM Angus Hollands <goosey15@gmail.com> wrote:
I'm not sure if you saw my reply earlier: https://mail.python.org/archives/list/python-ideas@python.org/thread/B7QPHTQ...
I proposed some alternative syntax already supported.
Yes, saw it ... not sure signal[...] = thing is better than signal.next = thing though.
Yanghao Hua writes:
For example, L[:] if appeared at the right hand side, means a copy (not a reference) of L, but now when appear on the left hand side, it behaves like an in-place copy. This two isn't it mentally contradicting each other?
No. I suspect you're confused by the specifics. Slice notation is fully general. L[m:m+k] specifies that a list operation will take place on the k elements starting with m. As a value, it makes a new list of references to those elements. As an assignment target, it deletes those elements and splices the sequence on the right-hand side into L starting at m, "pushing the elements from m+k onward to the right". The semantics of lvalues and rvalues are different in all languages I know of, and in this way. Slices are an unusual kind of lvalue, for sure, but once you've got the syntax, the semantics are pretty obvious and definitely useful. You can write:

L[n:n+1] = [x]     instead of L[n] = x         (only crazy people do that)
L[n:n] = [x]       instead of L.insert(n, x)   (pretty crazy, too)
L[n:n+1] = []      instead of del L[n]         (only crazy people do that)
L = []; L[:] = [1, 2]                          (only crazy people do that; NB, this isn't "in-place" 'cause there's no "place" there to be "in"!)
L[n:] = []                                     (useful, truncates to length n)
L1[n:] = L2

The meaning of "L[:] = ..." is just the standard definition of slice as assignment target, with the endpoints defaulting to first and last. Since in your notation the slice doesn't appear, no non-default slice assignment can even be expressed. For this purpose, "<==" is redundant and less powerful than "slice as left-hand side" notation. Any non-trivial left-hand side can accept a right-hand side of arbitrary length; the lengths don't have to match. And of course the right-hand side can be a slice itself.
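The equivalences listed above are all observable on a plain list:

```python
L = [10, 20, 30, 40]

L[1:2] = [99]        # same effect as L[1] = 99
assert L == [10, 99, 30, 40]

L[2:2] = [55]        # same effect as L.insert(2, 55)
assert L == [10, 99, 55, 30, 40]

L[0:1] = []          # same effect as del L[0]
assert L == [99, 55, 30, 40]

L[2:] = []           # truncate to length 2
assert L == [99, 55]

L[1:] = [1, 2, 3]    # lengths don't have to match
assert L == [99, 1, 2, 3]
```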
Stephen J. Turnbull wrote:
L[m:m+k] specifies that a list operation will take place on the k elements starting with m. As a value, it makes a new list of references to those elements.
Even that is specific to lists. There's no requirement that a RHS slice has to create new references to elements. A type can define it so that it returns a mutable view of part of the original object. This is how numpy arrays behave, for example. As syntax, slice notation simply denotes a range of elements, and it does that the same way whether it's on the LHS or RHS. -- Greg
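A stdlib analogue of the view behavior Greg describes (no NumPy needed): slicing a memoryview returns another view over the same buffer, so a right-hand-side slice is not inherently a copy:

```python
# Slicing a memoryview yields a view, not a copy: writes through the
# sliced view mutate the underlying buffer.
buf = bytearray(b"hello")
mv = memoryview(buf)
part = mv[1:3]          # RHS slice: a view into buf
part[0] = ord("a")      # writing through the view mutates buf
assert buf == bytearray(b"hallo")
```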
The problem as I see it with slice assignment is that we want the operator to mean type-defined assignment, not necessarily in-place assignment. It creates confusion for types which have __setitem__.

Caleb Donovick

On Thu, Jun 6, 2019 at 4:59 PM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Stephen J. Turnbull wrote:
L[m:m+k] specifies that a list operation will take place on the k elements starting with m. As a value, it makes a new list of references to those elements.
Even that is specific to lists. There's no requirement that a RHS slice has to create new references to elements. A type can define it so that it returns a mutable view of part of the original object. This is how numpy arrays behave, for example.
As syntax, slice notation simply denotes a range of elements, and it does that the same way whether it's on the LHS or RHS.
-- Greg
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why? That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it." -- Rhodri James *-* Kynesim Ltd
On Tue, Jun 4, 2019 at 2:20 PM Rhodri James <rhodri@kynesim.co.uk> wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why?
That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."
Rhodri, I don't know how I could be more specific; help me out here: in signal << (signal << 2), the first << means "assign" and the second means shift? Do you really think this is readable? Or maybe you have a better idea I am not aware of? Signals are numbers (with arbitrary bit width); all arithmetic ops still hold.
On 04/06/2019 13:36, Yanghao Hua wrote:
On Tue, Jun 4, 2019 at 2:20 PM Rhodri James<rhodri@kynesim.co.uk> wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.

Why?

That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."

Rhodri, I don't know how I could be more specific; help me out here:

signal << (signal << 2) --> the first << means "assign" and the second means shift? Do you really think this is readable? Or maybe you have a better idea I am not aware of?
I'm asking why you want the first "assignment" << *at all*. What is it about the operation you are doing (which, incidentally, I still don't get) that makes it *so much* better at expressing what you're doing than (say) a method call? What's wrong with

signal.suitable_descriptive_verb(signal << 2)

The bar for adding a new operator is intentionally high, and I haven't seen enough justification to satisfy me yet. Everyone will need to be able to read this and get your intention, don't forget.

-- Rhodri James *-* Kynesim Ltd
On Tue, Jun 04, 2019 at 01:20:14PM +0100, Rhodri James wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why?
That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."
(1) Because they're already being used. That rules out operators that are supported by numbers at least, since the primitive values in his code are numbers. So once you remove the operators supported by numbers:

+ - * / // % ** ^ & | << >> ~ < > <= >= == !=

(have I missed any?) I think that only leaves @ remaining.

(2) Because things which act different should look different, and things which act similar should look similar. Yanghao Hua wants an operator which suggests a kind of assignment. Out of the remaining set of operators, which one do you think suggests assignment?

-- Steven
On 6/4/2019 8:38 AM, Steven D'Aprano wrote:
On Tue, Jun 04, 2019 at 01:20:14PM +0100, Rhodri James wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why?
That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."
(1) Because they're already being used. That rules out operators that are supported by numbers at least, since the primitive values in his code are numbers.
So once you remove the operators supported by numbers:
+ - * / // % ** ^ & | << >> ~ < > <= >= == !=
(have I missed any?) I think that only leaves @ remaining.
One problem is that this has no end. Say <== is added (for "signals"); then someone will say "I want something just like a signal, but with this one additional operator". What then?

As sad as it may be to many people (and I'm one of them), Python just isn't the right fit for designing DSLs. And if we want to improve it in that area, SQL is the first place we should start looking. SQLAlchemy has any number of functions or weird uses of operators (.in_(), &, etc.) that could be improved. SQLAlchemy chose to accept that Python is what it is, and did the best they could, even though some of the constructs are not ideal. And I think it's been pretty successful.

Eric
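The workaround Eric alludes to is to overload the operators Python does have so that they build an expression instead of computing a value; a toy sketch in that spirit (not SQLAlchemy's actual API):

```python
class Col:
    """Toy SQL-ish column: '==' builds a predicate string instead of a
    bool (in the spirit of SQLAlchemy), and 'in_' must be a method
    because 'in' cannot be made to return a non-boolean expression."""

    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return f"{self.name} = {other!r}"

    def in_(self, values):
        vals = ", ".join(repr(v) for v in values)
        return f"{self.name} IN ({vals})"


age = Col("age")
assert (age == 18) == "age = 18"
assert age.in_([1, 2]) == "age IN (1, 2)"
```

This is exactly the kind of compromise that works but stays slightly unidiomatic, which is the point being made about DSLs in Python.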
On Tue, Jun 4, 2019 at 2:50 PM Eric V. Smith <eric@trueblade.com> wrote:
On 6/4/2019 8:38 AM, Steven D'Aprano wrote:
On Tue, Jun 04, 2019 at 01:20:14PM +0100, Rhodri James wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why?
That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."
(1) Because they're already being used. That rules out operators that are supported by numbers at least, since the primitive values in his code are numbers.
So once you remove the operators supported by numbers:
+ - * / // % ** ^ & | << >> ~ < > <= >= == !=
(have I missed any?) I think that only leaves @ remaining.
One problem is that this has no end. Say <== is added (for "signals"), then someone will say "I want something just like a signal, but with this one additional operator". What then?
As sad as it may be to many people (and I'm one of them), Python just isn't the right fit for designing DSLs. And if we want to improve it in that area, SQL is the first place we should start looking. SQLAlchemy has any number of functions or weird uses of operators (.in_(), &, etc.) that could be improved. SQLAlchemy chose to accept that Python is what it is, and did the best they could, even though some of the constructs are not ideal. And I think it's been pretty successful.
Even though it is an objection, I do like this reasoning a lot more, and that's exactly why the title I chose for this thread is "... yet another operator?". And yes, SQLAlchemy shows similar scenarios where Python is not yet good. I know the potentially huge challenge of actually changing something fundamental, but sometimes hope is the most important thing: is Python envisioned in a way that allows users to define their own operators? I think this will be a huge topic in the coming future. As we move from a world where people just do general-purpose programming/computing to a world where everyone can design their own system-on-chip, there will simply be more and more domain-specific languages. This is where I see Scala picking up more and more. And as a Python fan, I really do hope Python could support this (just as it picked up async/await from C#) and make it truly the first choice for designing a domain-specific language.
Yanghao Hua wrote:
is Python visioned in a way to allow users to define there own operators?
No, it's not. Whenever the topic has come up, Guido has always said that he is against having user-defined syntax in Python. -- Greg
On Tue, Jun 4, 2019 at 3:24 PM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yanghao Hua wrote:
is Python visioned in a way to allow users to define their own operators?
No, it's not. Whenever the topic has come up, Guido has always said that he is against having user-defined syntax in Python.
Did Guido say "user defined syntax" or "user defined operator"? I love Python's syntax and I don't see the need to change it. But for me defining new operators is different, and we just got a few more recently (@, @=, :=). The way Scala allows users to define new operators is really elegant, and this is actually the driving force for designing new DSLs in Scala.
Yanghao Hua wrote:
Did Guido say "user defined syntax" or "user defined operator"? ... for me defining new operator is different
To my mind it's not that much different. A reader encountering an unfamiliar operator is pretty much faced with a new piece of syntax to learn. Its spelling gives little to no clue as to its meaning, its precedence in relation to other operators is not obvious, etc.
The way Scala allows users to define new operators is really elegant,
There are some differences between Python and Scala that present practical difficulties here. Some kind of declaration would be needed to define its arity, precedence and corresponding dunder method. This is no problem in Scala because it analyses imported code at compile time. But the Python compiler only sees the module being compiled, so it has no ability to import these kinds of declarations from another module. Plus, the way the parser works would probably have to be completely redesigned. So anyone suggesting user-defined operators be added to Python has to both convince the community that it's a good idea in the first place, and come up with solutions to all the technical problems. -- Greg
On Wed, Jun 5, 2019 at 12:52 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yanghao Hua wrote:
Did Guido say "user defined syntax" or "user defined operator"? ... for me defining new operator is different
To my mind it's not that much different. A reader encountering an unfamiliar operator is pretty much faced with a new piece of syntax to learn. Its spelling gives little to no clue as to its meaning, its precedence in relation to other operators is not obvious, etc.
The way Scala allows users to define new operators is really elegant,
There are some differences between Python and Scala that present practical difficulties here. Some kind of declaration would be needed to define its arity, precedence and corresponding dunder method. This is no problem in Scala because it analyses imported code at compile time. But the Python compiler only sees the module being compiled, so it has no ability to import these kinds of declarations from another module.
My understanding of how Scala achieves this is: whenever it sees an expression of the form "token1 token2 token3", it evaluates token1.token2(token3). So token2 is assumed to be a method of token1 that always takes one argument. With my very limited understanding of CPython internals (at least from when I implemented <==), it seems CPython searches for an operator and then translates it into a method call on the objects. So what if, instead of telling Python to search for e.g. the __add__ method, it searched for int.+ (the "+" method) instead? Not sure if this helps at all; I believe there are tons of other problems too. This is definitely NOT an easy thing to add after the fact if not thought through from the very beginning.
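Incidentally, something close to Scala's "any name can be infix" style can be faked inside Python's existing grammar by abusing a real operator's reflected method. This is the well-known "pseudo-infix" trick, sketched here with illustrative names (`Infix` and `send` are not from any library):

```
class Infix:
    """Wraps a two-argument function so that `a |op| b` calls op(a, b)."""
    def __init__(self, func):
        self.func = func

    def __ror__(self, left):
        # `left | op`: left's own __or__ fails, so Python falls back to us;
        # return a partially-applied Infix awaiting the right operand.
        return Infix(lambda right: self.func(left, right))

    def __or__(self, right):
        # `... | right`: complete the call.
        return self.func(right)

send = Infix(lambda sig, val: sig.append(val))   # toy "send value to signal"
wire = []
wire |send| 7
assert wire == [7]
```

It works, but as the thread notes, precedence is fixed by `|` and the spelling gives no hint of the meaning.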
On Wed, 5 Jun 2019 at 09:06, Yanghao Hua <yanghao.py@gmail.com> wrote:
With my very limited understanding of cpython internals (at least when I implement <==) it seems cpython is searching for an operator and then translates it into a method call on the objects
Not really. The valid operators are hard coded into the parser and lexer (see https://docs.python.org/3/reference/lexical_analysis.html#operators and https://docs.python.org/3/reference/expressions.html#unary-arithmetic-and-bi... onwards) and there's no way at runtime for the user to introduce a new operator. So any new operator requires a very low-level change to the core of Python itself. (Not necessarily a *hard* change, but you do have to build a custom Python interpreter). I suspect you know this (as you've implemented <==) so I'm not entirely clear why you think Scala's approach (which as I understand it allows arbitrary operator symbols to be defined at runtime, and is therefore different at a very fundamental level) is relevant. Paul
On Wed, Jun 5, 2019 at 6:08 PM Yanghao Hua <yanghao.py@gmail.com> wrote:
On Wed, Jun 5, 2019 at 12:52 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yanghao Hua wrote:
Did Guido say "user defined syntax" or "user defined operator"? ... for me defining new operator is different
To my mind it's not that much different. A reader encountering an unfamiliar operator is pretty much faced with a new piece of syntax to learn. Its spelling gives little to no clue as to its meaning, its precedence in relation to other operators is not obvious, etc.
The way Scala allows users to define new operators is really elegant,
There are some differences between Python and Scala that present practical difficulties here. Some kind of declaration would be needed to define its arity, precedence and corresponding dunder method. This is no problem in Scala because it analyses imported code at compile time. But the Python compiler only sees the module being compiled, so it has no ability to import these kinds of declarations from another module.
My understanding of how Scala achieves this is: whenever it sees an expression of the form "token1 token2 token3", it evaluates token1.token2(token3). So token2 is assumed to be a method of token1 that always takes one argument.
With my very limited understanding of CPython internals (at least from when I implemented <==), it seems CPython searches for an operator and then translates it into a method call on the objects. So what if, instead of telling Python to search for e.g. the __add__ method, it searched for int.+ (the "+" method) instead? Not sure if this helps at all; I believe there are tons of other problems too. This is definitely NOT an easy thing to add after the fact if not thought through from the very beginning.
Part of the reason you can't just treat the + operator as a method call is that there are reflected methods. Consider:

```
class Int(int):
    def __radd__(self, other):
        print("You're adding %s to me!" % other)
        return 1234

x = Int(7)
print(x + 1)
print(1 + x)
```

If these were implemented as x.__add__(1) and (1).__add__(x), the second one would use the default implementation of addition. The left operand would be the only one able to decide how something should be implemented. ChrisA
On Wed, Jun 5, 2019 at 11:31 AM Chris Angelico <rosuav@gmail.com> wrote:
Part of the reason you can't just treat the + operator as a method call is that there are reflected methods. Consider:
```
class Int(int):
    def __radd__(self, other):
        print("You're adding %s to me!" % other)
        return 1234

x = Int(7)
print(x + 1)
print(1 + x)
```
If these were implemented as x.__add__(1) and (1).__add__(x), the second one would use the default implementation of addition. The left operand would be the only one able to decide how something should be implemented.
Yep, just did an experiment in Scala, where you can do x + 1, but not 1 + x. So it loses some flexibility in terms of how you write your expression, but still, it looks OK to only write x + 1, and when you write 1 + x you get an error immediately.
First off, I have admittedly not read all of this thread. However, as a designer of DSLs in Python, I wanted to jump in on a couple of things I have seen suggested. Sorry if I am repeating comments already made. Over the last few years I have thought about every suggestion I have seen in this thread, and they all don't work or are undesirable for one reason or another.

Regarding what code will become simpler if an assignment operator was available: I currently walk the AST to rewrite assignments into the form I want. This code is really hard to read if you are not familiar with the Python AST, but the task it is performing is not hard to understand at all (replace assignment nodes with calls to a function dsl_assign(target_names, value, globals(), locals())). The dsl_assign function basically performs some type checking before doing the assignment. Once again the code is much harder to understand than it should be, as it operates on names of variables and the globals/locals dictionaries instead of on the variables themselves. Also, to get the hook into the AST I have to use an importer, which further obscures my code and makes use kinda annoying, as one has to do the following:

```
main.py:
import dsl       # sets up the importer to rewrite the AST
import dsl_code  # the code which should morally be the main module, but must be imported after dsl
```

Granted, anything in a function or a class can be rewritten with a decorator, but module-level code must be rewritten by an importer.

The problem with overloading obj @= value: As Yanghao has pointed out, @ comes with expectations of behavior. Granted, I would gamble most developers are unaware that @ is a Python operator, but it still has meaning, and as such I don't like abusing it.

The problem with using obj[:] = value: Similar to @, getitem and slices have meaning which I don't necessarily want to override. Granted, this is the least objectionable solution I have seen, although it creates weird requirements for types that have __getitem__ so it can just be used by inheriting `TypedAssignment` or something similar.

The problem with descriptors: They are hard to pass to functions. For example, consider trying to fold assignment:

```
signals = [Signal('x'), Signal('y'), Signal('z')]
out = functools.reduce(operator.iassign, signals)
```

vs

```
signals = SignalNamespace()
signal_names = ['x', 'y', 'z']
out = functools.reduce(lambda v, name: setattr(signals, name, v), signal_names)
```

In general one has to pass the name of the signal and the namespace to a function instead of the signal itself, which is problematic.

The problem with exec: First off it's totally unpythonic, but even if I hide the exec with importer magic it still doesn't give me the behavior I want. Consider the following:

```
class TypeCheckDict(dict, MutableMapping):  # dict needed to be used as globals
    """
    Dictionary which binds keys to a type on first assignment, then
    type checks on future assignment. Will infer type if not already bound.
    """
    __slots__ = '_d'

    def __init__(self, d=_MISSING):
        if d is _MISSING:
            d = {}
        self._d = d

    def __getitem__(self, name):
        v = self._d[name][1]
        if v is _MISSING:
            raise ValueError()
        else:
            return v

    def __setitem__(self, name, value):
        if name not in self._d:
            if isinstance(value, type):
                self._d[name] = [value, _MISSING]
            else:
                self._d[name] = [type(value), _MISSING]
        elif isinstance(value, self._d[name][0]):
            self._d[name][1] = value
        else:
            raise TypeError(f'{value} is not a {self._d[name][0]}')

    # __len__ __iter__ __delitem__ just dispatch to self._d

S = '''
x = int
x = 1
x = 'a'
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # raises TypeError: 'a' is not a int

S = '''
def foo():  # type of foo inferred
    x = int
    x = 'a'
foo()
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # doesn't raise an error as a normal dict is used in foo
```

Caleb Donovick

On Thu, Jun 6, 2019 at 12:54 AM Yanghao Hua <yanghao.py@gmail.com> wrote:
On Mon, Jun 10, 2019 at 8:57 PM Caleb Donovick <donovick@cs.stanford.edu> wrote:
First off, I have admittedly not read all of this thread. However, as a designer of DSLs in Python, I wanted to jump in on a couple of things I have seen suggested. Sorry if I am repeating comments already made. Over the last few years I have thought about every suggestion I have seen in this thread, and they all don't work or are undesirable for one reason or another.
Glad to see I am not alone in the woods :-) So the argument that this problem is only Yanghao's -- a single person's problem -- is gone. You have to rephrase it to "this is only two persons' problem" now. ;-)
Regarding what code will become simpler if an assignment operator was available: I currently walk the AST to rewrite assignments into the form I want. This code is really hard to read if you are not familiar with the Python AST, but the task it is performing is not hard to understand at all (replace assignment nodes with calls to a function dsl_assign(target_names, value, globals(), locals())). The dsl_assign function basically performs some type checking before doing the assignment. Once again the code is much harder to understand than it should be, as it operates on names of variables and the globals/locals dictionaries instead of on the variables themselves. Also, to get the hook into the AST I have to use an importer, which further obscures my code and makes use kinda annoying, as one has to do the following:

```
main.py:
import dsl       # sets up the importer to rewrite the AST
import dsl_code  # the code which should morally be the main module, but must be imported after dsl
```

Granted, anything in a function or a class can be rewritten with a decorator, but module-level code must be rewritten by an importer.
I thought about doing that a lot ... eventually manipulating the AST is not much easier than actually re-developing the entire DSL from scratch ... and it eventually suffers similar issues to overriding @= or L[:]: it abuses a common understanding. I have also been looking into MacroPy3 for some time now; besides still needing something like P[your customized expression] (the P[...] overhead, which is exposed to end users), changing an existing Python syntax to mean something completely different, or translating a non-existent Python syntax into something else, really makes me feel things will become mentally inconsistent, and actually makes Python no longer Python ... And this doesn't even touch what it means for debugging later on ...
The problem with overloading obj @= value: As Yanghao has pointed out, @ comes with expectations of behavior. Granted, I would gamble most developers are unaware that @ is a Python operator, but it still has meaning, and as such I don't like abusing it.
Yep.
The problem with using obj[:]=value: Similar to @ getitem and slices have meaning which I don't necessarily want to override. Granted this is least objectionable solution I have seen although, it creates weird requirements for types that have __getitem__ so it can just be used by inheriting `TypedAssignment` or something similar.
Exactly. I will try to summarize all the pros and cons I saw people posting for the L[:] case; the thing for a DSL is that L[:] is confusing on its own (not better than obj.next = ...).
The problem with descriptors: They are hard to pass to functions. For example, consider trying to fold assignment:

```
signals = [Signal('x'), Signal('y'), Signal('z')]
out = functools.reduce(operator.iassign, signals)
```

vs

```
signals = SignalNamespace()
signal_names = ['x', 'y', 'z']
out = functools.reduce(lambda v, name: setattr(signals, name, v), signal_names)
```

In general one has to pass the name of the signal and the namespace to a function instead of the signal itself, which is problematic.
The problem with exec: First off it's totally unpythonic, but even if I hide the exec with importer magic it still doesn't give me the behavior I want. Consider the following:

```
class TypeCheckDict(dict, MutableMapping):  # dict needed to be used as globals
    """
    Dictionary which binds keys to a type on first assignment, then
    type checks on future assignment. Will infer type if not already bound.
    """
    __slots__ = '_d'

    def __init__(self, d=_MISSING):
        if d is _MISSING:
            d = {}
        self._d = d

    def __getitem__(self, name):
        v = self._d[name][1]
        if v is _MISSING:
            raise ValueError()
        else:
            return v

    def __setitem__(self, name, value):
        if name not in self._d:
            if isinstance(value, type):
                self._d[name] = [value, _MISSING]
            else:
                self._d[name] = [type(value), _MISSING]
        elif isinstance(value, self._d[name][0]):
            self._d[name][1] = value
        else:
            raise TypeError(f'{value} is not a {self._d[name][0]}')

    # __len__ __iter__ __delitem__ just dispatch to self._d

S = '''
x = int
x = 1
x = 'a'
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # raises TypeError: 'a' is not a int

S = '''
def foo():  # type of foo inferred
    x = int
    x = 'a'
foo()
'''
exec(S, TypeCheckDict(), TypeCheckDict())  # doesn't raise an error as a normal dict is used in foo
```
Python provides us all the flexibility to do all kinds of "fancy" things, so flexible that we can make it *NOT* look like Python at all. I have done similar things for descriptors, and I am still wondering whether it was the right approach to make a descriptor not behave the way a descriptor is supposed to behave.
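To make the descriptor limitation discussed above concrete, here is a minimal sketch (all names hypothetical, not from any library): a data descriptor can intercept `ns.x = value`, but the interception is tied to attribute access on an owning namespace, which is why a function must be handed the namespace plus an attribute name rather than the signal itself:

```
class SignalDescriptor:
    """Hypothetical signal slot: intercepts assignment, but only
    through attribute access on the class that owns it."""
    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self.name)

    def __set__(self, obj, value):
        # a real HDL DSL would type-check or schedule the update here
        obj.__dict__[self.name] = value

class SignalNamespace:
    x = SignalDescriptor()

ns = SignalNamespace()
ns.x = 7            # intercepted by __set__ ...
assert ns.x == 7    # ... but only ever via the namespace, never via "x" alone
```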
On 11 Jun 2019, at 09:50, Yanghao Hua <yanghao.py@gmail.com> wrote:
On Mon, Jun 10, 2019 at 8:57 PM Caleb Donovick <donovick@cs.stanford.edu <mailto:donovick@cs.stanford.edu>> wrote:
First off, I have admittedly not read all of this thread. However, as designer of DSL's in python, I wanted to jump in on a couple of things I have seen suggested. Sorry If I am repeating comments already made. Over the last few years I have thought about every suggestion I have seen in this thread and they all don't work or are undesirable for one reason or another.
Glad to see I am not alone in the woods :-) So the argument that this problem is only Yanghao -- a single person's problem -- can be gone. You have to rephrase it to "this is only two person's problem" now. ;-)
Sometimes a DSL is usable within the Python syntax, and that is great. I have used Python for a number of DSLs. But when the DSL is beyond what Python can help with directly, I'm wondering why you do not parse the DSL with Python and execute the results. In that way you can have any semantics that you want, from any syntax that you wish to have, and you do not need the Python language to be changed at all. And it must be clear that you are making little to no progress on convincing people that changing Python is a good idea. Barry
On Tue, Jun 11, 2019 at 11:35 PM Barry Scott <barry@barrys-emacs.org> wrote:
Sometimes a DSL is usable within the Python syntax, and that is great. I have used Python for a number of DSLs.
But when the DSL is beyond what python can help with directly I'm wondering why you do not parse the DSL with python and execute the results.
In that way you can have any semantics that you want from any syntax that you wish to have. However you do not need the python language to be changed at all.
And it must be clear that you are making little to no progress on convincing people that changing python is a good idea.
Barry
Hi Barry, I realized that. Please allow me some time to formalize everything. You are absolutely right that writing a new parser might not be a bad idea, but Python is just soooo close to being usable as the DSL language itself. We do not even need a DSL for HDL in this case, as Python can co-simulate with any HDL simulator that supports VPI/DPI/etc.; but writing a full-featured HDL is really just a few hundred lines of code, which lets you throw away the entire HDL simulator (hundreds of thousands of lines), plus the hundreds of lines of glue code between Python and the HDL simulator. Developing a dedicated parser eventually does not save much more effort than using the existing one. And more important is the integration of Python and the DSL, as we want to use Python not only to write hardware, but to write tests for hardware, which can be far more complex than the hardware itself. And Python's ability to develop new test scenarios fast is the key here.
On Wed, Jun 12, 2019 at 5:26 PM Yanghao Hua <yanghao.py@gmail.com> wrote:
If Python is really THAT close, then devise two syntaxes: an abstract syntax for your actual source code, and then a concrete syntax that can be executed. It's okay for things to be a little bit ugly (like "signal[:] = 42") in the concrete form, because you won't actually be editing that. Then your program just has to transform one into the other, and then run the program. ChrisA
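Chris's two-syntax idea could be sketched, under the simplifying assumption of one statement per line, as a naive line-based rewrite from the abstract form `x <== expr` to the concrete, runnable form `x[:] = expr` (`to_concrete` is a hypothetical helper, not an existing tool):

```
import re

# Rewrite "name <== expr" to "name[:] = expr", one statement per line.
SEND = re.compile(r'^(\s*)(\w+)\s*<==\s*(.+)$')

def to_concrete(src):
    return '\n'.join(
        SEND.sub(r'\1\2[:] = \3', line) for line in src.splitlines())

assert to_concrete("x <== 3") == "x[:] = 3"
assert to_concrete("y = 1") == "y = 1"   # ordinary lines pass through
```

The one-statement-per-line assumption is exactly what the next reply pokes a hole in: backslash continuations make a purely line-based rewrite unsound.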
On Wed, Jun 12, 2019 at 9:39 AM Chris Angelico <rosuav@gmail.com> wrote:
If Python is really THAT close, then devise two syntaxes: an abstract syntax for your actual source code, and then a concrete syntax that can be executed. It's okay for things to be a little bit ugly (like "signal[:] = 42") in the concrete form, because you won't actually be editing that. Then your program just has to transform one into the other, and then run the program.
Thought about that too ... but as you can imagine, you can write:

```
x <== 3      # or
x \
    <== 3    # or
x \
\
... \
    <== 3    # This is crazy but valid python syntax!
# more crazy ones are skipped ...
```

so this is not a simple text replacement problem; eventually you end up writing a Python parser? Or an HDL parser.
On Thu, Jun 13, 2019 at 6:51 AM Yanghao Hua <yanghao.py@gmail.com> wrote:
On Wed, Jun 12, 2019 at 9:39 AM Chris Angelico <rosuav@gmail.com> wrote:
If Python is really THAT close, then devise two syntaxes: an abstract syntax for your actual source code, and then a concrete syntax that can be executed. It's okay for things to be a little bit ugly (like "signal[:] = 42") in the concrete form, because you won't actually be editing that. Then your program just has to transform one into the other, and then run the program.
Thought about that too .... but as you can imagine, you can write:
```
x <== 3      # or
x \
    <== 3    # or
x \
\
... \
    <== 3    # This is crazy but valid python syntax!
# more crazy ones are skipped ...
```
so this is not a simple text replacement problem, eventually you end up writing a python parser? Or a HDL parser.
Yes, you would need some sort of syntactic parser. There are a couple of ways to go about it. One is to make use of Python's own tools, like the ast module; the other is to mandate that your specific syntax be "tidier" than the rest of the Python code, which would permit you to use a more naive and simplistic parser (even a regex). ChrisA
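Using Python's own tools as suggested here, a minimal sketch of the assignment-rewriting Caleb describes earlier is possible with `ast.NodeTransformer` (the `dsl_assign` hook below is a stand-in that just records what it saw; a real DSL would type-check):

```
import ast

SEEN = {}

def dsl_assign(name, value):
    """Stand-in for the real DSL hook: record the assignment."""
    SEEN[name] = value
    return value

class AssignRewriter(ast.NodeTransformer):
    """Replace `x = expr` with a call `dsl_assign('x', expr)`."""
    def visit_Assign(self, node):
        target = node.targets[0]
        if isinstance(target, ast.Name):
            call = ast.Expr(ast.Call(
                func=ast.Name('dsl_assign', ast.Load()),
                args=[ast.Constant(target.id), node.value],
                keywords=[]))
            return ast.fix_missing_locations(ast.copy_location(call, node))
        return node

tree = AssignRewriter().visit(ast.parse("x = 1 + 2"))
exec(compile(tree, '<dsl>', 'exec'), {'dsl_assign': dsl_assign})
assert SEEN == {'x': 3}
```

Hooking this into `import` (as Caleb's importer does) is the fiddly part; the transform itself is short.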
Barry, the reason I use Python and don't parse syntax directly is that I want to have Python as a meta-programming environment for my DSLs. I can mostly work within the Python syntax (with some pretty heavy metaclasses); I rarely have to touch the AST. There are only two places where I ever have to touch the AST: in assignment statements and in control flow. There's no easy way around needing to rewrite control flow, but an assignment operator would drastically decrease the amount of AST manipulation I do. In class bodies it is easy to redefine what assignment means; in every other context it's very annoying, and I don't see why that must be the case. Caleb On Wed, Jun 12, 2019 at 2:28 PM Chris Angelico <rosuav@gmail.com> wrote:
Caleb Donovick writes:
In class bodies it is easy to redefine what assignment means, in every other context its very annoying, I don't see why that must be the case.
It's because Python doesn't actually have assignment to variables, it has binding to names. So there's no "there" there to provide a definition of assignment. In a class definition, the "local variables" are actually attributes of the class object. That class object provides the "there", which in turn allows redefinition via a metaclass. Of course this doesn't *have* to be the case. But in Python it is. AFAICS making assignment user-definable would require the compiler to be able to determine the type of the LHS in every assignment statement in order to determine whether name binding is meant or the name refers to an object which knows how to assign to itself. I don't see how to do that without giving up a lot of the properties that make Python Python, such as duck-typing. Steve
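The class-body case Stephen mentions can be made concrete: inside a class body, "assignment" is really `__setitem__` on the namespace mapping returned by the metaclass's `__prepare__`, so there is a "there" to hook. A minimal sketch (hypothetical names, a toy type check standing in for real DSL semantics):

```
class TypedNamespace(dict):
    """Class-body namespace that intercepts assignments."""
    def __setitem__(self, name, value):
        # rebinding an existing name to a different type is an error
        if name in self and not isinstance(value, type(self[name])):
            raise TypeError(f"{name} rebound to a different type")
        super().__setitem__(name, value)

class Meta(type):
    @classmethod
    def __prepare__(mcls, name, bases):
        return TypedNamespace()   # used for the class body's "locals"

class Signals(metaclass=Meta):
    x = 1
    x = 2          # fine: still an int
    # x = 'a'      # would raise TypeError at class-definition time

assert Signals.x == 2
```

Outside a class body there is no such mapping to override, which is Stephen's point: module and function assignments bind names directly, with no object in the middle.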
It's because Python doesn't actually have assignment to variables, it has binding to names. So there's no "there" there to provide a definition of assignment. In a class definition, the "local variables" are actually attributes of the class object. That class object provides the "there", which in turn allows redefinition via a metaclass.
I understand this, and while I would love to have metamodules and metafunctions to provide me a 'there', that is for another thread. I don't really want to change the semantics of =. What Yanghao and I are asking for is an in-place update/assign operator which isn't burdened with numeric meaning.

Caleb Donovick

On Thu, Jun 13, 2019 at 1:42 PM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Caleb Donovick writes:
I don't really want to change the semantic of =. What Yanghao and I are asking for is an in-place update/assign operator which isn't burdened with numeric meaning.
And what I'm asking for is a justification for that. Python in general has done fine without it for almost 3 decades. I believe you that you have so far not found a way to make a pretty DSL without it, and similarly for Yanghao's HDL. But it's far from obvious that none exists that most Pythonistas would find satisfactory.

It's easy to understand why NumPy wanted an additional operator: it's well-known and easily verified by looking at any numerical analysis textbook that matrices can benefit from having notation for a special multiplication operation as well as for all elementwise numerical operations. Making NumPy's job easier makes Python better for literally millions of Python users.

So far the request for an in-place update operator seems to fail on both counts. "Need" fails for lack of examples. "Broad benefit" could be implied by "need" and a bit of imagination applied to concrete examples, but on the face of it seems unlikely because of the lack of persistent voices to date, and "need" itself hasn't been demonstrated.

Maybe you'll persuade enough committers without examples. Maybe the problem will be solved en passant if the "issubclass needs an operator" thread succeeds (I've already suggested to Yanghao offlist that Guido's suggested spelling of "<:" seems usable for "update", even though in that thread it's a comparison operator). But both would require a lot of luck IMO.

Steve
On Tue, Jun 18, 2019 at 10:57 AM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Maybe you'll persuade enough committers without examples. Maybe the problem will be solved en passant if the "issubclass needs an operator" thread succeeds (I've already suggested to Yanghao offlist that Guido's suggested spelling of "<:" seems usable for "update", even though in that thread it's a comparison operator). But both would require a lot of luck IMO.
I must have overlooked it ... <: seems good to me. I do agree with you this needs more materialized evidence; I am working on it, in a few areas beyond just DSL/HDL.

For now I have abandoned my local change to cpython and settled on list assignment: signal[:] = thing. This in most cases does not conflict with numeric operations, nor with list operations (HDL signals are both numbers and lists of individual signals), and it aligns with what slice assignment means for a general Python list.

Though, I am really looking forward to the success of the <: operator as well ;-)
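The signal[:] = thing approach works because slice assignment is an operation on the object, not a rebinding of the name; a minimal sketch (the class name and `value` attribute are illustrative, not Yanghao's actual HDL API):

```python
class Signal:
    def __init__(self, value=0):
        self.value = value

    def __setitem__(self, key, value):
        # Only the full slice sig[:] is accepted as "send a value".
        if key != slice(None, None, None):
            raise TypeError("drive a Signal with sig[:] = value")
        self.value = value

sig = Signal()
alias = sig          # both names are bound to the same object
sig[:] = 42          # mutates the object; no name is rebound
print(alias.value)   # 42
```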
I have been following this discussion for a long time, and coincidentally I recently started working on a project that could make use of assignment overloading. (As an aside, it is a configuration system for an astronomical data analysis pipeline that makes heavy use of descriptors to work around historical decisions and backward compatibility.) Our system makes use of nested chains of objects, descriptors, and proxy objects to manage where state is actually stored. The whole system could collapse down nicely if there were assignment overloading. This works OK most of the time, but sometimes at the end of the chain things can become quite complicated. I was new to this code base and tasked with making some additions to it, and wished for an assignment operator, but knew the data binding model of Python was incompatible.

This got me thinking. I didn't actually need to overload assignment per se; data binding could stay just how it was, but if there were a magic method that worked similarly to how __get__ works for descriptors, but was called on any variable lookup (if the method was defined), it would allow for something akin to assignment. For example:

```python
import weakref

class Foo:
    def __init__(self):
        self.value = 6
        self.myself = weakref.ref(self)

    def important_work(self):
        print(self.value)

    def __get_self__(self):
        return self.myself

    def __setattr__(self, name, value):
        self.value = value

foo = Foo()  # Create an instance
foo          # The interpreter would return foo.myself
foo.value    # The interpreter would return foo.myself.value
foo = 19     # The interpreter would run foo.myself = 19,
             # which would invoke foo.__setattr__('myself', 19)
```

I am being naive in some way, I am sure, possibly as to how the interpreter could be made to do this chaining, but I figured I would weigh in in case this message could spark some thought.

On Tue, Jun 18, 2019 at 5:41 AM Yanghao Hua <yanghao.py@gmail.com> wrote:
And what I'm asking for is a justification for that. Python in general has done fine without it for almost 3 decades. I believe you that you have so far not found a way to make a pretty DSL without it, and similarly for Yanghao's HDL. But it's far from obvious that none exists that most Pythonistas would find satisfactory.
I have found a way to make a pretty DSL; as I stated earlier in the thread, I rewrite the AST. From a user standpoint the problem is mostly moot. From a developer standpoint, rewriting the AST is an incredibly painful way to operate.
So far the request for an in-place update operator seems to fail on both counts. "Need" fails for lack of examples. "Broad benefit" could be implied by "need" and a bit of imagination applied to concrete examples, but on the face of it seems unlikely because of the lack of persistent voices to date, and "need" itself hasn't been demonstrated.
Both Yanghao and I have provided examples; what precisely do you want in an example? Do you want my DSL code? Do you want the implementation of the AST rewriter?

As for broader impact, a whole range of common operations could be unified by an assign-in-place operator (stealing some from that thread):

```
context_var.set(val)  # possibly the most glaring place in the standard library where an assign operator would be beautiful
lst[:] = new_list  # while a common python idiom, this certainly isn't the most obvious syntax and only works on lists
dct.clear(); dct.update(new_dict)  # to achieve the same thing as above with a dict or set
numpy.copyto(array, new_array)  # to achieve the same as above, note array[:] = new_array is an error
```

If we want to extend the discussion beyond assign-in-place to a general write operator, we can add to the list:

```
coroutine.send(args)
process.communicate(args)
file.write(arg)
```

Caleb Donovick

On Tue, Jun 18, 2019 at 3:43 PM nate lust <natelust@linux.com> wrote:
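For contrast, the existing idioms in the list above really are in-place updates: every alias of the object observes the change, which plain rebinding cannot do. A quick demonstration:

```python
lst = [1, 2, 3]
lst_alias = lst
lst[:] = [9, 8]                    # in-place list replacement
assert lst_alias == [9, 8]         # the alias sees the new contents

dct = {"a": 1}
dct_alias = dct
dct.clear(); dct.update({"b": 2})  # the dict equivalent
assert dct_alias == {"b": 2}

n = 10
n_alias = n
n = 20                             # plain rebinding: alias unaffected
assert n_alias == 10
```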
On Jun 18, 2019, at 12:43, nate lust <natelust@linux.com> wrote:
I have been following this discussion for a long time, and coincidentally I recently started working on a project that could make use of assignment overloading. (As an aside it is a configuration system for a astronomical data analysis pipeline that makes heavy use of descriptors to work around historical decisions and backward compatibility). Our system makes use of nested chains of objects and descriptors and proxy object to manage where state is actually stored. The whole system could collapse down nicely if there were assignment overloading. However, this works OK most of the time, but sometimes at the end of the chain things can become quite complicated. I was new to this code base and tasked with making some additions to it, and wished for an assignment operator, but knew the data binding model of python was incompatible from p.
This got me thinking. I didnt actually need to overload assignment per-say, data binding could stay just how it was, but if there was a magic method that worked similar to how __get__ works for descriptors but would be called on any variable lookup (if the method was defined) it would allow for something akin to assignment.
What counts as “variable lookup”? In particular:
For example:
```python
class Foo:
    def __init__(self):
        self.value = 6
        self.myself = weakref.ref(self)

    def important_work(self):
        print(self.value)
```
… why doesn’t every one of those “self” lookups call self.__get_self__()? It’s a local variable being looked up by name, just like your “foo” below, and it finds the same value, which has the same __get_self__ method on its type. The only viable answer seems to be that it does. So, to avoid infinite circularity, your class needs to use the same kind of workaround used for attribute lookup in classes that define __getattribute__ and/or __setattr__:
```python
    def important_work(self):
        print(object.__get_self__(self).value)

    def __get_self__(self):
        return object.__get_self__(self).myself
```
But even that won’t work here, because you still have to look up self to call the superclass method on it. I think it would require some new syntax, or at least something horrible involving locals(), to allow you to write the appropriate methods.
```python
    def __get_self__(self):
        return self.myself
```
Besides recursively calling itself for that “self” lookup, why doesn’t this also call weakref.ref.__get_self__ for that “myself” lookup? It’s an attribute lookup rather than a local namespace lookup, but surely you need that to work too, or as soon as you store a Foo instance in another object it stops overloading. For this case there’s at least an obvious answer: because weakref.ref doesn’t override that method, the variable lookup doesn’t get intercepted. But notice that this means every single value access in Python now has to do an extra special-method lookup that almost always does nothing, which is going to be very expensive.
```python
    def __setattr__(self, name, value):
        self.value = value
```
You can’t write __setattr__ methods this way. That assignment statement just calls self.__setattr__('value', value), which will endlessly recurse. That’s why you need something like the object method call to break the circularity. Also, this will take over the attribute assignments in your __init__ method. And, because it ignores the name and always sets the value attribute, it means that self.myself = ... is just going to override value rather than setting myself. To solve both of these problems, you want a standard __setattr__ body here:
```python
    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
```
But that immediately makes it obvious that your __setattr__ isn’t actually doing anything, and could just be left out entirely.
```python
foo = Foo()  # Create an instance
foo          # The interpreter would return foo.myself
foo.value    # The interpreter would return foo.myself.value
foo = 19     # The interpreter would run foo.myself = 19,
             # which would invoke foo.__setattr__('myself', 19)
```
For this last one, why would it do that? There’s no lookup here at all, only an assignment. The only way to make this work would be for the interpreter to look up the current value of the target on every assignment before assigning to it, so that lookup could be overloaded. If that were doable, then assignment would already be overloadable, and this whole discussion wouldn’t exist.

But, even if you did add that, __get_self__ is just returning the value self.myself, not some kind of reference to it. How can the interpreter figure out that the weakref.ref value it got came from looking up the name “myself” on the Foo instance? (This is the same reason __getattr__ can’t help you override attribute setting, and a separate method __setattr__ is needed.) To make this work, you’d need a __set_self__ to go along with __get_self__. Otherwise, your changes not only don’t provide a way to do assignment overloading, they’d break assignment overloading if it existed.

Also, all of the extra stuff you’re trying to add on top of assignment overloading can already be done today. You just want a transparent proxy: a class whose instances act like a reference to some other object, and delegate all methods (and maybe attribute lookups and assignments) to it. This is already pretty easy; you can define __getattr__ (and __setattr__) to do it dynamically, or you can do some clever stuff to create static delegating methods (and properties) explicitly at object-creation or class-creation time. Then foo.value returns foo.myself.value, foo.important_work() calls the Foo method but foo.__str__() calls foo.myself.__str__(), and you can even make it pass isinstance checks if you want. The only thing it can’t do is overload assignment.

I think the real problem here is that you’re thinking about references to variables rather than values, and overloading operators on variables rather than values, and neither of those makes sense in Python.
Looking up, or assigning to, a local variable named “foo” is not an operation on “the foo variable”, because there is no such thing; it’s an operation on the locals namespace.
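The transparent proxy Andrew describes can be sketched in a few lines with the dynamic __getattr__/__setattr__ approach; the names (Proxy, Config, _target) are illustrative, not an existing API:

```python
class Proxy:
    # Delegates attribute reads and writes to a wrapped target object.
    def __init__(self, target):
        object.__setattr__(self, "_target", target)

    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. for everything
        # except _target itself.
        return getattr(object.__getattribute__(self, "_target"), name)

    def __setattr__(self, name, value):
        setattr(object.__getattribute__(self, "_target"), name, value)

class Config:
    pass

c = Config()
p = Proxy(c)
p.value = 19        # lands on the wrapped object, not the proxy
print(c.value)      # 19
```

Note that dunder methods like __str__ are looked up on the type, so __getattr__ does not intercept them; delegating those needs the static method-generation Andrew alludes to. And rebinding the name p itself, of course, remains plain rebinding.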
On Wed, Jun 12, 2019 at 11:27 PM Chris Angelico <rosuav@gmail.com> wrote:
Yes, you would need some sort of syntactic parser. There are a couple of ways to go about it. One is to make use of Python's own tools, like the ast module; the other is to mandate that your specific syntax be "tidier" than the rest of the Python code, which would permit you to use a more naive and simplistic parser (even a regex).
Yep ... I just tried to use MacroPy3 to handle this but failed; it seems MacroPy3 expects valid Python syntax in the first place for anything else to happen. I also tried the raw ast module, which seems to be the same case. So if one wants to use the Python ast module to parse user input, the user input has to be valid Python syntax (e.g. no <==) in the first place. Seems this is a chicken-and-egg problem.
On Thu, Jun 13, 2019 at 12:52 AM Yanghao Hua <yanghao.py@gmail.com> wrote:
Attaching the trace:

```
Traceback (most recent call last):
  File "tt.py", line 34, in <module>
    main()
  File "tt.py", line 8, in main
    tree = ast.parse(source.read())
  File "/usr/lib/python3.6/ast.py", line 35, in parse
    return compile(source, filename, mode, PyCF_ONLY_AST)
  File "<unknown>", line 1
    x <== 3
         ^
SyntaxError: invalid syntax
```
On Wed, Jun 12, 2019 at 7:56 PM Yanghao Hua <yanghao.py@gmail.com> wrote:
As I mentioned very early on in this seemingly-never-ending discussion, there is a way to do things similar to what you want to do, by doing transformations on the source code prior to execution (prior to constructing an AST).

Here's the link to the set of examples which I have done as demonstrations of this technique: https://github.com/aroberge/experimental/tree/master/experimental/transforme...

André Roberge
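One workable middle ground between a raw regex and a full parser is the standard tokenize module: the tokenizer does not enforce the grammar, so it happily splits <== into '<=' followed by '=', and the token stream can be rewritten before ast.parse() ever sees it. A sketch (mapping <== onto the slice-assignment spelling used elsewhere in the thread; real code would need to handle targets more complex than a bare name):

```python
import io
import tokenize

def transform(source: str) -> str:
    """Rewrite 'sig <== expr' into 'sig[:] = expr' at the token level."""
    toks = list(tokenize.generate_tokens(io.StringIO(source).readline))
    out = []
    i = 0
    while i < len(toks):
        t = toks[i]
        nxt = toks[i + 1] if i + 1 < len(toks) else None
        if (t.type == tokenize.OP and t.string == "<=" and nxt is not None
                and nxt.type == tokenize.OP and nxt.string == "="
                and t.end == nxt.start):   # '<=' immediately followed by '='
            out += [(tokenize.OP, "["), (tokenize.OP, ":"),
                    (tokenize.OP, "]"), (tokenize.OP, "=")]
            i += 2
            continue
        out.append((t.type, t.string))
        i += 1
    return tokenize.untokenize(out)

src = transform("x <== [3]")   # yields source equivalent to: x[:] = [3]
ns = {"x": [0]}
exec(src, ns)
print(ns["x"])  # [3]
```

Because this works on tokens rather than lines, spacing and backslash continuations no longer matter, sidestepping the "crazy but valid" cases mentioned earlier.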
_______________________________________________ Python-ideas mailing list -- python-ideas@python.org To unsubscribe send an email to python-ideas-leave@python.org https://mail.python.org/mailman3/lists/python-ideas.python.org/ Message archived at https://mail.python.org/archives/list/python-ideas@python.org/message/Z3FVIL... Code of Conduct: http://python.org/psf/codeofconduct/
On Thu, Jun 13, 2019 at 1:02 AM Andre Roberge <andre.roberge@gmail.com> wrote:
As I mentioned very early on in this seemingly-never-ending discussion, there is a way to do things similar to what you want to do, by doing transformations on the source code prior to execution (prior to constructing an AST).
Here's the link to the set of examples which I have done as demonstrations of this technique: https://github.com/aroberge/experimental/tree/master/experimental/transforme...
I think your readme file already explains why. I do have a solution (multiple solutions, actually); this is the quest for the final mile to make it look concise, intuitive and ***easy to implement***.

    Given two mathematical terms or expressions a and b, they can occur:
    - on a single line
    - immediately following an assert keyword
    - immediately following an if keyword
    However, in the current implementation, anything else will fail.
Yanghao Hua wrote:
You are absolutely right in that writing a new parser might not be a bad idea, but it is just soooo close for enabling Python to be used as the DSL language.
You seem to be arguing for this as an enabler for using Python for DSLs in general, but you're really only talking about one particular DSL, i.e. your HDL. Do you have any evidence that this is the single missing piece that will be of great benefit to other DSLs? Or will the next person who wants a DSL be asking for yet another piece of new syntax to support their particular application, etc.?

-- Greg
On Wed, Jun 12, 2019 at 10:10 AM Greg Ewing <greg.ewing@canterbury.ac.nz> wrote:
Yanghao Hua wrote:
You are absolutely right in that writing a new parser might not be a bad idea, but it is just soooo close for enabling Python to be used as the DSL language.
You seem to be arguing for this as an enabler for using Python for DSLs in general, but you're really only talking about one particular DSL, i.e. your HDL.
Do you have any evidence that this a single missing piece that will be of great benefit to other DSLs? Or will the next person who wants a DSL be asking for yet another piece of new syntax to support their particular application, etc?
I believe the "<==" operator has more use than just for HDL design (e.g. for replacing descriptors as described earlier ... but it also has its cons, e.g. it is no longer transparent for the normal assignment "="). This might help other DSLs but definitely not making Python feature complete for all DSLs. Just like @ operator pushed into Python, once there are proven use cases, I believe Python will keep adding more? This feels painful but ... what can we do? Waiting more operators to come or try to allow users to define new operators in python itself?
On 04/06/2019 13:38, Steven D'Aprano wrote:
(2) Because things which act different should look different, and things which act similar should look similar. Yanghao Hua wants an operator which suggests a kind of assignment. Out of the remaining set of operators, which one do you think suggests assignment?
You're a step ahead of me, Steven :-) I still haven't been convinced that a new operator is appropriate. -- Rhodri James *-* Kynesim Ltd
On Tue, Jun 4, 2019 at 7:21 AM Rhodri James <rhodri@kynesim.co.uk> wrote:
On 04/06/2019 11:06, Yanghao Hua wrote:
[...] what I needed is an operator that does not collide with all existing number/matrix operators.
Why?
That's the question that in all your thousands of words of argument you still haven't answered beyond "because I want it."
All existing operators are actually already meaningful for HDLs, so using one for assignment would mean it couldn't be used for its "natural" operation. As Yanghao mentioned, the likeliest candidate would be the in-place matmul operator (@=), but there are use cases where matrix multiplication of signals would actually be useful too.
Cody Piersall writes:
would be the in-place matmul operator (@=) but there are use cases where matrix-multiplication of signals would actually be useful too.
If I recall correctly, the problem that the numeric community faced was that there are multiple "multiplication" operations that matrices "want to" support with operator notation, because they're all frequently used in more or less complex expressions, not that matrix algebra needs to spell its multiplication operator differently from "*".

According to the OP, signals are "just integers". Integers do not need to support matrix multiplication because they *can't*. There may be matrices of signals that do want to support multiplication, but that will be a different type, and presumably multiplication of signal matrices will be supported by "*". Can you say that signal matrices will have more than one frequently needed "multiplication" operation?
On Tue, Jun 4, 2019 at 7:28 PM Stephen J. Turnbull <turnbull.stephen.fw@u.tsukuba.ac.jp> wrote:
Your statement about the history is absolutely correct, but please notice that matrix dot-multiplication of 1xN @ Nx1 yields a single value, so something like the below holds:

    signal_result <== [sig sig sig ...] @ [sig, sig, sig, ...]

(',' used to make the second an Nx1, for example). Now imagine you want to mix @= with @: in one place @= means signal assignment, in another it means matrix multiplication. What's more, once you start using matrices to drive signal matrices, I assume a lot of matrix ops, including @=, are going to be used very often to produce the stimuli (e.g. using normal number-matrix ops to generate the desired result matrix to drive onto a signal matrix). That's why I refrained from using @= entirely.
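Yanghao's collision can be illustrated without numpy: once a signal-vector type gives @ its natural dot-product meaning (1xN @ Nx1 producing a single value), reusing @= for "send" would make the same symbol mean two unrelated things. A toy sketch (SigVec is an illustrative name, not an actual HDL class):

```python
class SigVec:
    def __init__(self, vals):
        self.vals = list(vals)

    def __matmul__(self, other):
        # The natural meaning of @ for vectors: a dot product.
        return sum(a * b for a, b in zip(self.vals, other.vals))

row = SigVec([1, 2, 3])
col = SigVec([4, 5, 6])
print(row @ col)  # 1*4 + 2*5 + 3*6 = 32
```

If @= were co-opted for signal assignment, code mixing stimulus generation (matrix math) with signal driving would use the same operator for both, which is exactly the ambiguity being avoided.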
Stephen J. Turnbull wrote:
There may be matrices of signals that do want to support multiplication, but that will be a different type, and presumably multiplication of signal matrices will be supported by "*".
Then you lose the ability for '*' to represent elementwise multiplication of signal arrays. It's the same problem that numpy faced. -- Greg
Cody Piersall wrote:
As Yanghao mentioned, the likeliest to use would be the in-place matmul operator (@=) but there are use cases where matrix-multiplication of signals would actually be useful too.
My question on that is whether matrix multiplication of signals is likely to be used so heavily that it would be a hardship to represent it some other way, such as a function. -- Greg
participants (19)
- Andre Roberge
- Andrew Barnert
- Angus Hollands
- Barry Scott
- Caleb Donovick
- Chris Angelico
- Cody Piersall
- Eric V. Smith
- Greg Ewing
- Jeroen Demeyer
- nate lust
- Paul Moore
- Rhodri James
- Ricky Teachey
- Ryan Gonzalez
- Stephen J. Turnbull
- Steven D'Aprano
- Terry Reedy
- Yanghao Hua