Hi everyone,

I have been following several discussions over the past couple of weeks concerning the speed of certain parts of the code and the size of the code base. While I think the sentiments behind these discussions are absolutely right, the limited resources we have require us to make choices about what we do and when we do it. My views are:

- Apart from a few libraries, like socket() and I/O, it doesn't matter for the time being that the library code is extremely slow. We want to find some application where we can make a big difference early, rather than try to be everything to everyone from the start. Things like zlib and cryptography can be very slow, as long as they are correct.

- For core language functionality, speed matters a lot. Apart from the JIT work, which is still a long-term project, I can see a couple of useful avenues for improving speed. One is optimising calls, which Samuele has more thoughts about. The other is to dig into the list operation microbenchmarks, where our performance is a lot worse than CPython 2.4 (see the small sketch at the end of this message). List operations make up a very large part of most real Python applications (like all the zillion web frameworks), and improving them may increase our chances of adoption before the JIT work is usable.

- I agree that the size of the project in lines of code is getting problematic, but I think it would be a mistake to start doing anything about it at this point. We are still not doing well enough on completeness and correctness to start simplifying the code. We need to tackle one aspect at a time; otherwise we won't be able to tell whether we are making progress or just making things worse.

Our planned work to make Django run is very much about completeness and correctness. Let's keep our focus there for a while.

Jacob
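
P.S. For concreteness, here is a minimal sketch of the kind of list operation microbenchmark I have in mind. It uses only the standard timeit module; the particular operations and sizes are just illustrative choices of mine, not the actual microbenchmarks in our tree. Run it under CPython and under our interpreter and compare the per-operation times.

    import timeit

    # Each entry: (label, statement timed in the loop, setup code run once)
    BENCHMARKS = [
        ("append",    "lst.append(1)",  "lst = []"),
        ("index",     "lst[500]",       "lst = list(range(1000))"),
        ("slice",     "lst[100:200]",   "lst = list(range(1000))"),
        ("concat",    "lst + lst",      "lst = list(range(1000))"),
        ("contains",  "999 in lst",     "lst = list(range(1000))"),
        ("sort_copy", "sorted(lst)",    "lst = list(range(1000, 0, -1))"),
    ]

    def run(number=20000):
        for label, stmt, setup in BENCHMARKS:
            t = timeit.Timer(stmt, setup)
            # Take the best of three runs to reduce noise from the machine
            best = min(t.repeat(repeat=3, number=number))
            # Report microseconds per single execution of the statement
            print("%-10s %8.3f usec" % (label, best * 1e6 / number))

    if __name__ == "__main__":
        run()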