Hello everyone, is it possible to create (or tell me its name, if it already exists) a command to evaluate the compute cost of a command/procedure, in terms of:
- processor operations (for 32- and 64-bit processors)
- RAM read and write operations
- hard disk read and write operations
- any graphics operations
This may be difficult, but it would enable users to optimise their programs.
On Sat, Feb 22, 2014 at 4:37 AM, Liam Marsh wrote:
> is it possible to create (or tell me its name) a command to evaluate compute length of a command/procedure? [...]
Here's a first-cut timing routine:

    def how_fast_is_this(func):
        return "Fast enough."

Trust me, that's accurate enough for most cases. For anything else, you need to be timing it in your actual code.

The forms of measurement you're asking for make no sense for most Python functions. The best you could do would probably be to look at the size of the compiled byte-code, but that's not going to be particularly accurate anyway. No, the only way to profile your code is to put timing points in your actual code. You may want to try the timeit module for some help with that, but the simplest is to just pepper your code with calls to time.time().

I hope there never is a function for counting the processor instructions required by a particular Python function. Apart from being nearly impossible to calculate, it would almost never be used correctly. People would warp their code around using "the one with the smaller number", when high-level languages these days should be written primarily with a view to being readable by a human. Make your code look right and act right, and worry about how fast it is only when you have evidence that it really isn't fast enough.

Incidentally, newer Python versions tend to be faster than older ones, because the developers of Python itself care about performance. A bit of time spent optimizing CPython improves the execution time of every Python script, but a bit of time spent optimizing your one script improves only that one script.

For further help with optimizing scripts, ask on python-list. We can help with back-of-the-envelope calculations (if you're concerned that your server can't handle X network requests a second, first ascertain whether your server's network connection can feed it that many a second - that exact question came up on the list a few months ago), and also with tips and tricks when you come to the optimizing itself. And who knows, maybe we can save you a huge amount of time... programmer time, which is usually more expensive than processor time :)

ChrisA
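To make the timeit suggestion above concrete, here is a minimal sketch (the function under test, build_list, is a made-up example, not from the thread). timeit runs the callable many times and returns the elapsed time; taking the minimum over several repeats reduces timing noise from the rest of the system:

```python
import timeit

def build_list():
    # A made-up workload: build a list of squares.
    return [n * n for n in range(1000)]

# Time 1000 calls, repeated 5 times; the minimum is the least-disturbed run.
best = min(timeit.repeat(build_list, number=1000, repeat=5))
print("best of 5 runs of 1000 calls: %.4f seconds" % best)
```

The same measurement could be done by bracketing the calls with time.time(), but timeit handles the repetition and uses the most precise clock available.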
On Fri, Feb 21, 2014 at 06:37:58PM +0100, Liam Marsh wrote:
> is it possible to create (or tell me its name) a command to evaluate compute length of a command/procedure? [...]
What should a hypothetical is_this_expensive() return for this function?

    def func():
        if random.random() < 0.5:
            return "done"
        else:
            while True:
                pass

If you can write an algorithm for deciding:
(1) what "expensive" means, and
(2) how to calculate it in advance,
then maybe somebody can create a Python function to perform it. But I suspect that it cannot be calculated in advance, that there's no single meaningful measurement of "expense", and that even if there were, it would be unhelpful and misleading for optimizing programs.

Instead of trying to predict in advance how expensive a procedure will be, why don't you run the procedure and measure how expensive it actually is? See the profile module for ways to measure how much time is used by different parts of the code.

-- Steven
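As an illustration of the profile-module suggestion above, here is a minimal sketch using cProfile (the faster C implementation of profile); the workload slow_sum is a made-up example, not from the thread:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately naive made-up workload: quadratic amount of work.
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(500)
profiler.disable()

# Report the five most expensive entries, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report shows, per function, how many times it was called and how much time was spent in it - exactly the "measure it, don't predict it" approach the reply recommends.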
On 21Feb2014 18:37, Liam Marsh wrote:
> is it possible to create (or tell me its name) a command to evaluate compute length of a command/procedure? [...]
By inspection of the code, without running the procedure?
In general, probably not; I think that's equivalent to the halting
problem, which is known not to have a universal solution.
I think your question needs refining. Please provide more context.
A skillful human can often look at a function and evaluate its
performance in terms of its input. Not always: it depends on the
inputs, and it also depends on which aspects of the operations are
themselves expensive. Not all functions will terminate for all
inputs, either (back to the halting problem again), so how expensive
would you call such a function?
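The non-termination point above has a famous concrete instance (a hypothetical example, not from the thread): nobody has proved that the Collatz iteration terminates for every positive integer, so no static analysis can assign this function a worst-case cost in advance:

```python
def collatz_steps(n):
    # Count iterations of the Collatz rule (halve if even, else 3n+1)
    # until reaching 1. Whether this loop terminates for *every*
    # positive integer is an open question (the Collatz conjecture),
    # so its cost cannot be bounded by inspecting the code.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

Running it is easy; predicting its running time without running it is, as far as anyone knows, not.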
Even supposing you have a suitable function (one you can inspect
and decide on how it will behave), your choices of measurement are
all a little vague and variable:
Each of your four items above is highly dependent on the system
architecture:
- different processors have different instruction sets, and different
  compilers have different optimisation possibilities (which depend
  on the specific CPU, too);
- RAM read and write operations depend on both the CPU and the memory
  architecture;
- hard disk read and write operations depend on the language's I/O
  library support, the OS buffering systems, and the hard disc
  hardware (SSD? on-disc buffering? RAID? ...);
- graphics operations are similarly variable.
Yes, a human can often look at a human-written function and give
an opinion about its cost.
Cheers,
--
Cameron Simpson
participants (4)
- Cameron Simpson
- Chris Angelico
- Liam Marsh
- Steven D'Aprano