[Flask] uWSGI and Flask - Slow and varying response times

Ziirish ziirish at ziirish.info
Thu Jul 7 02:42:27 EDT 2016


Hello,

I think you already have your answer in the last piece of information you gave us.

Your CPU clock is twice as fast as your server's.

Now, your code is probably not multi-threaded, which means your server will take at least twice as long as your development machine to execute the same piece of code.

The x3 you are measuring is probably due to context switching or similar overhead (your kernel is probably splitting the task into several pieces to let other processes run, and those pieces then need to be synchronized).

In conclusion, the delay you are observing is due to the lower CPU clock on the server, so unless you can multi-thread your application there is not much you can do about it.
On the other hand, because you have a lot of cores, your server will be able to serve (at least) 8 simultaneous clients within the same time range.
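To actually use those cores, the usual approach is to let uWSGI fork several workers. A minimal sketch, assuming the Flask object is called app in a hypothetical app.py (the socket setup will depend on your deployment):

    [uwsgi]
    # hypothetical module:callable
    module = app:app
    master = true
    # roughly one worker per core
    processes = 8
    # threads only help if the app and its libraries are thread-safe
    threads = 2
    enable-threads = true
    # or a unix socket behind nginx
    http-socket = :8080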

You can also have a look at PyPy, as already suggested, to try to optimize the generated "bytecode". It can probably save you a few CPU cycles and thus decrease your overall compute time.
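If you do try PyPy, it helps to make the benchmark print which interpreter actually ran it, so CPython and PyPy numbers don't get mixed up; a tiny sketch:

    import platform
    import sys

    # Print the interpreter name and version before the timing output,
    # so results from CPython and PyPy runs can be told apart.
    print(platform.python_implementation(), sys.version.split()[0])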




On 7 July 2016 05:40:30 GMT+02:00, Tim van der Linden <tim at shisaa.jp> wrote:
>On Wed, 06 Jul 2016 18:28:15 +0800
>Unai Rodriguez <unai at sysbible.org> wrote:
>
>> Hi Tim,
>
>Hi Unai
>
>> For uwsgi/Python you could do the same, build a simple page and a
>more
>> complicated one and compare (no database queries to isolate).
>
>First a simple page on uWSGI using Flask and Jinja. The template only
>consists of the bare minimum for an HTML page and a single string
>variable to be printed, and the whole application is a single Python
>file with only one view. The timings are done using the Werkzeug
>profiler:
>
>Local: 7 ms for initial request, 0.5 ms for all following requests
>Server: 160 ms for initial request, 1 ms for all following requests
>
>I will count that as identical. The Snakeviz graphs also look identical
>for both.
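For reference, a minimal sketch of the kind of single-view application being profiled here; hello.html is a hypothetical bare-bones template, and the import path assumes a recent Werkzeug where the profiler middleware lives under werkzeug.middleware.profiler rather than the old werkzeug.contrib location:

    from flask import Flask, render_template
    from werkzeug.middleware.profiler import ProfilerMiddleware

    app = Flask(__name__)
    # Wrap the WSGI app so every request dumps cProfile stats to stderr.
    app.wsgi_app = ProfilerMiddleware(app.wsgi_app)

    @app.route("/")
    def index():
        # hello.html only contains a minimal HTML skeleton and {{ name }}.
        return render_template("hello.html", name="world")

    if __name__ == "__main__":
        app.run()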
>
>Then I added a few range(1, 1000) loops to be printed in Jinja2 and a
>load of simple text:
>
>Local: 9 ms for initial request, 3 ms for all following requests
>Server: 24 ms for initial request, 10 ms for all following requests
>
>Slowly, a difference between the two starts to form.
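The kind of template addition meant above is roughly the following Jinja2 fragment, which just forces the template engine to do some per-request work:

    {# hypothetical fragment added to the template #}
    {% for i in range(1, 1000) %}
      <p>Item {{ i }}</p>
    {% endfor %}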
>
>I added some more elements to the view and template, such as a WTF
>form, but the timings did not go up much. The difference stayed,
>however: the server is still three times slower than local.
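A "WTF form" here would be a small Flask-WTF form class along these lines; the field names are hypothetical, and in older Flask-WTF releases the base class is Form rather than FlaskForm:

    from flask_wtf import FlaskForm
    from wtforms import StringField, SubmitField
    from wtforms.validators import DataRequired

    class NameForm(FlaskForm):
        # Minimal example form: one text field plus a submit button.
        name = StringField("Name", validators=[DataRequired()])
        submit = SubmitField("Send")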
>
>I also added a couple of (fast) database queries in the mix, opening
>and closing both cursors and connections per query:
>
>Local: 16 ms for initial request, 11 ms for all following requests
>Server: 51 ms for initial request, 33 ms for all following requests
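A sketch of that per-query open/close pattern, using sqlite3 purely as a stand-in since the actual driver is not mentioned; table and column names are hypothetical:

    import sqlite3

    def fetch_items():
        # Open and close both the connection and the cursor for every
        # query, as described above; this adds overhead to each request.
        conn = sqlite3.connect("app.db")
        try:
            cur = conn.cursor()
            cur.execute("SELECT id, name FROM items")
            rows = cur.fetchall()
            cur.close()
            return rows
        finally:
            conn.close()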
>
>> The server/os will have to be looked at some point as well. 
>
>This was interesting, but maybe less important since it is testing
>pure CPU throughput: I yanked a primes Python script off the interwebs
>to test the raw processing speed and ran it on both machines.
>
>Local: 3.3879199028 s
>Server: 9.26460504532 s
>
>This is a script without any extras: one Python function tested with
>timeit and run directly via the Python CLI. My local machine is three
>times faster than the server.
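That kind of raw-CPU test looks roughly like the sketch below; the prime-counting function is a generic example rather than the exact script used, and running the same file under PyPy instead of CPython shows how much the JIT helps:

    import timeit

    def count_primes(limit):
        # Deliberately naive trial division, purely to burn CPU cycles.
        count = 0
        for n in range(2, limit):
            for d in range(2, int(n ** 0.5) + 1):
                if n % d == 0:
                    break
            else:
                count += 1
        return count

    if __name__ == "__main__":
        # A single timed run gives a number comparable across machines.
        print(timeit.timeit(lambda: count_primes(100000), number=1))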
>
>Installing sysbench and doing a primes test (20000) outside of Python:
>
>Local: 18.1605 seconds
>Server: 42.1942 seconds
>
>CPU info:
>
>Local: 1 x i7-4790K (4 GHz)
>Server: 8 x Xeon E7-4820 v2 (2 GHz)
>
>> -- unai
>
>Cheers,
>Tim


