[Chicago] desktop development

Tal Liron tal.liron at threecrickets.com
Mon Dec 6 18:26:47 CET 2010


Serialization is often not terrible, but still utterly unnecessary if 
you are using threads. And there are many other costs to using multiple 
processes: multiple instances of your virtual machine (and thus more 
memory use), no way to share JIT optimizations between them, no way to 
share resource pools such as database connections, high costs for 
negotiating access to shared resources, etc. etc.
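Roughly, the serialization cost looks like this (the payload here is invented for illustration): a thread hands the object over by reference, while a worker process receives a pickled copy, paying both CPU time and duplicated memory.

```python
import pickle
import time

# A hypothetical payload: 100,000 small records, the kind of thing two
# tasks might exchange. Threads share this object directly; a worker
# process would receive a pickled copy.
payload = [{"id": i, "value": i * 0.5} for i in range(100_000)]

start = time.perf_counter()
blob = pickle.dumps(payload, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
elapsed = time.perf_counter() - start

print(f"serialized size: {len(blob)} bytes, round trip: {elapsed:.4f} s")
assert restored == payload and restored is not payload  # equal, but a copy
```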

I guess IPC could be an interesting panel, but it really shouldn't be. 
:) More interesting to me would be a panel on strategies to deal with 
threads on the small scale and in massively multi-core environments, 
possibly with a comparison to single-threaded strategies, such as 
non-blocking I/O. Unfortunately, most of this discussion would leave 
Python behind. Python does a lot of things elegantly and well, but 
threading isn't one of them.
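For what it's worth, the single-threaded alternative mentioned above can be sketched with nothing but the stdlib: one event loop multiplexing non-blocking sockets, no threads at all. (`socketpair()` stands in here for real network connections.)

```python
import selectors
import socket

# One selector, one loop, zero threads: the selectors module reports
# which sockets are ready, and the loop services them as they become so.
sel = selectors.DefaultSelector()
left, right = socket.socketpair()
for s in (left, right):
    s.setblocking(False)

sel.register(right, selectors.EVENT_READ)
left.sendall(b"ping")

received = b""
while received != b"ping":
    for key, _events in sel.select(timeout=1):
        received += key.fileobj.recv(1024)

print(received)  # b'ping'
sel.close()
left.close()
right.close()
```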

An interesting panel might be about strategies and principles in desktop 
application responsiveness. Too many programmers don't take the user 
experience seriously enough and don't plan well for failure. What if the 
resource you are trying to access is not responding? A hanging 
application is terrible. The alternative, though, is to make your 
operation asynchronous, which adds numerous challenges to the 
programmer: threads (or processes), status notifications to the user, 
throttling, etc. We could possibly offer various solutions and give 
examples in various GUI frameworks.
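The core pattern is the same in any GUI framework, so here is a framework-neutral sketch (the main loop is simulated by a plain loop; in Tk it would be `root.after`, in Qt a timer or signal): the slow operation runs in a worker thread and posts status messages to a queue, which the main loop drains without ever blocking.

```python
import queue
import threading
import time

# The worker never touches the GUI; it only posts messages. The main
# loop polls the queue and updates widgets, so the app stays responsive.
updates = queue.Queue()

def slow_operation():
    for pct in (25, 50, 75, 100):
        time.sleep(0.01)          # stand-in for real work or slow I/O
        updates.put(("progress", pct))
    updates.put(("done", None))

threading.Thread(target=slow_operation, daemon=True).start()

progress_log = []
while True:
    kind, value = updates.get()   # a real GUI loop would use get_nowait()
    if kind == "done":
        break
    progress_log.append(value)    # here: update a progress bar widget

print(progress_log)  # [25, 50, 75, 100]
```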

Oh, and I can't make it this week.

-t

On 12/06/2010 10:29 AM, sheila miguez wrote:
> How expensive have you found serializing data between processes to be?
> It hasn't been too bad for us. How large are your payloads? On the
> tail end we have a few MB but those are very infrequent. I think the
> network hops are the most expensive part in that. At some point it was
> nice to compress but you have to see if the trade off for compression
> time is worth it.
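A rough way to check that compression trade-off for a given payload (the data below is invented): measure the bytes saved against the time spent compressing, then weigh both against your link speed.

```python
import time
import zlib

# Repetitive data compresses well; random data would not. Try this with
# a sample of your actual payloads before deciding.
payload = b"some fairly repetitive record data, " * 50_000

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
compress_time = time.perf_counter() - start

ratio = len(compressed) / len(payload)
print(f"{len(payload)} -> {len(compressed)} bytes "
      f"({ratio:.1%}) in {compress_time * 1000:.1f} ms")
assert zlib.decompress(compressed) == payload  # lossless round trip
```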
>
> Iirc Garrett gave a talk on things like thrift, protocol buffers, etc?
> Or maybe it was only thrift. We use protocol buffers. Very nice
> serialization time and size compared to java serialization.
>
> I have not done any desktop stuff in forever. What would it be
> communicating with? a bunch of distributed services?
>
> Maybe this would make an interesting panel, but I don't know if
> everyone could make it. I'd like to hear Bryan and you discuss pros
> and cons.
>
> Ps. I am trawling for talks since we have a meeting this week.
>
> On Sat, Dec 4, 2010 at 1:41 AM, Tal Liron<tal.liron at threecrickets.com>  wrote:
>> Threading isn't always necessary, true. But not all processes can be easily
>> split into tiny tasks that you can insert into the event loop. And any
>> application which deals with outside libraries and resources is likely one
>> of these.
>>
>>
>> It's true that spawning a separate process is cheap, but it's also true that
>> you can't share memory with it. The overhead of serializing data between
>> processes can be more than that of synchronizing thread access. Another
>> problem (rare in desktop applications, but standard in server applications)
>> is that you might need a lot of concurrent tasks. If each task is a
>> process, you'll take up a lot of memory. I think spawning a process per
>> task should be avoided like the plague! It's the easy solution for the
>> programmer, but a wasteful one in resources.
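A crude comparison of the two costs being traded off here (the payload size is invented): pickling data for another process versus taking a lock so a thread can read the same object in place.

```python
import pickle
import threading
import time

# What a process hand-off pays: a full serialize/deserialize copy.
# What a thread hand-off pays: acquiring a lock; the object is shared.
data = list(range(200_000))

start = time.perf_counter()
pickle.loads(pickle.dumps(data))
serialize_cost = time.perf_counter() - start

lock = threading.Lock()
start = time.perf_counter()
with lock:
    shared = data                 # no copy: same object, by reference
lock_cost = time.perf_counter() - start

print(f"serialize: {serialize_cost * 1000:.2f} ms, "
      f"lock+share: {lock_cost * 1000:.4f} ms")
assert shared is data
```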
>>
>>
>> It seems to me that exactly in dynamic languages like Python, where
>> data hiding is frowned upon and all classes are open, you would want the
>> same openness for inter-task communication. And yet, I get this
>> anti-threading sentiment all the time in the Python community. Which is why
>> CPython has a GIL, why threading tools are almost non-existent in the
>> standard library, and why most Python platforms are such a problematic
>> choice for server platforms. This is something that Python culture should
>> embrace more, but I don't see it happening, as this sentiment seems
>> ingrained at the very top.
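The GIL point can be seen directly (CPython-specific; exact timings will vary by machine): CPU-bound work split across two threads does not finish about twice as fast, because only one thread executes Python bytecode at a time.

```python
import threading
import time

# Pure-Python CPU work: threads gain nothing under the GIL. A C
# extension that releases the GIL, or Jython, would behave differently.
def count(n):
    while n:
        n -= 1

N = 2_000_000

start = time.perf_counter()
count(N); count(N)
serial = time.perf_counter() - start

t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
start = time.perf_counter()
t1.start(); t2.start(); t1.join(); t2.join()
threaded = time.perf_counter() - start

# On CPython the two times are usually close, despite two cores sitting idle.
print(f"serial: {serial:.3f} s  threaded: {threaded:.3f} s")
```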
>>
>>
>> -Tal
>>
>>
>>
>> On 12/03/2010 09:02 PM, Bryan Oakley wrote:
>>
>>> On Fri, Dec 3, 2010 at 7:50 PM, Tal Liron<tal.liron at threecrickets.com>
>>>   wrote:
>>>> Any GUI that needs to run "something" in the background would need an
>>>> extra
>>>> thread, if you want to keep the app responsive and have a good user
>>>> experience (asynchronous notifications, progress bars, etc.)
>>> That's not necessarily true. All GUIs have an event loop that is
>>> typically 90%+ idle. All those idle cycles can be used to do
>>> processing. If you take the time to divide up your work into small
>>> chunks you can easily do that work when the main thread is otherwise
>>> idle. Progress bars and asynchronous I/O are perfect cases for that
>>> (except in performance critical operations, of course).
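The chunking idea can be sketched framework-neutrally (in Tk the driver would be `widget.after(0, step)` rather than a plain loop): write the long job as a generator, and let the event loop run one small chunk per idle pass, handling events between chunks.

```python
# Each yield hands control back to the event loop, so the UI stays
# responsive while the work proceeds 100 items at a time.
def long_job(items, chunk_size=100):
    total = 0
    for i in range(0, len(items), chunk_size):
        total += sum(items[i:i + chunk_size])
        yield total            # one chunk done; let the loop breathe

task = long_job(list(range(1_000)))

result = None
for partial in task:           # each iteration = one idle-time chunk
    result = partial           # a real app would repaint/handle events here

print(result)  # 499500, the full sum, computed in small chunks
```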
>>>
>>> It's also possible (and IMHO more desirable) to push this heavy
>>> lifting into a separate process. That to me is a better solution than
>>> threads because separate processes are less complex than multiple
>>> threads. And in this day and age (certainly not true a dozen years
>>> ago!) the overhead of spawning another process is negligible.
>>>
>>> Sometimes, yes, a second thread is useful. I don't dispute that
>>> sometimes they are necessary. I just don't agree they are always
>>> necessary. Obviously, YMMV. To me, multiple threads are like modal
>>> dialog boxes: they should be avoided like the plague, but sometimes
>>> they are the exact right tool for the job.
>>> _______________________________________________
>>> Chicago mailing list
>>> Chicago at python.org
>>> http://mail.python.org/mailman/listinfo/chicago
>
>

