multi-core software

Arved Sandstrom dcest61 at hotmail.com
Wed Jun 10 19:55:00 EDT 2009


Jon Harrop wrote:
> Arved Sandstrom wrote:
>> Jon Harrop wrote:
>>> Arved Sandstrom wrote:
>>>> Jon Harrop wrote:
>>>>> No. Concurrent programming is about interleaving computations in order
>>>>> to reduce latency. Nothing to do with parallelism.
>>>> Jon, I do concurrent programming all the time, as do most of my peers.
>>>> Way down on the list of why we do it is the reduction of latency.
>>> What is higher on the list?
>> Correctness.
>>
>> I'm not being facetious. I write applications that run on application
>> servers, and from time to time I have had to write various special
>> purpose servers. This kind of programming is all about managing
>> concurrent execution of computations. The overarching concern is 
>> reliability and correct function. For many corporate situations, even
>> with hundreds of users, the actual load at any instant is low enough
>> that the various servers involved are nowhere close to being stressed
>> out - performance is a secondary issue.
> 
> In other words, without concurrency the latency would be so high that you
> would consider the program to be wrong. However you cut it, the real reason
> is latency.

For a certain group of applications and user loads I would concede that 
point, yes.

For quite a few other situations, you could queue up user requests and 
execute them in order, finishing each before proceeding to the next, and 
users wouldn't even notice. I wrote a J2SE server a few months ago, to 
solve a very specific problem associated with an application running on 
a J2EE server, that could handle dozens of users per second using this 
strategy. I didn't write it that way, because doing so would be perverse, but 
I could have.
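
For the curious, that queue-and-run-in-order strategy is trivial to sketch in 
Java. This is a minimal illustration, not the server I actually wrote: the 
class name SerialServer is made up, and I'm letting a single-threaded 
executor stand in for the request queue (its internal FIFO queue guarantees 
each submitted request finishes before the next one starts).

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: all requests are handed to one worker thread,
// so they execute strictly in submission order, one at a time.
public class SerialServer {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void submit(Runnable request) {
        // Requests queue up here; the single worker drains them in order.
        worker.submit(request);
    }

    public void shutdown() throws InterruptedException {
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        SerialServer server = new SerialServer();
        List<Integer> completed = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int id = i;
            server.submit(() -> completed.add(id));
        }
        server.shutdown();
        System.out.println(completed); // prints [0, 1, 2, 3, 4]
    }
}
```

No locking in the request handlers, no interleaving to reason about - which 
is exactly why correctness, not latency, is the first-order concern.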

AHS



More information about the Python-list mailing list