
Talin wrote:
My overall goal here is to be able to continue writing programs in Python 10 years from now, not just as a hobby, but as part of my professional work. If Python is able to leverage the power of the CPUs that are being created at that time, I will be able to make a strong case for doing so. On the other hand, if I have a 128-core CPU on my desk, and Python is only able to utilize 1/128th of that power without resorting to tedious reasoning about race conditions and deadlocks, then it's likely that my Python programming will be relegated to the role of a hobby.
-- Talin
A very nice introduction, Talin. I will certainly look at the video tomorrow morning. Thanks.

My first thought is that it's not quite as bad as it seems, because any third-party extensions will be able to use the remaining 127/128ths of the power. ;-)

You would need to subtract any CPUs used by the OS and other concurrent processes. (These would probably continue to use more resources as well.)

I wonder if some of the CPUs would be definable for special purposes or not? Maybe 64 of them set aside for simultaneous parallel calculations?

There may be a way to express to the OS that a particular operation on a data structure be carried out as 'wide' as possible. ('Narrow' as possible being on a single CPU in a single thread.) It might be very nice for these 'wide' structures to have their own API as well.

All speculation, of course. ;-)

Cheers,
Ron
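
P.S. To make the 'wide'/'narrow' idea a bit more concrete, here is a
minimal sketch using the standard concurrent.futures module. The names
wide_map and narrow_map are hypothetical, just illustration, not a
proposal for an actual API:

    import concurrent.futures

    def wide_map(fn, data, max_workers=None):
        # Hypothetical 'wide' operation: fan the work out across as
        # many cores as the process pool will give us.
        with concurrent.futures.ProcessPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(fn, data))

    def narrow_map(fn, data):
        # The 'narrow' case: one CPU, one thread, plain iteration.
        return [fn(item) for item in data]

    def square(x):
        return x * x

    if __name__ == '__main__':
        numbers = range(1000)
        # Same results either way; only the scheduling differs.
        print(wide_map(square, numbers)[:5])    # [0, 1, 4, 9, 16]
        print(narrow_map(square, numbers)[:5])  # [0, 1, 4, 9, 16]

The nice thing about keeping the two as a pair of functions with the
same signature is that a program could ask for 'wide' execution and
fall back to 'narrow' transparently when only one core is available.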