[IPython-dev] ipcluster and PBS engines

Antonio González Peña antgonza at gmail.com
Mon Nov 3 15:11:04 EST 2014


I was just looking at qstat, as I was expecting to see 10 actual jobs
running. Expanding on that, what I was expecting is that if I have,
say, 32 cores (4 nodes/workers) in my whole cluster and I specify
--n 10, then I get 10 engines distributed across the 4 nodes. Is this
possible?
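For what it's worth, the usual cause of ending up with a single local engine set is that ipcluster was never told to use the PBS launchers, so it falls back to the default local launcher. A minimal sketch of ipcluster_config.py, assuming IPython 2.x's built-in PBS launchers (the queue name in the template is a placeholder you would adjust for your site):

```python
# ipcluster_config.py -- sketch, assuming IPython 2.x's stock PBS launchers
c = get_config()

# Submit both the controller and the engines through PBS instead of
# launching them locally (the local launcher is the default, which is
# what produces 1 controller + 1 local engine set).
c.IPClusterStart.controller_launcher_class = 'PBS'
c.IPClusterEngines.engine_launcher_class = 'PBS'

# Number of engines; the PBS engine launcher submits one job array
# of this size, so qstat shows a single array job, not 10 jobs.
c.IPClusterEngines.n = 10

# Optional: a custom batch template ('batch' queue is a placeholder).
# {n} and {profile_dir} are filled in by ipcluster.
c.PBSEngineSetLauncher.batch_template = """#!/bin/sh
#PBS -q batch
#PBS -t 1-{n}
ipengine --profile-dir={profile_dir}
"""
```

With that in place, `ipcluster start --n 10` should show one array job in qstat, and `len(parallel.Client().ids)` should report 10 once the engines have connected.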

On Mon, Nov 3, 2014 at 12:35 PM, MinRK <benjaminrk at gmail.com> wrote:
> It should create one PBS job array that starts ten engines. Are you only
> looking at qstat, or are you looking at the actual number of engines that
> end up connected via a parallel.Client?
>
> -MinRK
>
>
> On Mon, Nov 3, 2014 at 10:17 AM, Antonio González Peña <antgonza at gmail.com>
> wrote:
>>
>> Hi,
>>
>> Let's say that I have a system with n nodes and m cores and I would
>> like to start 10 engines using PBS. Note that n*m > 10 and all my
>> filesystems are shared, so that's not a problem. My current issue is
>> that if I do:
>> ipcluster start --n 10
>> and/or have in my ipcluster_config.py
>> c.IPClusterEngines.n = 10
>> I always get 1 controller and 1 engine vs. having 1 controller and 10
>> engines submitted to PBS.
>>
>> The other possibility is that I'm misunderstanding how this works.
>>
>> Any help would be greatly appreciated.
>>
>> --
>> Antonio
>> _______________________________________________
>> IPython-dev mailing list
>> IPython-dev at scipy.org
>> http://mail.scipy.org/mailman/listinfo/ipython-dev
>
>
>
>



-- 
Antonio
