Hi Nathan,

Thank you for your reply,

1. Does it take hours to load these files on a laptop?
# Ans: No, it takes about 10 seconds.
# It takes a long time on the cluster when we run jobs in parallel, more than 20 yt.load calls together.
# We took a look at our GPFS; it looks like many IOPs flood it (reads on our GPFS are currently serial).
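One common mitigation (a suggestion, not something from this thread) is to cap how many loads hit the file system at once with a semaphore, so 20 jobs don't all issue random reads together. This is a minimal sketch: `MAX_CONCURRENT_LOADS` is a made-up knob to tune, and the placeholder body stands in for the real `yt.load` plus analysis.

```python
# Sketch: throttle concurrent snapshot loads so only a few readers hit
# GPFS at a time. The worker body is a placeholder for yt.load + analysis.
from concurrent.futures import ThreadPoolExecutor
import threading

MAX_CONCURRENT_LOADS = 4  # assumption: well under the ~20 loads that flooded GPFS

_gate = threading.BoundedSemaphore(MAX_CONCURRENT_LOADS)

def throttled_load(path):
    with _gate:  # at most MAX_CONCURRENT_LOADS threads do I/O at once
        # real code would do: ds = yt.load(path); ... analysis ...
        return f"loaded {path}"

paths = [f"snap_{i:03d}" for i in range(20)]
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(throttled_load, paths))
```

For separate batch jobs on different nodes the same idea applies, but the gate has to live outside the processes (e.g. scheduler concurrency limits) rather than in an in-process semaphore.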

2. What sort of data are you working with? How big are the data files?
# Ans: We are working on the Vela sims (ART simulations), roughly 5 GB per snapshot.

3. When you say "load" the data, is that just doing yt.load or are you
doing any I/O?
# Ans: load = yt.load, then creating a sphere object and reading gas particles from it.
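In code, that workflow presumably looks like the sketch below (the snapshot path, sphere center, and radius are placeholders, not values from this thread):

```python
def read_gas_from_sphere(snapshot_path, center="c", radius=(100.0, "kpc")):
    """Sketch of the load-then-sphere workflow described above.

    yt.load only reads metadata; the heavy I/O happens when a field
    is actually requested from the sphere object.
    """
    import yt  # deferred import so the sketch reads without yt installed

    ds = yt.load(snapshot_path)     # metadata only
    sp = ds.sphere(center, radius)  # lazy data object, still no bulk I/O
    return sp[("gas", "density")]   # bulk read from disk happens here
```

The point of the laziness is that the flood of random reads comes from the field access at the end, not from yt.load itself.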

Thanks,
Tomer

On Tue, Nov 28, 2017 at 6:31 PM, <yt-users-request@lists.spacepope.org> wrote:
Send yt-users mailing list submissions to
        yt-users@lists.spacepope.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org
or, via email, send a message with subject or body 'help' to
        yt-users-request@lists.spacepope.org

You can reach the person managing the list at
        yt-users-owner@lists.spacepope.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of yt-users digest..."


Today's Topics:

   1. Fwd: Cluster best configurations for YT (Tomer Nussbaum)
   2. Re: Fwd: Cluster best configurations for YT (Nathan Goldbaum)


----------------------------------------------------------------------

Message: 1
Date: Tue, 28 Nov 2017 11:55:56 +0200
From: Tomer Nussbaum <tomer.nussbaum@mail.huji.ac.il>
To: yt-users@lists.spacepope.org
Subject: [yt-users] Fwd: Cluster best configurations for YT
Message-ID:
        <CAOGC2jACH1Q4QLfFNi8h7p=XhqEFvnp-6g17FUv6aDN2vSvqWg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

---------- Forwarded message ----------
From: Tomer Nussbaum <tomer.nussbaum@mail.huji.ac.il>
Date: Thu, Nov 23, 2017 at 6:08 PM
Subject: Cluster best configurations for YT
To: yt-users@lists.spacepope.org



Hi,

We have recently updated our cluster, but yt runs very slowly on it
(loading one snapshot takes an hour when a couple of people run together...).
I wanted to ask if you can help solve this issue from your experience.

We use InfiniBand, and we see that the main problem is a large number of
random-access read requests to our hard drives,
so the fix may lie in fine-tuning yt on the nodes, or fine-tuning the
file system.

This raises a couple of issues (maybe more; if you have other ideas,
I would be thankful to know):

   1. *I/O reads - *Is there a way to optimize the I/O read requests
   to the file server?
   2. *yt configuration -* Are there specific parameters in the yt
   configuration for this situation?
   3. *File server behavior -* Can we tune how the server handles reads (we
   use hard drives, currently serial reads)?
   4. *Metadata cache -* Would using a metadata cache solve the issue?
   5. *zlib usage - *How can I check whether this feature is enabled, and how
   important is it in yt?


I would really appreciate any help with this,
Thanks,
Tomer

------------------------------

Message: 2
Date: Tue, 28 Nov 2017 12:42:40 +0000
From: Nathan Goldbaum <nathan12343@gmail.com>
To: Discussion of the yt analysis package
        <yt-users@lists.spacepope.org>
Subject: Re: [yt-users] Fwd: Cluster best configurations for YT
Message-ID:
        <CAJXewOk9tCL-TCTQef9Up-mTj-qgVFJp6yHZXmwQ8SReoV4h8A@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Tomer,

A couple questions:

1. Does it take hours to load these files on a laptop?

2. What sort of data are you working with? How big are the data files?

3. When you say "load" the data, is that just doing yt.load or are you
doing any I/O?

Nathan

On Tue, Nov 28, 2017 at 4:56 AM Tomer Nussbaum <
tomer.nussbaum@mail.huji.ac.il> wrote:


------------------------------

Subject: Digest Footer

_______________________________________________
yt-users mailing list
yt-users@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-users-spacepope.org


------------------------------

End of yt-users Digest, Vol 117, Issue 31
*****************************************