Hi,

Thanks for answering. My question is: what are you doing now, without changing any yt code, to get fast computations? Is there anything that can be solved by a change in the yt configuration? (Do I understand correctly from your answer that there isn't?)

How do you handle many group members using the same files at the same time? Do you use SSD drives? Do you copy each snapshot to the node and then compute? Do you do some of what I asked about? Do you have any other special tricks?

Thanks,
Tomer

On Tue, Nov 28, 2017 at 7:53 PM, <yt-users-request@lists.spacepope.org> wrote:
Message: 1
Date: Tue, 28 Nov 2017 11:52:57 -0600
From: Matthew Turk <matthewturk@gmail.com>
To: Discussion of the yt analysis package <yt-users@lists.spacepope.org>
Subject: Re: [yt-users] yt-users Digest, Vol 117, Issue 31
Hi Tomer,
Both the ARTIO and NMSU-ART frontends could definitely be improved in terms of how many disk accesses they make, but like Kacper said, getting a handle on where they are occurring would be the best first step, I think. Then we can take it from there.
On Tue, Nov 28, 2017 at 11:25 AM, Kacper Kowalik <xarthisius.kk@gmail.com> wrote:
Hi Tomer, unfortunately we don't have any means of fine-tuning yt's I/O at a low level. Short of profiling the ART frontend and trying to decrease the number of IOPS, I'm afraid there's not much to be done.
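For example, a minimal profiling sketch along those lines, with a placeholder snapshot path and gas density as an illustrative field (cProfile will show which frontend routines dominate the load time):

import cProfile
import pstats

import yt

def load_and_read():
    # Placeholder path and field; substitute one of your own ART snapshots.
    ds = yt.load("/path/to/snapshot.d")
    sp = ds.sphere("c", (100.0, "kpc"))
    sp[("gas", "density")]  # this field access triggers the actual disk reads

profiler = cProfile.Profile()
profiler.enable()
load_and_read()
profiler.disable()

# Sort by cumulative time; entries dominated by low-level read/seek calls
# point at where the frontend is hitting the disk hardest.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(30)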
Having said that, I'd also try to debug your GPFS with something simpler than yt. Maybe a single HDF5 file with a large array, with each process simultaneously reading a random row/column?
20 concurrent reads sounds like something GPFS should handle without performance degrading from ~10 seconds to hours.
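A minimal sketch of that kind of test, assuming h5py and numpy are available on the cluster (the file path, array sizes, and number of readers below are placeholders):

import time
import multiprocessing as mp

import numpy as np
import h5py

FNAME = "/gpfs/scratch/io_test.h5"  # placeholder path on the shared filesystem
NROWS, NCOLS = 1024, 131072         # ~1 GB of float64; scale up as needed

def write_test_file():
    with h5py.File(FNAME, "w") as f:
        dset = f.create_dataset("data", shape=(NROWS, NCOLS), dtype="f8")
        for i in range(NROWS):
            dset[i, :] = np.random.random(NCOLS)  # write real data so reads hit disk

def read_random_row(seed):
    rng = np.random.default_rng(seed)
    t0 = time.time()
    with h5py.File(FNAME, "r") as f:
        f["data"][rng.integers(NROWS), :]  # read one random row
    return time.time() - t0

if __name__ == "__main__":
    write_test_file()
    with mp.Pool(20) as pool:  # 20 concurrent readers, matching the scenario above
        timings = pool.map(read_random_row, range(20))
    print("per-read seconds:", [round(t, 2) for t in timings])

If each read stays close to single-reader speed, the slowdown is more likely in the frontend's access pattern than in GPFS itself.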
Cheers, Kacper
On 11/28/2017 11:01 AM, Tomer Nussbaum wrote:
Hi Nathan,
Thank you for answering,
1. Does it take hours to load these files on a laptop? # Ans: Nope, it takes about 10 seconds. # It takes us a lot of time on the cluster when we run the jobs in parallel, with more than 20 yt.load calls running together. # We took a look at our GPFS, and it looks like many IOPS are flooding the GPFS (it currently serves the reads serially).
2. What sort of data are you working with? How big are the data files? # Ans: We are working on the VELA sims; they are ART simulations, roughly 5 GB per snapshot.
3. When you say "load" the data, is that just doing yt.load or are you doing any I/O? # Ans: load = yt.load, then creating a sphere object and reading gas particles from it (a minimal sketch of this is below).
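A minimal sketch of that workflow, with a placeholder snapshot path and radius, and gas density as an illustrative field:

import yt

ds = yt.load("/path/to/VELA_snapshot.d")  # placeholder path to an ART snapshot
sphere = ds.sphere("c", (50.0, "kpc"))    # sphere centered on the domain center
density = sphere[("gas", "density")]      # the field read is where the heavy I/O happens
print(density.size, density.units)

Loading builds the index, but most of the gas data is only read off disk when the field is accessed on the sphere, so that step is the one most sensitive to the filesystem.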
Thanks, Tomer
On Tue, Nov 28, 2017 at 6:31 PM, <yt-users-request@lists.spacepope.org> wrote:
Message: 1
Date: Tue, 28 Nov 2017 11:55:56 +0200
From: Tomer Nussbaum <tomer.nussbaum@mail.huji.ac.il>
To: yt-users@lists.spacepope.org
Subject: [yt-users] Fwd: Cluster best configurations for YT
---------- Forwarded message ----------
From: Tomer Nussbaum <tomer.nussbaum@mail.huji.ac.il>
Date: Thu, Nov 23, 2017 at 6:08 PM
Subject: Cluster best configurations for YT
To: yt-users@lists.spacepope.org
Hi,
We have updated our cluster lately, but our yt platform runs very slowly (it loads one snapshot an hour when a couple of people run together...). I wanted to ask whether you can help solve this issue from your experience.

We use InfiniBand, and we see that the main problem is a flood of random-access read requests to our hard drives, so the fix may lie in fine-tuning yt on the nodes, or in fine-tuning the file system.

This brings up a couple of issues (maybe more; if you have another idea, I would be thankful to know):
1. *IO reads* - Is there a way to optimize the read requests sent to the file server?
2. *yt configuration* - Are there specific parameters in the yt configuration for this situation?
3. *File server behavior* - Can we tune how the server handles reads (we use hard drives, reading serially)?
4. *Metadata cache* - Would using a metadata cache solve the issue?
5. *zlib usage* - How can I check whether this feature is activated, and how important is it for yt?
I would really appreciate any help with this,
Thanx,
Tomer
------------------------------
Message: 2
Date: Tue, 28 Nov 2017 12:42:40 +0000
From: Nathan Goldbaum <nathan12343@gmail.com>
To: Discussion of the yt analysis package <yt-users@lists.spacepope.org>
Subject: Re: [yt-users] Fwd: Cluster best configurations for YT
Hi Tomer,
A couple questions:
1. Does it take hours to load these files on a laptop?
2. What sort of data are you working with? How big are the data files?
3. When you say "load" the data, is that just doing yt.load or are you doing any I/O?
Nathan