Hi, Everybody!
Does anyone out there have a technique for getting the variance out of
a profile object? A profile object is good at getting <X> vs. B; I'd
then like to get <(X - <X>)^2> vs. B. Matt and I had spitballed the
possibility some time ago, but I was wondering if anyone out there had
successfully done it.
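For context, here is a sketch of the two-pass approach we had spitballed, in
case it helps the discussion: build the <X> profile first, then profile the
squared residual against the same bin field. The dataset and field choices
are illustrative and I haven't verified this end to end. (I also vaguely
recall newer profile objects carrying a variance/standard_deviation
attribute for weighted profiles, which would make this unnecessary, but I
haven't confirmed that.)

import numpy as np
import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
ad = ds.all_data()

# First pass: <X> vs. B, with X = temperature and B = density (illustrative).
prof = yt.create_profile(ad, [("gas", "density")], [("gas", "temperature")],
                         weight_field=("gas", "cell_mass"))
bin_edges = prof.x_bins.d
mean_x = prof[("gas", "temperature")].d

# Second pass: a derived field that maps each cell to its bin's mean value
# and returns the squared residual, then profile that field.
def _sq_residual(field, data):
    idx = np.digitize(data[("gas", "density")].d, bin_edges) - 1
    idx = np.clip(idx, 0, len(mean_x) - 1)
    resid = data[("gas", "temperature")].d - mean_x[idx]
    return data.ds.arr(resid ** 2, "K**2")

ds.add_field(("gas", "temperature_sq_residual"), function=_sq_residual,
             units="K**2", sampling_type="cell")

prof_var = yt.create_profile(ad, [("gas", "density")],
                             [("gas", "temperature_sq_residual")],
                             weight_field=("gas", "cell_mass"))
# prof_var[("gas", "temperature_sq_residual")] is <(X - <X>)^2> vs. B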
Thanks,
d.
--
Sent from my computer.
Dear yt
Can current yt calculate 3-D mass power spectra? I checked the website but
didn't find any information. I think calculating 3-D mass power
spectra would be very useful for cosmological simulations, so I guess maybe
yt supports this function now...?
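For what it's worth, here is a sketch of how I imagine it could be done by
hand, in case there is no built-in routine: resample onto a uniform covering
grid, FFT the overdensity, and average |delta_k|^2 in spherical shells. The
dataset path is illustrative and a cubic root grid is assumed.

import numpy as np
import yt

ds = yt.load("Enzo_64/DD0043/data0043")  # illustrative cosmology dataset
n = int(ds.domain_dimensions[0])         # assumes a cubic root grid
cg = ds.covering_grid(0, left_edge=ds.domain_left_edge,
                      dims=ds.domain_dimensions)
rho = cg[("gas", "density")].d

delta = rho / rho.mean() - 1.0             # mass overdensity field
delta_k = np.fft.fftn(delta) / delta.size  # normalized Fourier transform
power = np.abs(delta_k) ** 2

# Spherically average |delta_k|^2 in shells of |k| (k in grid units).
k1d = np.fft.fftfreq(n) * n
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
pflat = power.ravel()
edges = np.arange(0.5, n // 2 + 1.0)
which = np.digitize(kmag, edges)
Pk = np.array([pflat[which == i].mean() for i in range(1, len(edges))])
k_centers = 0.5 * (edges[:-1] + edges[1:])
# (k_centers, Pk) is the isotropic power spectrum of the density field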
Thanks in advance
Dear yt:
Is there a way to annotate a sphere onto a plot? I used annotate_sphere, but
it plots only the outline of a circle; I would like a "filled circle" to
represent a star.
I have a python script (see below) to create a sphere, but I'm not sure how
to integrate it with yt.
Any suggestions?
Thank you in advance
#--------------------------------------------------------------------------
import numpy as np

def create_sphere_coords(radius=10):
    """
    Return a set of (x, y, z) coordinates sampling the surface of a
    sphere of the given radius.
    """
    r = radius
    # Sample polar angle phi in [0, pi] and azimuth theta in [0, 2*pi].
    phi, theta = np.mgrid[0:np.pi:101j, 0:2 * np.pi:101j]
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)
    return (x, y, z)
#--------------------------------------------------------------------------
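For the filled circle itself, here are two untested sketches of what I have
been trying; the dataset, field, and coordinates are illustrative.

#--------------------------------------------------------------------------
import matplotlib.pyplot as plt
import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
slc = yt.SlicePlot(ds, "z", ("gas", "density"))

# Option 1: a large filled marker at the star's position (data coordinates).
slc.annotate_marker(ds.domain_center, marker="o",
                    plot_args={"s": 500, "color": "white"})

# Option 2: add a filled matplotlib Circle patch to the underlying axes
# (placed here in axes coordinates, 0..1, for simplicity).
ax = slc.plots[("gas", "density")].axes
ax.add_patch(plt.Circle((0.5, 0.5), 0.05, transform=ax.transAxes,
                        color="white", fill=True))

slc.save("slice_with_star.png")
#--------------------------------------------------------------------------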
On Tue, Jul 31, 2018 at 9:35 AM <yt-users-request(a)python.org> wrote:
> Today's Topics:
>
> 1. 3D integration (James Cook)
> 2. Re: Loading Illustris-1 data (qinyuxian(a)163.com)
> 3. parallelising derived fields (Rajika Kuruwita)
> 4. Re: 3D integration (Nathan Goldbaum)
> 5. Re: parallelising derived fields (Nathan Goldbaum)
>
>
> ----------------------------------------------------------------------
>
> Date: Tue, 31 Jul 2018 11:21:35 +0100
> From: James Cook <jamescook.106(a)gmail.com>
> Subject: [yt-users] 3D integration
>
> Hi All
>
> I have been searching the documentation and can’t seem to figure out how
> to do this! Essentially I have a star defined via its density, and I define
> the edge of the star as where the density has fallen to 5% of the maximum
> value. I calculate this radius using rays. The star has a different ‘edge’
> depending on which direction (x, y, z) you take the ray, as it is the
> result of a collision. I want to take in the three values for the edges and
> sketch an ellipsoid that covers this definition. I then want to integrate
> over this ellipsoid to find the total density. I use AMR, so total_quantity
> won’t work.
>
> So I essentially want a weighted variable sum for an ellipsoid.
>
> Can anyone help?
>
> Thanks
>
> James
>
>
> ------------------------------
>
> Date: Tue, 31 Jul 2018 10:28:51 -0000
> From: qinyuxian(a)163.com
> Subject: [yt-users] Re: Loading Illustris-1 data
>
> Creating an X-ray mock with Illustris data is my urgent task, so I am
> hoping for a powerful tool for this.
>
> ------------------------------
>
> Date: Tue, 31 Jul 2018 10:43:25 -0000
> From: "Rajika Kuruwita" <rajika.kuruwita(a)anu.edu.au>
> Subject: [yt-users] parallelising derived fields
>
> Over my years of using yt I have created many derived fields that are
> dependent on other derived fields, and I have various scripts that use
> them. So I have compiled all the field definitions and the yt.add_field()
> calls into one script which is now a module. One problem I have encountered
> is that the derivation of these fields does not seem to be
> parallelised, as made evident by the fact that the time for
> ds.derived_field_list to run is independent of the number of processors
> available, even with yt.enable_parallelism(). Is this something that is
> planned to be implemented in the future?
>
> This problem is further aggravated by the fact that, after loading a file,
> attempting to obtain one of the fields (e.g. dd['Corrected_val_x'])
> seems to force the calculation of every possible field added to
> yt.
>
> Has anyone determined a faster way of loading multiple derived fields?
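>
> For reference, the kind of pattern my module uses looks roughly like the
> sketch below (the field names and formulas are simplified stand-ins, not
> my actual code): one derived field built from an on-disk field, and a
> second derived field that depends on the first, so field detection has to
> walk a dependency chain.
>
> import yt
>
> # A derived field built from an on-disk field...
> def _thermal_energy_density(field, data):
>     return data[("gas", "pressure")] / (5.0 / 3.0 - 1.0)
>
> yt.add_field(("gas", "thermal_energy_density"),
>              function=_thermal_energy_density,
>              units="erg/cm**3", sampling_type="cell")
>
> # ...and a derived field that depends on the derived field above.
> def _thermal_to_kinetic(field, data):
>     ke = 0.5 * data[("gas", "density")] * data[("gas", "velocity_magnitude")] ** 2
>     return data[("gas", "thermal_energy_density")] / ke
>
> yt.add_field(("gas", "thermal_to_kinetic"),
>              function=_thermal_to_kinetic,
>              units="dimensionless", sampling_type="cell")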
>
> ------------------------------
>
> Date: Tue, 31 Jul 2018 09:24:44 -0500
> From: Nathan Goldbaum <nathan12343(a)gmail.com>
> Subject: [yt-users] Re: 3D integration
>
> Hi James,
>
> Did you know there's an ellipsoid data object? You could do something like
> this:
>
> In [14]: el = ds.ellipsoid(ds.domain_center, 0.5, 0.25, 0.1,
>                            np.array([1, 1, 1]), np.pi/4)
>
> In [15]: el['cell_mass']
> Out[15]:
> YTArray([4.45777680e+38, 4.45778381e+38, 4.45778677e+38, ...,
>          6.03437074e+36, 8.52994081e+36, 5.87109032e+37]) g
>
> In [16]: el['cell_volume'].to('cm**3')
> Out[16]:
> YTArray([8.96887209e+68, 8.96887209e+68, 8.96887209e+68, ...,
>          5.34586435e+61, 5.34586435e+61, 5.34586435e+61]) cm**3
>
> In [17]: for ax in 'xyz':
>     ...:     plot = yt.SlicePlot(ds, ax, ('gas', 'density'), data_source=el)
>     ...:     plot.save()
>
> (see https://imgur.com/a/nqfl7yi for the resulting images using the enzo
> IsolatedGalaxy dataset from yt-project.org/data)
>
>
> I think you could just use the same script you use with spheres but use an
> ellipsoid instead.
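>
> To get the integrated total in a single call, the derived quantities
> interface should work on the ellipsoid just as it does on spheres (a
> sketch; the field choice is illustrative):
>
> In [18]: el.quantities.total_quantity(('gas', 'cell_mass'))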
>
> Note that in the example above I'm specifying the parameters of the
> ellipsoid (the first three arguments) in code units, which in Enzo are
> scaled to the box size; your dataset might be different (for example, if
> it uses CGS units internally).
>
> I don't think the ellipsoid data object is as commonly used as the rest of
> the yt data objects, so if you notice any weirdness or bugs we'd love to
> hear about it, either by e-mail here or as issues on GitHub.
>
> -Nathan
>
>
> ------------------------------
>
> Date: Tue, 31 Jul 2018 09:33:32 -0500
> From: Nathan Goldbaum <nathan12343(a)gmail.com>
> Subject: [yt-users] Re: parallelising derived fields
>
> Hi,
>
> This is definitely something that we know needs improving. We have plans
> for a significant overhaul of the field system, and one of the major goals
> of the overhaul is to reduce the cost of the field detection step when
> loading a dataset. Currently the field system generates the derived field
> graph in a somewhat baroque fashion, relying on Python exception handling
> around chained calls to functions that operate on numpy arrays. This
> process is not as efficient as it would be if we encoded the derived field
> dependency graph symbolically and relied on the graph itself to generate
> the derived field list given a set of available on-disk fields.
>
> This work is ongoing and unfortunately is not ready to be used yet. As you
> noted, field detection is not parallelized, so I don't think there's much
> to be done architecturally to speed up your workflow right now. Hopefully
> in a year or so we'll be releasing a version of yt that has a much faster
> field detection system, such that you won't notice that it's not
> parallelized simply because it's so much quicker!
>
> That doesn't help you right now, of course. To be honest, I don't normally
> hear from users with workflows where the major overhead is the field
> detection step. We definitely notice when developing yt (we estimate about
> half the time in the unit tests is spent doing field detection over and
> over on different test datasets), which is why we're so gung ho on making
> things faster. If you could share more details about what your derived
> fields look like, either by sharing your code or, even better, by making a
> reduced minimal example that demonstrates the slowdown you're hitting, one
> of us might be able to suggest a way to speed up field detection for your
> derived fields based on something happening in your script. It might also
> allow us to spot some low-hanging fruit for optimization in the field
> system as it currently exists in yt, if you happen to be hitting an
> easy-to-fix scaling issue we're not aware of yet.
>
> -Nathan
>
>
> ------------------------------
>
> End of yt-users Digest, Vol 125, Issue 17
> *****************************************
>
--
SK2
"Claiming that something can move faster than light is a good
conversation-stopper in physics. People edge away from you in cocktail
parties; friends never return phone calls. You just don’t mess with Albert
Einstein."
I'm having some trouble saving PDFs that are transparent using .save() with
any of the PlotContainer-inherited classes.
Typically, when I have access to the entire figure, plt.savefig has an
optional argument transparent=True that produces the desired result. I've
tried to get the underlying figure from my SlicePlot, for example, but I
also have additional annotations (contours) and I'm not sure how to add
these to the underlying figure in order to use plt.savefig.
I see that the save function in PlotContainer creates a FigureCanvasPdf
object, which then calls print_figure (which in turn calls print_pdf). This
function does not have an argument for transparency.
Thoughts?
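One workaround I'm considering (an untested sketch; the dataset and fields
are illustrative): let yt render the plot once, annotations included, then
save again through the underlying matplotlib figure, whose savefig does
accept transparent=True.

import yt

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
slc = yt.SlicePlot(ds, "z", ("gas", "density"))
slc.annotate_contour(("gas", "temperature"))

# Calling save() forces yt to render the figure, annotations and all.
slc.save("slice.png")

# Then re-save the fully rendered matplotlib figure with transparency.
fig = slc.plots[("gas", "density")].figure
fig.savefig("slice_transparent.pdf", transparent=True)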
I just noticed an issue that arises when loading an ART simulation with the
yt.load() command. When I use the command
ds = yt.load("VELA07/10MpcBox_csf512_a0.020.d"), I get this response:
yt : [INFO ] 2018-07-25 15:59:41,434 Using root level of 14
yt : [INFO ] 2018-07-25 15:59:41,460 Discovered 7 species of particles
yt : [INFO ] 2018-07-25 15:59:41,461 Particle populations: 34930688 4943872 855040 139392 21776 2088309 0
yt : [INFO ] 2018-07-25 15:59:42,098 Max level is 06
yt : [INFO ] 2018-07-25 15:59:42,167 Parameters: current_time = 0.051136094792605884 Gyr
yt : [INFO ] 2018-07-25 15:59:42,168 Parameters: domain_dimensions = [128 128 128]
yt : [INFO ] 2018-07-25 15:59:42,168 Parameters: domain_left_edge = [0. 0. 0.]
yt : [INFO ] 2018-07-25 15:59:42,169 Parameters: domain_right_edge = [1. 1. 1.]
yt : [INFO ] 2018-07-25 15:59:42,170 Parameters: cosmological_simulation = True
yt : [INFO ] 2018-07-25 15:59:42,170 Parameters: current_redshift = 48.70056421389564
yt : [INFO ] 2018-07-25 15:59:42,170 Parameters: omega_lambda = 0.7300000190734863
yt : [INFO ] 2018-07-25 15:59:42,170 Parameters: omega_matter = 0.27000001072883606
yt : [INFO ] 2018-07-25 15:59:42,171 Parameters: hubble_constant = 0.699999988079071
yt : [INFO ] 2018-07-25 15:59:42,350 discovered particle_header:/nobackupp2/sflarkin/VELA07/PMcrda0.410.DAT
yt : [INFO ] 2018-07-25 15:59:42,351 discovered particle_data:/nobackupp2/sflarkin/VELA07/PMcrs0a0.320.DAT
yt : [INFO ] 2018-07-25 15:59:42,352 discovered particle_stars:/nobackupp2/sflarkin/VELA07/stars_a0.390.dat
The bottom three lines show that when I am loading the .020 snapshot, I am
getting particle headers and data from entirely different snapshots.
To address this, I used the file_particle_header specification, as listed
here on the Loading Data page: http://yt-project.org/doc/examining/loading_data.html
However, it appears that these arguments do not support wildcards like *
for reading multiple files; when I changed my code to use one, I got this
error:
P002 yt : [ERROR ] 2018-07-25 15:43:39,114 FileNotFoundError: [Errno 2] No such file or directory: '/nobackupp2/sflarkin/VELA07test/PMcrda0*'
Is there a way to specify the correct files with wildcards so that the
proper data get loaded?
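One idea I have been toying with (a sketch; the keyword names other than
file_particle_header are my guess from the docs page above): expand the
wildcards with Python's standard glob module and pass the resolved paths
explicitly.

import glob
import yt

# Resolve the wildcard patterns to concrete file names before yt.load().
header = glob.glob("/nobackupp2/sflarkin/VELA07/PMcrda0.020*")[0]
data = glob.glob("/nobackupp2/sflarkin/VELA07/PMcrs0a0.020*")[0]
stars = glob.glob("/nobackupp2/sflarkin/VELA07/stars_a0.020*")[0]

ds = yt.load("VELA07/10MpcBox_csf512_a0.020.d",
             file_particle_header=header,
             file_particle_data=data,
             file_particle_stars=stars)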
We use mpi4py for parallelism:
https://yt-project.org/doc/analyzing/parallel_computation.html
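For example, a minimal sketch of the kind of script that page describes
(the dataset path is illustrative):

# run with: mpirun -np 4 python parallel_total_mass.py
import yt

yt.enable_parallelism()

ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")  # illustrative dataset
ad = ds.all_data()

# Derived quantities like this are computed in parallel across MPI ranks.
total_mass = ad.quantities.total_quantity(("gas", "cell_mass"))

if yt.is_root():
    print("total gas mass:", total_mass.to("Msun"))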
We are experimenting with using dask for some things, but yt predates dask
and has its own capabilities for out-of-core and parallel computation.
Deeper integration with dask is something we might explore in the future,
but we have no current concrete plans.
Also, please direct questions like this to the mailing list (cc’d) rather
than to the mailing list admins.
On Tue, Jul 24, 2018 at 7:07 PM Aaron Chu <xweichu(a)ucsc.edu> wrote:
> Dear teams,
>
> We're working on a project which needs the yt library to access BoxLib
> data, such as AMReX output, and we are considering distributing the
> computation across multiple nodes. Is there a way that yt can be used with
> a distributed computation framework like dask
> (https://dask.pydata.org/en/latest/)? Or are any features under
> development to support dask?
>
> Appreciate your time. Thanks.
>
>
> --
> Best Regards,
> Aaron Chu
>