Dear yt:
Is there a way to annotate a sphere on a plot? I used annotate_sphere, but it
plots only the outline of a circle; I would like a "filled circle" to
represent a star.
I have a Python script (see below) to create a sphere, but I am not sure how
to integrate it with yt.
Any suggestions?
Thank you in advance.
#--------------------------------------------------------------------------
import numpy as np

def create_sphere_coords(radius=10):
    """
    Return a set of x, y, z coordinates for generating a sphere.
    """
    r = radius
    pi = np.pi
    cos = np.cos
    sin = np.sin
    phi, theta = np.mgrid[0:pi:101j, 0:2 * pi:101j]
    x = r * sin(phi) * cos(theta)
    y = r * sin(phi) * sin(theta)
    z = r * cos(phi)
    return (x, y, z)
#--------------------------------------------------------------------------
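For what it's worth, the parametrization above can be sanity-checked without yt: every generated point should sit exactly at distance radius from the origin. A minimal, self-contained sketch (it repeats the function so it runs standalone; assumes only numpy):

```python
import numpy as np

def create_sphere_coords(radius=10):
    """Return x, y, z coordinate arrays tracing the surface of a sphere."""
    phi, theta = np.mgrid[0:np.pi:101j, 0:2 * np.pi:101j]
    x = radius * np.sin(phi) * np.cos(theta)
    y = radius * np.sin(phi) * np.sin(theta)
    z = radius * np.cos(phi)
    return x, y, z

x, y, z = create_sphere_coords(radius=5)
# Distance of each point from the origin; should equal the radius everywhere.
r = np.sqrt(x**2 + y**2 + z**2)
print(np.allclose(r, 5))  # True
```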
On Tue, Jul 31, 2018 at 9:35 AM
Send yt-users mailing list submissions to yt-users@python.org
To subscribe or unsubscribe via the World Wide Web, visit https://mail.python.org/mm3/mailman3/lists/yt-users.python.org/ or, via email, send a message with subject or body 'help' to yt-users-request@python.org
You can reach the person managing the list at yt-users-owner@python.org
When replying, please edit your Subject line so it is more specific than "Re: Contents of yt-users digest..."
Today's Topics:
1. 3D integration (James Cook)
2. Re: Loading Illustris-1 data (qinyuxian@163.com)
3. parallelising derived fields (Rajika Kuruwita)
4. Re: 3D integration (Nathan Goldbaum)
5. Re: parallelising derived fields (Nathan Goldbaum)
----------------------------------------------------------------------
Date: Tue, 31 Jul 2018 11:21:35 +0100
From: James Cook
Subject: [yt-users] 3D integration
To: yt-users@python.org
Message-ID: <9EA0503A-4A9C-47B0-AD1B-FFF1DCC2C015@gmail.com>
Content-Type: text/plain; charset=utf-8

Hi All
I have been searching the documentation and can’t seem to figure out how to do this! Essentially I have a star defined via its density, and I define the edge of the star as where the density has fallen to 5% of the maximum value. I calculate this radius using rays. The star has a different ‘edge’ depending on which direction (x, y, z) you take the ray, as it is the result of a collision. I want to take in the three values for the edges and sketch an ellipsoid that covers this definition. I then want to integrate over this ellipsoid to find the total value of density. I use AMR, so total_quantity won’t work.
So I essentially want a weighted variable sum for an ellipsoid.
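The weighted sum in question can be pictured on a toy uniform grid: mask the cells whose centers fall inside the ellipsoid, then sum density times cell volume. A numpy-only sketch with hypothetical semi-axes a, b, c (real AMR cells would come from a yt data object instead):

```python
import numpy as np

# Toy uniform grid in a unit box, standing in for the AMR cells a yt
# data object would return (hypothetical setup, not actual yt calls).
n = 100
dx = 1.0 / n
x, y, z = (np.indices((n, n, n)) + 0.5) * dx - 0.5  # box-centered cell centers

# Semi-axes along x, y, z, e.g. three ray-measured "edges" (made-up values).
a, b, c = 0.4, 0.3, 0.2
inside = (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0

density = np.ones((n, n, n))  # uniform density = 1 for the check
cell_volume = dx ** 3

total = (density * cell_volume)[inside].sum()  # the weighted sum
analytic = 4.0 / 3.0 * np.pi * a * b * c       # exact ellipsoid volume
print(abs(total / analytic - 1.0) < 0.05)      # agrees to within a few percent
```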
Can anyone help?
Thanks
James
------------------------------
Date: Tue, 31 Jul 2018 10:28:51 -0000
From: qinyuxian@163.com
Subject: [yt-users] Re: Loading Illustris-1 data
To: yt-users@python.org
Message-ID: <153303293184.19124.13333046111382833804@mail.python.org>
Content-Type: text/plain; charset="utf-8"
Creating an X-ray mock with Illustris data is my urgent task, so I hope to have a powerful tool for this.
------------------------------
Date: Tue, 31 Jul 2018 10:43:25 -0000
From: "Rajika Kuruwita"
Subject: [yt-users] parallelising derived fields
To: yt-users@python.org
Message-ID: <153303380530.1883.12874042131100432760@mail.python.org>
Content-Type: text/plain; charset="utf-8"

Over my years of using yt I have created many derived fields that are dependent on other derived fields, and I have various scripts that use them. So I have compiled all the field definitions and the yt.add_field() lines into one script, which is now a module. One problem I have encountered is that the derivation of these fields does not seem to be parallelised, as made evident by the fact that the time for ds.derived_field_list to run is independent of the number of processors available, even with yt.enable_parallelism(). Is this something that is planned to be implemented in the future?
This problem is further aggravated by the fact that, after loading a file, attempting to obtain one of the fields (e.g. dd['Corrected_val_x']) seems to force the calculation of every possible field added to yt.
Has anyone determined a faster way of loading multiple derived fields?
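A toy illustration of the alternative behaviour being asked for: compute a field only when it is actually requested, caching results so each dependency runs once. The class and field names here are hypothetical stand-ins, not yt's API:

```python
# Minimal lazy field registry: a field's function runs only on first access,
# and fields nobody asks for are never evaluated at all.
class FieldRegistry:
    def __init__(self):
        self.funcs = {}
        self.cache = {}
        self.computed = []  # record of which fields actually ran

    def add_field(self, name, func):
        self.funcs[name] = func

    def __getitem__(self, name):
        if name not in self.cache:
            self.computed.append(name)
            self.cache[name] = self.funcs[name](self)
        return self.cache[name]

reg = FieldRegistry()
reg.add_field("density", lambda r: 2.0)
reg.add_field("velocity", lambda r: 3.0)
reg.add_field("momentum", lambda r: r["density"] * r["velocity"])
reg.add_field("unused", lambda r: 1.0 / 0.0)  # would raise, but is never run

print(reg["momentum"])  # 6.0
print(reg.computed)     # ['momentum', 'density', 'velocity']
```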
------------------------------
Date: Tue, 31 Jul 2018 09:24:44 -0500
From: Nathan Goldbaum
Subject: [yt-users] Re: 3D integration
To: Discussion of the yt analysis package
Message-ID: <CAJXewOkvcYSo9fbmFnAjjjKkd-MDxcGPB56VrV83xBCFKwVFbA@mail.gmail.com>
Hi James,
Did you know there's an ellipsoid data object? You could do something like this:
In [14]: el = ds.ellipsoid(ds.domain_center, 0.5, 0.25, 0.1, np.array([1, 1, 1]), np.pi/4)

In [15]: el['cell_mass']
Out[15]: YTArray([4.45777680e+38, 4.45778381e+38, 4.45778677e+38, ..., 6.03437074e+36, 8.52994081e+36, 5.87109032e+37]) g

In [16]: el['cell_volume'].to('cm**3')
Out[16]: YTArray([8.96887209e+68, 8.96887209e+68, 8.96887209e+68, ..., 5.34586435e+61, 5.34586435e+61, 5.34586435e+61]) cm**3

In [17]: for ax in 'xyz':
    ...:     plot = yt.SlicePlot(ds, ax, ('gas', 'density'), data_source=el)
    ...:     plot.save()
(see https://imgur.com/a/nqfl7yi for the resulting images using the enzo IsolatedGalaxy dataset from yt-project.org/data)
I think you could just use the same script you use with spheres but use an ellipsoid instead.
Note that in the example above I'm specifying the parameters of the ellipsoid (the first three arguments) in code units, which in Enzo are scaled to the box size; your dataset might be different (for example, if it uses CGS units internally).
I don't think the ellipsoid data object is as commonly used as the rest of the yt data objects, so if you notice any weirdness or bugs we'd love to hear about it in the form of e-mails here or issues on GitHub.
-Nathan
------------------------------
Date: Tue, 31 Jul 2018 09:33:32 -0500
From: Nathan Goldbaum
Subject: [yt-users] Re: parallelising derived fields
To: Discussion of the yt analysis package
Message-ID: <CAJXewOky3+KeA8nVPRQKa-JuTUvmAgAGo_cm0Brm-u8m0hTEGg@mail.gmail.com>
Hi,
This is definitely something that we know needs improving. We have plans for a significant overhaul of the field system and one of the major goals of the overhaul is to reduce the cost of the field detection step when loading a dataset. Currently the field system generates the derived field graph in a somewhat baroque fashion, relying on Python exception handling on chained calls to functions that operate on numpy arrays. This process is not as efficient as if we somehow encoded the derived field dependency graph symbolically and relied on the graph itself to generate the derived field list given a set of available on-disk fields.
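As a sketch of what a symbolic dependency graph buys you: once the dependencies are explicit data, the derivable-field list falls out of a topological walk with no trial evaluation at all. The field names below are illustrative stand-ins, not yt's actual registry (assumes Python 3.9+ for graphlib):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each derived field maps to the set of
# fields it needs. Names are illustrative only.
deps = {
    "velocity_magnitude": {"velocity_x", "velocity_y", "velocity_z"},
    "kinetic_energy": {"density", "velocity_magnitude"},
    "Corrected_val_x": {"kinetic_energy"},
}
on_disk = {"density", "velocity_x", "velocity_y", "velocity_z"}

# Walk the graph dependencies-first; a field is derivable if everything it
# needs is on disk or already known to be derivable.
available = set(on_disk)
derived = []
for field in TopologicalSorter(deps).static_order():
    if field in available:
        continue
    if deps.get(field, set()) <= available:
        available.add(field)
        derived.append(field)

print(derived)  # ['velocity_magnitude', 'kinetic_energy', 'Corrected_val_x']
```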
This work is ongoing and unfortunately is not ready to be used yet. As you noted field detection is not parallelized so I don't think there's much to be done architecturally to speed up your workflow right now. Hopefully in a year or so we'll be releasing a version of yt that has a much faster field detection system such that you won't notice that it's not parallelized simply because it's so much quicker!
That doesn't help you right now, of course. To be honest, I don't normally hear from users with workflows where the major overhead is the field detection step. We definitely notice when developing yt (we estimate about half the time in the unit tests is spent doing field detection over and over on different test datasets), which is why we're so gung ho on making things faster. If you could share more details about what your derived fields look like, either by sharing your code or, even better, by making a reduced minimal example that demonstrates the slowdown you're hitting, one of us might be able to suggest a way to speed up field detection for your derived fields based on something happening in your script, or possibly spot some low-hanging fruit for optimization in the field system as it currently exists in yt, if you happen to be hitting an easy-to-fix scaling issue we're not aware of yet.
-Nathan
------------------------------
Subject: Digest Footer
_______________________________________________
yt-users mailing list -- yt-users@python.org
To unsubscribe send an email to yt-users-leave@python.org
------------------------------
End of yt-users Digest, Vol 125, Issue 17
--
SK2
"Claiming that something can move faster than light is a good conversation-stopper in physics. People edge away from you in cocktail parties; friends never return phone calls. You just don't mess with Albert Einstein."