Hi all,
Currently, to verify file integrity, the install script downloads a
sidecar md5 file from yt-project.org. These files aren't versioned, and
md5 itself is largely deprecated for integrity checking.
I've created a new version of the install script that uses sha512
hashes stored *in the install script itself* to verify file
integrity. This way, any time the files change, we will be notified.
I'd like a few people to test this -- I already have -- on
pristine systems. It changes both how the files are downloaded and how
they are checked: it now uses sha512sum, which should be available on
most systems (according to Kacper :).
You can test it by downloading the script:
wget https://bitbucket.org/MatthewTurk/yt/raw/367ea3bfff2e/doc/install_script.sh
and then running it, ideally supplying an alternate install directory
rather than the default.
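For anyone curious about the idiom, here's a minimal sketch of embedded-checksum verification with sha512sum. The filename and the way the digest is computed are placeholders for illustration -- in the real install script the 128-character hex digest is pasted in verbatim, not computed on the fly:

```shell
# Hypothetical sketch of the embedded-checksum idiom. In the real install
# script the digest string is hard-coded; here we compute it on the spot
# just so the example is self-contained.
printf 'example payload\n' > yt-dep.tar.gz
SHA512_YT_DEP="$(sha512sum yt-dep.tar.gz | awk '{print $1}')"

# Verification: rebuild the "<digest>  <file>" line and check it.
# Prints "yt-dep.tar.gz: OK" and exits nonzero on mismatch.
echo "${SHA512_YT_DEP}  yt-dep.tar.gz" | sha512sum -c -
```

Since the digest lives in the (version-controlled) script, any change to a downloaded file shows up as a diff to the script rather than a silent sidecar update.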
Thanks for any feedback.
-Matt
Hi all,
The yt workshop last week in Chicago (
http://yt-project.org/workshop2012/ ) was an enormous success. On
behalf of the organizing and technical committees, I'd like to
specifically thank the FLASH Center, particularly Don Lamb, Mila
Kuntu, and Carrie Eder; the venue was outstanding and their
hospitality touching. Additionally, we're very grateful to
the Adler Planetarium's Doug Roberts and Mark SubbaRao for hosting us
on Wednesday evening -- seeing the planetarium show as well as volume
renderings made by yt users up on the dome was so much fun. The yt
workshop was supported by NSF Grant 1214147. Thanks to everyone who
attended -- your energy and excitement helped make it a success.
Thanks also to the organizing and technical committees: Britton
Smith, John ZuHone, Brian O'Shea, Jeff Oishi, Stephen Skory, Sam
Skillman, and Cameron Hummels. All talks have been recorded, and you
can clone a unified repository of talk slides and worked examples:
hg clone https://bitbucket.org/yt_analysis/workshop2012/
A few photos have been put up online, too:
http://goo.gl/g02uP
As I am able to edit and upload talks, they'll appear on the yt
youtube channel as well as on the yt homepage:
http://www.youtube.com/ytanalysis
Thanks again, and wow, what a week!
Matt
Hi all.
I'm a bit confused about the unit mechanics in yt right now. I noticed
there's a convert_function for fields, but there are also the units,
time_units, and conversion_factors attributes for StaticOutput.
For the Nyx frontend, I have added a convert_function for each field to
convert the cosmological units into CGS. This works in my fork now:
everything is in CGS and plots fine. Now I know I need to update the
_set_units method of NyxStaticOutput, but I'm not sure what to do with
those attributes and how they are used later.
Best,
Casey
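For context, here is a minimal sketch of what the _set_units conventions appear to look like in other frontends. Everything below is a hypothetical illustration -- the class name is invented and the values are placeholders, not Nyx's real conversions:

```python
# Hypothetical sketch (not the actual Nyx implementation): by convention,
# `units` maps length-unit names to centimeters per unit, `time_units`
# maps time-unit names to seconds per unit, and `conversion_factors`
# maps field names to multiply-to-CGS factors.
class SketchStaticOutput:
    def _set_units(self):
        self.units = {"cm": 1.0, "km": 1.0e5, "mpc": 3.0857e24}
        self.time_units = {"s": 1.0, "years": 3.1557e7}
        # Fields already converted to CGS by their convert_function
        # get a conversion factor of 1.
        self.conversion_factors = {"Density": 1.0}

pf = SketchStaticOutput()
pf._set_units()
# e.g. a length in Mpc times pf.units["mpc"] gives centimeters
```

If the convert_functions already land everything in CGS, the conversion_factors presumably just become unity, with units/time_units still needed for axis labels and unit-aware queries.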
Hi all,
New mpi4py -- pretty cool stuff. The lowercase irecv means
non-blocking receive of a Python object, not just a buffer.
-Matt
---------- Forwarded message ----------
From: Lisandro Dalcin <dalcinl(a)gmail.com>
Date: Fri, Jan 20, 2012 at 2:40 PM
Subject: [mpi4py] [ANN] mpi4py release 1.3
To: mpi4py(a)googlegroups.com
Release 1.3 [2012-01-20]
========================
* Now ``Comm.recv()`` accepts a buffer to receive the message.
* Add ``Comm.irecv()`` and ``Request.{wait|test}[any|all]()``.
* Add ``Intracomm.Spawn_multiple()``.
* Better buffer handling for PEP 3118 and legacy buffer interfaces.
* Add support for attribute caching on communicators,
datatypes and windows.
* Install MPI-enabled Python interpreter as
``<path>/mpi4py/bin/python-mpi``.
* Windows: Support for building with Open MPI.
--
Lisandro Dalcin
---------------
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora RN 168 Km 472, Paraje El Pozo
3000 Santa Fe, Argentina
Tel: +54-342-4511594 (ext 1011)
Tel/Fax: +54-342-4511169
--
You received this message because you are subscribed to the Google
Groups "mpi4py" group.
To post to this group, send email to mpi4py(a)googlegroups.com.
To unsubscribe from this group, send email to
mpi4py+unsubscribe(a)googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/mpi4py?hl=en.
Hi all,
I put up a PR for much improved ghost zone filling (particularly for
FLASH data!) that I'd appreciate a set of eyes on:
https://bitbucket.org/yt_analysis/yt/pull-request/54/improve-ghost-zone-gen…
Sam tested and sees that for some particular hierarchies there's a
slight slowdown as a result, but I think the improvement where it
applies will be worth that.
Thanks,
Matt
For a long time we have been running into this very problem. I think
it would be appropriate to utilize this code on Kraken, Ranger, etc.
My implementation suggestion would be to put this in the new
startup_tasks, where we determine parallelism. As noted in the
docstring, it will have to be modified to use mpi4py.
Britton or Stephen, this sounds like it's right up your alley, as
you run on Kraken the most often. Would one of you be willing to test
it out? My feeling is that we could simply suggest that on these
systems we use this idiom at the top of scripts (where we assume we
distribute this script with yt):
from yt.mpi_importer import mpi_import
with mpi_import():
    from yt.mods import *
I think it should recursively watch all the imports. An alternate
option would be to insert some of its logic into yt.mods, or even have
a second mods file that handles it seamlessly, like:
from yt.pmods import *
Ideas?
-Matt
---------- Forwarded message ----------
From: Dag Sverre Seljebotn <d.s.seljebotn(a)astro.uio.no>
Date: Fri, Jan 13, 2012 at 3:51 AM
Subject: [mpi4py] Fwd: [Numpy-discussion] Improving Python+MPI import
performance
To: mpi4py(a)googlegroups.com
Cc: Chris Kees <cekees(a)gmail.com>
This looks very interesting,
Dag
-------- Original Message --------
Subject: [Numpy-discussion] Improving Python+MPI import performance
Date: Thu, 12 Jan 2012 17:13:41 -0800
From: Asher Langton <langton2(a)llnl.gov>
Reply-To: Discussion of Numerical Python <numpy-discussion(a)scipy.org>
To: numpy-discussion(a)scipy.org
Hi all,
(I originally posted this to the BayPIGgies list, where Fernando Perez
suggested I send it to the NumPy list as well. My apologies if you're
receiving this email twice.)
I work on a Python/C++ scientific code that runs as a number of
independent Python processes communicating via MPI. Unfortunately, as
some of you may have experienced, module importing does not scale well
in Python/MPI applications. For 32k processes on BlueGene/P, importing
100 trivial C-extension modules takes 5.5 hours, compared to 35
minutes for all other interpreter loading and initialization. We
developed a simple pure-Python module (based on knee.py, a
hierarchical import example) that cuts the import time from 5.5 hours
to 6 minutes.
The code is available here:
https://github.com/langton/MPI_Import
Usage, implementation details, and limitations are described in a
docstring at the beginning of the file (just after the mandatory
legalese).
I've talked with a few people who've faced the same problem and heard
about a variety of approaches, which range from putting all necessary
files in one directory to hacking the interpreter itself so it
distributes the module-loading over MPI. Last summer, I had a student
intern try a few of these approaches. It turned out that the problem
wasn't so much the simultaneous module loads, but rather the huge
number of failed open() calls (ENOENT) as the interpreter tries to
find the module files. In the MPI_Import module, we have rank 0
perform the module lookups and then broadcast the locations to the
rest of the processes. For our real-world scientific applications
written in Python and C++, this has meant that we can start a problem
and actually make computational progress before the batch allocation
ends.
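The rank-0-lookup-and-broadcast idea can be sketched in a few lines with modern importlib. To be clear, this is a serial toy with a stubbed communicator, not MPI_Import's actual implementation (which is based on knee.py-style import hooks rather than finders); with mpi4py you would pass MPI.COMM_WORLD instead of the fake:

```python
import importlib.machinery
import importlib.util
import sys

class FakeComm:
    """Stand-in for an MPI communicator so this sketch runs serially."""
    rank = 0
    def bcast(self, obj, root=0):
        return obj

class BroadcastFinder:
    """Rank 0 walks sys.path to locate the module, then broadcasts the
    resulting file path so the other ranks never issue the thousands of
    failing open()/stat() calls themselves."""
    def __init__(self, comm):
        self.comm = comm

    def find_spec(self, name, path=None, target=None):
        if self.comm.rank == 0:
            spec = importlib.machinery.PathFinder.find_spec(name, path)
            origin = spec.origin if spec and spec.origin else None
        else:
            origin = None
        origin = self.comm.bcast(origin, root=0)
        if origin is None or not origin.endswith(".py"):
            return None  # defer to the normal import machinery
        return importlib.util.spec_from_file_location(name, origin)

sys.meta_path.insert(0, BroadcastFinder(FakeComm()))
import colorsys  # located on rank 0, path "broadcast" to everyone
```

The payoff is exactly the effect described above: only one process pays the filesystem-search cost, and everyone else opens the file directly at the broadcast location.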
If you try out the code, I'd appreciate any feedback you have:
performance results, bugfixes/feature-additions, or alternate
approaches to solving this problem. Thanks!
-Asher
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion(a)scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Hi--
Is there a way to get a profile to accumulate from [bin: high bin],
instead of [low bin: bin]? I poked through the source and I only see
"accumulation = True, False", but not "\pm 1, 0".
Thanks,
d.
--
Sent from my computer.
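For what it's worth, the reverse accumulation being asked about is, in NumPy terms, just a cumulative sum over the flipped bin array -- a sketch independent of whatever yt's profile objects actually expose:

```python
import numpy as np

counts = np.array([4.0, 3.0, 2.0, 1.0])          # per-bin values
forward = np.add.accumulate(counts)              # [low bin : bin]
reverse = np.add.accumulate(counts[::-1])[::-1]  # [bin : high bin]
```

So even without a built-in "accumulation = -1" mode, the high-to-low variant is a one-liner on the profile's binned data.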
Hi all,
I'm running into an issue with translation dictionaries for the GDF
frontend.
On the tip, if I load up a gdf dataset, and do:
pf.h.find_max('Density')
I get:
http://paste.yt-project.org/show/2008/
If I instead do pf.h.find_max('density'), I get what looks like the same
error.
If I replace line 56 in yt/frontends/gdf/fields.py
55 KnownGDFFields = FieldInfoContainer()
56 add_gdf_field = KnownGDFFields.add_field
with
57 add_gdf_field = GDFFieldInfo.add_field
it works fine.
Alternatively, if I replace
96 for f,v in log_translation_dict.items():
97     add_field(f, function=TranslationFunc(v), take_log=True)
98
99 for f,v in translation_dict.items():
100     add_field(f, function=TranslationFunc(v), take_log=False)
with
102 def _generate_translation(mine, theirs, take_log=False):
103     add_field(theirs, function=lambda a, b: b[mine], take_log=take_log)
104
105
106 for f,v in log_translation_dict.items():
107     if v not in GDFFieldInfo:
108         add_field(v, function=lambda a,b: None, take_log=True,
109                   validators = [ValidateDataField(v)])
110     #print "Setting up translator from %s to %s" % (v, f)
111     _generate_translation(v, f, take_log=True)
112
113
114 for f,v in translation_dict.items():
115     if v not in GDFFieldInfo:
116         add_field(v, function=lambda a,b: None, take_log=False,
117                   validators = [ValidateDataField(v)])
118     #print "Setting up translator from %s to %s" % (v, f)
119     _generate_translation(v, f, take_log=False)
as is done in the orion reader, it works fine.
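For reference -- and this is a guess at the mechanism, not a verified diagnosis of the GDF traceback -- the classic pitfall that helpers like _generate_translation guard against is Python's late-binding closures:

```python
# Lambdas created in a loop close over the *variable*, not its value,
# so after the loop every lambda sees the last value of v.
pairs = [("density", "Density"), ("energy", "Energy")]
translators = {f: (lambda a, b: v) for f, v in pairs}
assert translators["density"](None, None) == "Energy"   # late binding!

# Routing through a helper (as _generate_translation does) creates a
# fresh scope per call, freezing each value at definition time.
def make_translator(mine):
    return lambda a, b: mine

translators = {f: make_translator(v) for f, v in pairs}
assert translators["density"](None, None) == "Density"  # bound correctly
```

If TranslationFunc already binds its argument this way, the culprit is likely elsewhere (e.g. which FieldInfoContainer the fields land in), but it's the first thing worth ruling out with loop-generated field functions.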
Anyways, if someone sees what is going on here, let me know. I'll also be
on IRC for a bit longer tonight and all tomorrow.
Thanks,
Sam