Hello yt users,

I'm trying to volume render an Orion simulation with about 6,000 grids and 100 million cells, and I think I'm running out of memory. I don't know if this is large compared to other simulations people have volume rendered before, but if I set the width of my field of view to be 0.02 pc (20 times smaller than the entire domain), the following code works fine. If I set it to 0.04 pc or anything larger, the code segfaults, which I assume means I'm running out of memory. This happens no matter how many cores I run on - running in parallel seems to speed up the calculation, but not increase the size of the domain I can render. Am I doing something wrong? Or do I just need to find a machine with more memory to do this on? The one I'm using now has 3 gigs per core, which strikes me as pretty solid. I'm using the trunk version of yt-2.0. Here's the script for reference:

from yt.mods import *

pf = load("plt01120")

dd = pf.h.all_data()
mi, ma = na.log10(dd.quantities["Extrema"]("Density")[0])
mi -= 0.1 ; ma += 0.1  # To allow a bit of room at the edges

tf = ColorTransferFunction((mi, ma))
tf.add_layers(8, w=0.01)
c = na.array([0.0, 0.0, 0.0])
L = na.array([1.0, 1.0, 1.0])
W = 6.17e+16  # 0.02 pc

N = 512

cam = Camera(c, L, W, (N, N), tf, pf=pf)
fn = "%s_image.png" % pf

cam.snapshot(fn)

Thanks,
Andrew Myers
Hi Andrew,
That's an odd bug! Do you think you could get a backtrace from the
segfault? You might do this by setting your core dump ulimit to
unlimited:
[in bash]
ulimit -c unlimited
[in csh]
limit coredumpsize unlimited
and then running again. When the core dump gets spit out,
gdb python2.6 -c that_core_file
bt
should tell us where in the code it died. Sam Skillman should have a
better idea about any possible memory issues, but to me the segfault
feels like a roundoff error putting a lookup outside a grid's data
array, or something similar.
Sorry for the trouble,
Matt
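(Put together, the full sequence looks something like the session below, assuming a bash shell; the core file name, here core.12345, varies by system, and vr.py is the script name that appears in the backtrace later in the thread.)

$ ulimit -c unlimited
$ python2.6 vr.py            # segfaults and leaves a core dump
$ gdb python2.6 -c core.12345
(gdb) bt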
Hi Matt,
Thanks for the help. This is the outcome of the "bt" command in gdb:
(gdb) bt
#0  __pyx_f_2yt_9utilities_9amr_utils_FIT_get_value (__pyx_v_self=0x87ab9c0, __pyx_v_dt=0.00024472523295100413, __pyx_v_dvs=0x50e9d670, __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_grad=<value optimized out>) at yt/utilities/amr_utils.c:13705
#1  __pyx_f_2yt_9utilities_9amr_utils_21TransferFunctionProxy_eval_transfer (__pyx_v_self=0x87ab9c0, __pyx_v_dt=0.00024472523295100413, __pyx_v_dvs=0x50e9d670, __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_grad=<value optimized out>) at yt/utilities/amr_utils.c:14285
#2  0x00002b5e0a62c464 in __pyx_f_2yt_9utilities_9amr_utils_15PartitionedGrid_sample_values (__pyx_v_self=0x50e9d610, __pyx_v_v_pos=<value optimized out>, __pyx_v_v_dir=<value optimized out>, __pyx_v_enter_t=23.346866210722702, __pyx_v_exit_t=<value optimized out>, __pyx_v_ci=<value optimized out>, __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_tf=0x87ab9c0) at yt/utilities/amr_utils.c:17719
#3  0x00002b5e0a62ce16 in __pyx_f_2yt_9utilities_9amr_utils_15PartitionedGrid_integrate_ray (__pyx_v_self=0x50e9d610, __pyx_v_v_pos=0x7fffd8a94fd0, __pyx_v_v_dir=0x45457d0, __pyx_v_rgba=0x7fffd8a94f60, __pyx_v_tf=0x87ab9c0) at yt/utilities/amr_utils.c:17386
#4  0x00002b5e0a624876 in __pyx_pf_2yt_9utilities_9amr_utils_15PartitionedGrid_2cast_plane (__pyx_v_self=0x50e9d610, __pyx_args=<value optimized out>, __pyx_kwds=<value optimized out>) at yt/utilities/amr_utils.c:16199
#5  0x0000000000495124 in call_function (f=0x5a7ce490, throwflag=<value optimized out>) at Python/ceval.c:3706
#6  PyEval_EvalFrameEx (f=0x5a7ce490, throwflag=<value optimized out>) at Python/ceval.c:2389
#7  0x00000000004943ff in call_function (f=0x87aa260, throwflag=<value optimized out>) at Python/ceval.c:3792
#8  PyEval_EvalFrameEx (f=0x87aa260, throwflag=<value optimized out>) at Python/ceval.c:2389
#9  0x0000000000495d6d in PyEval_EvalCodeEx (co=0x24286c0, globals=<value optimized out>, locals=<value optimized out>, args=0xb62c38, argcount=2, kws=0xb62c48, kwcount=0, defs=0x242a2a8, defcount=1, closure=0x0) at Python/ceval.c:2968
#10 0x0000000000493c79 in call_function (f=0xb62ac0, throwflag=<value optimized out>) at Python/ceval.c:3802
#11 PyEval_EvalFrameEx (f=0xb62ac0, throwflag=<value optimized out>) at Python/ceval.c:2389
#12 0x0000000000495d6d in PyEval_EvalCodeEx (co=0x2b5e01aed288, globals=<value optimized out>, locals=<value optimized out>, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2968
#13 0x0000000000495db2 in PyEval_EvalCode (co=0x87ab9c0, globals=0x50e9d670, locals=0x72f1270) at Python/ceval.c:522
#14 0x00000000004b7ee1 in run_mod (fp=0xb54ed0, filename=0x7fffd8a965a4 "vr.py", start=<value optimized out>, globals=0xb03190, locals=0xb03190, closeit=1, flags=0x7fffd8a958d0) at Python/pythonrun.c:1335
#15 PyRun_FileExFlags (fp=0xb54ed0, filename=0x7fffd8a965a4 "vr.py", start=<value optimized out>, globals=0xb03190, locals=0xb03190, closeit=1, flags=0x7fffd8a958d0) at Python/pythonrun.c:1321
#16 0x00000000004b8198 in PyRun_SimpleFileExFlags (fp=<value optimized out>, filename=0x7fffd8a965a4 "vr.py", closeit=1, flags=0x7fffd8a958d0) at Python/pythonrun.c:931
#17 0x0000000000413e4f in Py_Main (argc=<value optimized out>, argv=0x7fffd8a959f8) at Modules/main.c:599
#18 0x00002b5e0259a994 in __libc_start_main () from /lib64/libc.so.6
#19 0x00000000004130b9 in _start ()
Thanks,
Andrew
Hi Andrew,
Okay, I have seen this before. I think it is related to a bad data
value that it's trying to traverse. In the past this has been caused
by NaNs, usually from a zero value that has been logged, as I believe
other values should be handled correctly. Can you try this code for
me?
for g in pf.h.grids:
    if na.any(g["Density"] == 0): raise RuntimeError
and see if it proceeds to completion? I will see if I can think of a
good way to handle NaNs; obviously a segfault is a pretty poor
strategy.
-Matt
Hi Matt,
Could it not also be from a negative value that has had its log() taken?
j
Jeff, you are absolutely right. Maybe a better solution would be:
for g in pf.h.grids:
    if na.any(na.isnan(na.log10(g["Density"]))): raise RuntimeError
That should be a more true-to-life method.
-Matt
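(Combining the two checks above into a single pass that flags zeros, negatives, and NaNs directly, as a minimal sketch; `na` is the numpy alias that `yt.mods` exports, and the error message is illustrative:)

for g in pf.h.grids:
    d = g["Density"]
    # log10 of a zero or negative density gives -inf or NaN, which the
    # ray integrator cannot handle, so flag those values up front.
    if na.any(d <= 0.0) or na.any(na.isnan(d)):
        raise RuntimeError("bad Density values in grid %s" % g)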
Actually, I probably should have mentioned this earlier: I remember
now that once I upgraded to yt-2.0 I started seeing warning messages like:
Warning: invalid value encountered in sqrt
when I create a plot collection from this dataset, which supports the
negative value theory. I assume that these values were always there, but
yt-2.0 started warning me about them. Probably the center of the domain does
not include the bad value(s), so it only chokes the volume renderer if I
make the domain large enough. Thanks for the help, both of you!
~Andrew
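(For reference, that is numpy's standard warning when sqrt receives a negative input; the offending elements come back as NaN. A quick illustration, noting that the exact warning text and array formatting depend on the numpy version and error settings:)

>>> import numpy as na
>>> na.sqrt(na.array([4.0, -1.0]))
Warning: invalid value encountered in sqrt
array([  2.,  nan])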
Hi Andrew,
I think this is a spurious issue -- it comes from the calculation of
the radius when yt detects which derived fields can be used. I would
actually guess that you see it whenever the hierarchy is
instantiated:
pf.h
and not only when the PlotCollection is created. I tried fairly hard
to eliminate this issue but couldn't figure out a way; I think it's
cosmetic and unrelated, though. Can you have a go at the NaN-checking
code?
-Matt
Hi guys,
Actually, the code:
for g in pf.h.grids:
    if na.any(na.isnan(na.log10(g["Density"]))): raise RuntimeError
does seem to proceed to completion without raising an exception.
Andrew M
Hi Andrew,
Okay. Since that check didn't turn anything up, I've attempted a
workaround. If you are on the development branch, you should be able to run:
yt instinfo -u
and get the latest version, which includes an explicit check on the
bin_id access. If you're on stable, contact me off-list and I'll send
you a modified version to make this check.
Best,
Matt
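(The patch itself isn't shown in the thread, but the kind of guard described -- validating the transfer function's bin index before it is used to read the lookup tables -- might look roughly like the sketch below. This is a hypothetical illustration in plain Python; only `bin_id` comes from Matt's description, and the other names -- `sample_tf`, `x_bounds`, `nbins`, `table` -- are assumptions.)

import math

def sample_tf(value, x_bounds, nbins, table):
    # Skip NaN/inf samples before computing a bin index from them.
    if math.isnan(value) or math.isinf(value):
        return 0.0
    bin_width = (x_bounds[1] - x_bounds[0]) / nbins
    bin_id = int((value - x_bounds[0]) / bin_width)
    # Reject samples outside the transfer function's range instead of
    # letting them index past the ends of the lookup table -- the likely
    # cause of the segfault in FIT_get_value above.
    if bin_id < 0 or bin_id >= nbins:
        return 0.0
    return table[bin_id]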
On Wed, Feb 2, 2011 at 1:03 PM, Andrew Myers
Hi guys,
Actually, the code:
for g in pf.h.grids: if na.any(na.isnan(na.log10(g["Density"]))): raise RuntimeError
does seem to proceed to completion without raising an exception.
Andrew M
On Wed, Feb 2, 2011 at 9:55 AM, Matthew Turk
wrote: Hi Andrew,
This is I think a spurious issue -- it comes from the calculation of the radius when detecting which derived fields can be used by yt. I would actually guess that you see this whenever the hierarchy is instantiated:
pf.h
and not only when the PlotCollection goes out. I tried kind of hard to eliminate this issue but I couldn't figure out a way; I think it's cosmetic and unrelated, though. Can you have a go at the checking for NaNs code?
-Matt
On Wed, Feb 2, 2011 at 12:51 PM, Andrew Myers
wrote: Actually, and I probably should have mentioned this earlier, but I remember now that once I upgraded to yt-2.0 I started seeing warning messages like:
Warning: invalid value encountered in sqrt
when I create a plot collection from this dataset, which supports the negative-value theory. I assume these values were always there and that yt-2.0 simply started warning me about them. The center of the domain probably does not contain the bad value(s), so they only choke the volume renderer once I make the rendered region large enough. Thanks for the help, both of you!
~Andrew
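(To test that theory directly, one could print the bounding box of each grid containing a non-positive density and check whether any fall inside the rendered region -- a sketch assuming the usual LeftEdge/RightEdge grid attributes:)
from yt.mods import *

pf = load("plt01120")
for g in pf.h.grids:
    if na.any(g["Density"] <= 0.0):
        # report where the offending grid lives in the domain
        print g, g.LeftEdge, g.RightEdge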
On Wed, Feb 2, 2011 at 9:42 AM, Matthew Turk wrote:
Jeff, you are absolutely right. Maybe a better solution would be:
for g in pf.h.grids:
    if na.any(na.isnan(na.log10(g["Density"]))):
        raise RuntimeError
That should be a more true-to-life method.
-Matt
On Wed, Feb 2, 2011 at 12:40 PM, j s oishi wrote:
Hi Matt,
Could it not also be from a negative value that has had its log() taken?
j
On Wed, Feb 2, 2011 at 9:38 AM, Matthew Turk wrote:
Hi Andrew,
Okay, I have seen this before. I think it is related to a bad data value that the ray caster is trying to traverse. In the past this has been caused by NaNs, usually from a zero value that has been logged; other values should, I believe, be handled correctly. Can you try this code for me?
for g in pf.h.grids:
    if na.any(g["Density"] == 0):
        raise RuntimeError
and see if it proceeds to completion? I will see if I can think of a good way to handle NaNs; obviously a segfault is a pretty poor strategy.
-Matt
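(A slightly more informative variant of the same test, counting zeros, negatives, and NaNs instead of stopping at the first hit -- same assumptions as the snippets above:)
from yt.mods import *

pf = load("plt01120")
n_zero = n_neg = n_nan = 0
for g in pf.h.grids:
    d = g["Density"]
    n_zero += na.sum(d == 0.0)    # log10(0) -> -inf
    n_neg  += na.sum(d < 0.0)     # log10(negative) -> NaN
    n_nan  += na.sum(na.isnan(d))
print n_zero, n_neg, n_nan        # any nonzero count will poison the render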
Thanks, Matt. That appears to do the trick.
~Andrew
Hi Andrew,
Could you run that test anyway? It's related to something we've been
seeing, and it would really help us out if you could send us the
results.
thanks,
j
participants (3):
- Andrew Myers
- j s oishi
- Matthew Turk