Quick question about CUDA and GPUs
Hi guys,

Can I get a show-of-hands --

1. How many of you have access to GPUs that support CUDA?
2. How many of you have installed or CAN install PyCUDA?

Thanks!

-Matt
1. me me me!
2. i have not yet, but i'm sure I can.

On Thu, Jul 16, 2009 at 9:55 AM, Matthew Turk <matthewturk@gmail.com> wrote:
_______________________________________________
Yt-dev mailing list
Yt-dev@lists.spacepope.org
http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org
--
Samuel W. Skillman
Graduate Research Assistant
Center for Astrophysics and Space Astronomy
University of Colorado at Boulder
samuel.skillman[at]colorado.edu
1. I'm working on it
2. Nope not even close

Jennifer

On Jul 16, 2009, at 10:02 AM, Sam Skillman wrote:
Unfortunately,
1. no
2. yes

j.s.

On Thu, Jul 16, 2009 at 7:08 AM, Jennifer Jones<westbr39@msu.edu> wrote:
--
----------------
i am dot org: www.jsoishi.org
Hi Matt,

1. <hand up>
2. <hand up>

Additionally, several nodes of Spur (the viz machine at TACC) have Nvidia GPU boards, and NCSA's Tesla system has several as well. Those ought to be able to use PyCUDA.

What's on your mind, Matt?

--Brian

On Thu, Jul 16, 2009 at 9:55 AM, Matthew Turk<matthewturk@gmail.com> wrote:
Hi Brian,

Well, good question. It sounds like we are approaching critical mass, and certainly I think we are justified in attempting to learn the fastest and best way to approach fast computation with limited memory bandwidth. To that end, I'm thinking about a couple of things --

1. Separating our arrays into two chunks, fast and slow. We already have a namespace for array generation -- this suggests we can write an abstraction module that helps us separate arrays that need to be long-lived and *fast* from arrays that are okay to be slow (only a few operations) and short-lived.

2. Moving projections and other heavy, already C-based, operations onto the GPU -- or at least duplicating our procedures in both. The advantage of doing projections on the GPU is that, in theory, we should become completely IO-limited. The projections are already integer-based; furthermore, 32-bit integers get us surprisingly far. The lightcone, for instance, can be fully addressed in GPU space.

3. Ray-tracing and post-processing radiative transfer, even optically thin. Right now, field generation can take some time -- but by constructing special (for instance) X-ray fields, we can move 100% of the computation onto the GPU and speed it up substantially, so that the projection is again the dominant portion of the computation, rather than the interpolation.

I've spoken with Sam Skillman and a couple of other people, and this idea seems to get them a bit jazzed up, so perhaps it's something worth exploring, particularly as Lincoln is coming online (or already is?) and can be used as a deployment and runtime platform.

-Matt

On Thu, Jul 16, 2009 at 10:07 AM, Brian O'Shea<bwoshea@gmail.com> wrote:
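[Editor's note: the fast/slow separation in point 1 could be sketched as a thin bookkeeping layer. Everything below -- class and method names included -- is hypothetical illustration, not existing yt API; the "fast" store is a stand-in for what would in practice be GPU-resident allocations (e.g. via PyCUDA's gpuarray), kept here as plain NumPy so the idea is concrete.]

```python
import numpy as np

class ArrayStore:
    """Hypothetical sketch of the fast/slow array separation.

    "Fast" arrays are long-lived and would, in practice, be mirrored to
    the GPU; "slow" arrays stay in host memory and are treated as
    short-lived scratch space, consumed on first use.
    """
    def __init__(self):
        self._fast = {}   # long-lived; GPU-resident in a real implementation
        self._slow = {}   # short-lived; host memory only

    def allocate(self, name, data, fast=False):
        # 32-bit floats, matching the precision discussed for GPU work
        arr = np.asarray(data, dtype=np.float32)
        (self._fast if fast else self._slow)[name] = arr
        return arr

    def get(self, name):
        # Fast arrays persist across calls; slow arrays are popped so
        # their memory can be reclaimed after a single use.
        if name in self._fast:
            return self._fast[name]
        return self._slow.pop(name)

store = ArrayStore()
store.allocate("Density", [1.0, 2.0, 3.0], fast=True)
store.allocate("scratch", [4.0, 5.0], fast=False)
```

The point of the abstraction is that calling code never changes: routing `fast=True` allocations through PyCUDA later would be a swap inside `ArrayStore`, not a rewrite of the analysis routines.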
Hi guys,

As an update, on OSX Boost doesn't like linking against a framework build -- so PyCUDA might not work when you install it. The library (I used 1.38.0, but I believe it should be identical with 1.39.0) can be fixed with:

sudo install_name_tool -change \
  /System/Library/Frameworks/Python.framework/Versions/2.5/Python \
  /Library/Frameworks/Python.framework/Versions/Current/Python \
  libboost_python-xgcc40-mt.dylib

You might have to change the specifics based on which version of gcc, etc., you use. Other than this painful step of figuring things out, Boost was trivial to install on OSX.

-Matt

On Fri, Jul 17, 2009 at 11:14 AM, Matthew Turk<matthewturk@gmail.com> wrote:
> 1. How many of you have access to GPUs that support CUDA?

I don't think I do, besides the machines Brian mentioned, which I haven't actually ever logged into.

> 2. How many of you have installed or CAN install PyCUDA?

I haven't ever tried, for the above reason.

_______________________________________________________
sskory@physics.ucsd.edu           o__  Stephen Skory
http://physics.ucsd.edu/~sskory/ _.>/ _Graduate Student
________________________________(_)_\(_)_______________
participants (6)

- Brian O'Shea
- j s oishi
- Jennifer Jones
- Matthew Turk
- Sam Skillman
- Stephen Skory