From e.antero.tammi at gmail.com Tue Mar 1 02:31:13 2011 From: e.antero.tammi at gmail.com (eat) Date: Tue, 1 Mar 2011 09:31:13 +0200 Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> Message-ID: Hi James, On Mon, Feb 28, 2011 at 5:25 PM, James McCormac wrote: > Hi eat, > you sent me a suggestion for faster 1d interpolations using matrices a few > weeks back but I cannot find the email anywhere when I looked for it > today. > > Here is a better explanation of what I am trying to do. For example I have > a 1d array of 500 elements. I want to interpolate them quadratically so > each array becomes 10 values, 50,000 in total. > > I have 500x500 pixels and I want to get 0.01 pixel resolution. > > code snipet: > # collapse an image in the x direction > ref_xproj=np.sum(refarray,axis=0) > > # make an array for the 1d spectra > x = np.linspace(0, (x_2-x_1), (x_2-x_1)) > > # interpolation > f2_xr = interp1d(x, ref_xproj, kind='quadratic') > > # new x array for interpolated data > xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) > > # FFT of interpolated spectra > F_ref_xproj = fftpack.fft(f2_xr(xnew)) > > Can I do this type of interpolation faster using the method you described > before? > I'll misinterpreted your original question and the method I suggested there is not applicable. To better understand your situation, few questions: - what you described above; it does work for you in technical sense? - if so, then the problem is with the execution performance? - what are your current timings? - how much you'll need to enhance them? Regards, eat > > Cheers > James > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmccormac01 at qub.ac.uk Tue Mar 1 06:01:07 2011 From: jmccormac01 at qub.ac.uk (James McCormac) Date: Tue, 1 Mar 2011 11:01:07 +0000 Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> Message-ID: <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> Hi eat, Yes this code works fine, its not actually that bad on a 2x500 arrays ~5 but the 500 length arrays are the shortest I can run with. I am analyzing CCD images which can go up 2000x2000 in length and breadth, meaning 2x2000 1d arrays after collapsing the spectra. This takes >20 sec per image which is much too long. Ideally the id like it to run as fast as possible (depending on how much accuracy I can maintain). Yes the code works fine its just a little slow, I've put timers in and 98% of the time is taken up by the interpolation. Any improvement in performance would be great. I've slimmed down the rest of the body as much as possible already. Cheers James On 1 Mar 2011, at 07:31, eat wrote: > Hi James, > > On Mon, Feb 28, 2011 at 5:25 PM, James McCormac > wrote: > Hi eat, > you sent me a suggestion for faster 1d interpolations using matrices > a few > weeks back but I cannot find the email anywhere when I looked for it > today. > > Here is a better explanation of what I am trying to do. For example > I have > a 1d array of 500 elements. I want to interpolate them quadratically > so > each array becomes 10 values, 50,000 in total. > > I have 500x500 pixels and I want to get 0.01 pixel resolution. 
> > code snipet: > # collapse an image in the x direction > ref_xproj=np.sum(refarray,axis=0) > > # make an array for the 1d spectra > x = np.linspace(0, (x_2-x_1), (x_2-x_1)) > > # interpolation > f2_xr = interp1d(x, ref_xproj, kind='quadratic') > > # new x array for interpolated data > xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) > > # FFT of interpolated spectra > F_ref_xproj = fftpack.fft(f2_xr(xnew)) > > Can I do this type of interpolation faster using the method you > described > before? > I'll misinterpreted your original question and the method I > suggested there is not applicable. > > To better understand your situation, few questions: > - what you described above; it does work for you in technical sense? > - if so, then the problem is with the execution performance? > - what are your current timings? > - how much you'll need to enhance them? > > Regards, > eat > > Cheers > James > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user ------------------------------------------------- James McCormac jmccormac01 at qub.ac.uk Astrophysics Research Centre School of Mathematics & Physics Queens University Belfast University Road, Belfast, U.K BT7 1NN, TEL: 028 90973509 -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.combs at berkeley.edu Tue Mar 1 06:51:04 2011 From: peter.combs at berkeley.edu (Peter Combs) Date: Tue, 1 Mar 2011 03:51:04 -0800 Subject: [SciPy-User] [SciPy-user] mgrid format from unstructured data In-Reply-To: <30993544.post@talk.nabble.com> References: <30993544.post@talk.nabble.com> Message-ID: On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote: > I have found some examples that looks like this > x,y = np.mgrid[-1:1:10j,-1:1:10j] > z = (x+y)*np.exp(-6.0*(x*x+y*y)) > xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j] > tck = interpolate.bisplrep(x,y,z,s=0) > znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck) > > > So my question really is how to sort/convert my input to a format that can > be used by the interpolate function? I use the LSQBivariateSpline functions: import numpy as np import scipy.interpolate as interp num_knots = int(floor(sqrt(len(z)))) xknots = np.linspace(xmin, xmax, n) yknots = np.linspace(ymin, ymax, n) interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots) znew = interpolator.ev(xnew, ynew) The object orientation is useful for my applications, for reasons that I no longer quite remember. Looking through the documentation for bisplrep, though, it doesn't seem like you need to worry about the order that the points are in. You might try something like: xknots = list(set(x)) yknots = list(set(y)) tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots) but my understanding of the bisplrep function is hazy at best, so probably best to check it with data you already know the answer. 
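To make the fragment above self-contained, here is one runnable sketch of the LSQBivariateSpline route. The scattered sample data, the knot count, and the evaluation points are all invented for illustration, so treat it as a starting point rather than a drop-in answer:

import numpy as np
import scipy.interpolate as interp

# Invented scattered (x, y, z) samples standing in for real data.
np.random.seed(0)
x = np.random.uniform(-1.0, 1.0, 500)
y = np.random.uniform(-1.0, 1.0, 500)
z = (x + y) * np.exp(-6.0 * (x * x + y * y))

# The knot count is a tuning choice; keep the knots strictly inside the
# data range, since the LSQ spline classes expect interior knots.
nk = 8
xknots = np.linspace(x.min(), x.max(), nk)[1:-1]
yknots = np.linspace(y.min(), y.max(), nk)[1:-1]

interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots)

# Evaluate the fitted surface at new scattered points.
xnew = np.array([0.0, 0.25, -0.4])
ynew = np.array([0.1, -0.3, 0.2])
znew = interpolator.ev(xnew, ynew)

Keeping the knots strictly inside the data range avoids one of the more common sources of fitting errors with these classes.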
Peter Combs peter.combs at berkeley.edu From opossumnano at gmail.com Tue Mar 1 08:41:32 2011 From: opossumnano at gmail.com (Tiziano Zito) Date: Tue, 1 Mar 2011 14:41:32 +0100 (CET) Subject: [SciPy-User] =?utf-8?q?=5BANN=5D_Summer_School_=22Advanced_Scient?= =?utf-8?q?ific_Programming_in_Python=22_in_St_Andrews=2C_UK?= Message-ID: <20110301134132.660872498CC@mail.bccn-berlin> ?Advanced Scientific Programming in Python ========================================= a Summer School by the G-Node and the School of Psychology, University of St Andrews Scientists spend more and more time writing, maintaining, and debugging software. While techniques for doing this efficiently have evolved, only few scientists actually use them. As a result, instead of doing their research, they spend far too much time writing deficient code and reinventing the wheel. In this course we will present a selection of advanced programming techniques, incorporating theoretical lectures and practical exercises tailored to the needs of a programming scientist. New skills will be tested in a real programming project: we will team up to develop an entertaining scientific computer game. We use the Python programming language for the entire course. Python works as a simple programming language for beginners, but more importantly, it also works great in scientific simulations and data analysis. We show how clean language design, ease of extensibility, and the great wealth of open source libraries for scientific computing and data visualization are driving Python to become a standard tool for the programming scientist. This school is targeted at PhD students and Post-docs from all areas of science. Competence in Python or in another language such as Java, C/C++, MATLAB, or Mathematica is absolutely required. Basic knowledge of Python is assumed. Participants without any prior experience with Python should work through the proposed introductory materials before the course. Date and Location ================= September 11?16, 2011. St Andrews, UK. Preliminary Program =================== Day 0 (Sun Sept 11) ? Best Programming Practices - Agile development & Extreme Programming - Advanced Python: decorators, generators, context managers - Version control with git Day 1 (Mon Sept 12) ? Software Carpentry - Object-oriented programming & design patterns - Test-driven development, unit testing & quality assurance - Debugging, profiling and benchmarking techniques - Programming in teams Day 2 (Tue Sept 13) ? Scientific Tools for Python - Advanced NumPy - The Quest for Speed (intro): Interfacing to C with Cython - Best practices in data visualization Day 3 (Wed Sept 14) ? The Quest for Speed - Writing parallel applications in Python - Programming project Day 4 (Thu Sept 15) ? Efficient Memory Management - When parallelization does not help: the starving CPUs problem - Data serialization: from pickle to databases - Programming project Day 5 (Fri Sept 16) ? Practical Software Development - Programming project - The Pac-Man Tournament Every evening we will have the tutors' consultation hour: Tutors will answer your questions and give suggestions for your own projects. Applications ============ You can apply on-line at http://python.g-node.org Applications must be submitted before May 29, 2011. Notifications of acceptance will be sent by June 19, 2011. No fee is charged but participants should take care of travel, living, and accommodation expenses. Candidates will be selected on the basis of their profile. 
Places are limited: acceptance rate in past editions was around 30%. Prerequisites: You are supposed to know the basics of Python to participate in the lectures. Please consult the website for a list of introductory material. Faculty ======= - Francesc Alted, author of PyTables, Castell? de la Plana, Spain - Pietro Berkes, Volen Center for Complex Systems, Brandeis University, USA - Valentin Haenel, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience Berlin, Germany - Zbigniew J?drzejewski-Szmek, Faculty of Physics, University of Warsaw, Poland - Eilif Muller, The Blue Brain Project, Ecole Polytechnique F?d?rale de Lausanne, Switzerland - Emanuele Olivetti, NeuroInformatics Laboratory, Fondazione Bruno Kessler and University of Trento, Italy - Rike-Benjamin Schuppner, Bernstein Center for Computational Neuroscience Berlin, Germany - Bartosz Tele?czuk, Institute for Theoretical Biology, Humboldt-Universit?t zu Berlin, Germany - Bastian Venthur, Berlin Institute of Technology and Bernstein Focus: Neurotechnology, Germany - Pauli Virtanen, Institute for Theoretical Physics and Astrophysics, University of W?rzburg, Germany - Tiziano Zito, Berlin Institute of Technology and Bernstein Center for Computational Neuroscience Berlin, Germany Organized by Katharina Maria Zeiner and Manuel Spitschan of the School of Psychology, University of St Andrews, and by Zbigniew J?drzejewski-Szmek and Tiziano Zito for the German Neuroinformatics Node of the INCF. Website: http://python.g-node.org Contact: python-info at g-node.org From e.antero.tammi at gmail.com Tue Mar 1 09:03:34 2011 From: e.antero.tammi at gmail.com (eat) Date: Tue, 1 Mar 2011 16:03:34 +0200 Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> Message-ID: Hi, On Tue, Mar 1, 2011 at 1:01 PM, James McCormac wrote: > Hi eat, > Yes this code works fine, its not actually that bad on a 2x500 arrays ~5 > but the 500 length arrays are the shortest I can run with. I am analyzing > CCD images which can go up 2000x2000 in length and breadth, meaning 2x2000 > 1d arrays after collapsing the spectra. This takes >20 sec per image which > is much too long. Ideally the id like it to run as fast as possible > (depending on how much accuracy I can maintain). > > Yes the code works fine its just a little slow, I've put timers in and 98% > of the time is taken up by the interpolation. Any improvement in > performance would be great. I've slimmed down the rest of the body as much > as possible already. > Can you provide a minimal working code example, which demonstrates the problem? At least you'll get better idea how it performs on some other machine. Regards, eat > > Cheers > James > > > On 1 Mar 2011, at 07:31, eat wrote: > > Hi James, > > On Mon, Feb 28, 2011 at 5:25 PM, James McCormac wrote: > >> Hi eat, >> you sent me a suggestion for faster 1d interpolations using matrices a few >> weeks back but I cannot find the email anywhere when I looked for it >> today. >> >> Here is a better explanation of what I am trying to do. For example I have >> a 1d array of 500 elements. I want to interpolate them quadratically so >> each array becomes 10 values, 50,000 in total. >> >> I have 500x500 pixels and I want to get 0.01 pixel resolution. 
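A minimal working example of the kind eat asks for above might look like the following; the array sizes are invented to mimic the 2000-pixel case, and the timing print is only there to show where the cost sits:

import time
import numpy as np
from scipy.interpolate import interp1d

n = 2000
x = np.linspace(0.0, n, n)
ref_xproj = np.random.rand(n)            # stand-in for np.sum(refarray, axis=0)
xnew = np.linspace(0.0, n, n * 100)      # 0.01-pixel sampling

t0 = time.time()
f2_xr = interp1d(x, ref_xproj, kind='quadratic')
fine = f2_xr(xnew)
print("interp1d quadratic: %.1f s" % (time.time() - t0))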
>> >> code snipet: >> # collapse an image in the x direction >> ref_xproj=np.sum(refarray,axis=0) >> >> # make an array for the 1d spectra >> x = np.linspace(0, (x_2-x_1), (x_2-x_1)) >> >> # interpolation >> f2_xr = interp1d(x, ref_xproj, kind='quadratic') >> >> # new x array for interpolated data >> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) >> >> # FFT of interpolated spectra >> F_ref_xproj = fftpack.fft(f2_xr(xnew)) >> >> Can I do this type of interpolation faster using the method you described >> before? >> > I'll misinterpreted your original question and the method I suggested there > is not applicable. > > To better understand your situation, few questions: > - what you described above; it does work for you in technical sense? > - if so, then the problem is with the execution performance? > - what are your current timings? > - how much you'll need to enhance them? > > Regards, > eat > >> >> Cheers >> James >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > ------------------------------------------------- > James McCormac > jmccormac01 at qub.ac.uk > Astrophysics Research Centre > School of Mathematics & Physics > Queens University Belfast > University Road, > Belfast, U.K > BT7 1NN, > TEL: 028 90973509 > > > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Tue Mar 1 14:53:18 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Tue, 1 Mar 2011 20:53:18 +0100 Subject: [SciPy-User] [SciPy-user] mgrid format from unstructured data In-Reply-To: References: <30993544.post@talk.nabble.com> Message-ID: Hi, In relation to this topic: does anybody know of a scipy implementation for multivariate splines? bye Nicky On 1 March 2011 12:51, Peter Combs wrote: > On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote: >> I have found some examples that looks like this >> x,y = np.mgrid[-1:1:10j,-1:1:10j] >> z = (x+y)*np.exp(-6.0*(x*x+y*y)) >> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j] >> tck = interpolate.bisplrep(x,y,z,s=0) >> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck) >> >> >> So my question really is how to sort/convert my input to a format that can >> be used by the interpolate function? > > I use the LSQBivariateSpline functions: > > import numpy as np > import scipy.interpolate as interp > > num_knots = int(floor(sqrt(len(z)))) > xknots = np.linspace(xmin, xmax, n) > yknots = np.linspace(ymin, ymax, n) > interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots) > znew = interpolator.ev(xnew, ynew) > > The object orientation is useful for my applications, for reasons that I no longer quite remember. ?Looking through the documentation for bisplrep, though, it doesn't seem like you need to worry about the order that the points are in. You might try something like: > > xknots = list(set(x)) > yknots = list(set(y)) > tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots) > > but my understanding of the bisplrep function is hazy at best, so probably best to check it with data you already know the answer. 
> > Peter Combs > peter.combs at berkeley.edu > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From amenity at enthought.com Tue Mar 1 16:31:32 2011 From: amenity at enthought.com (Amenity Applewhite) Date: Tue, 1 Mar 2011 15:31:32 -0600 Subject: [SciPy-User] =?utf-8?q?Webinar_3/4=3A_How_do_I=E2=80=A6_solve_ODE?= =?utf-8?q?s=3F_Part_II?= Message-ID: March EPD Webinar: How do I...solve ODEs? Part II This Friday, Warren Weckesser will present a second installment of his webinars on differential equations. We will explore two Python packages for solving boundary value problems. Both are packaged as scikits: scikits.bvp_solver, written by John Salvatier, is a wrapper of the BVP_SOLVER code by Shampine and Muir; scikits.bvp1lg, written by Pauli Virtanen, is a wrapper of the COLNEW solver developed by Ascher and Bader. Enthought Python Distribution Webinar How do I... solve ODEs? Part II Friday, March 4: 1pm CST/7pm UTC Wait list (for non EPD subscribers): send an email to amenity at enthought.com Thanks! _________________________ Amenity Applewhite Enthought, Inc. Scientific Computing Solutions -------------- next part -------------- An HTML attachment was scrubbed... URL: From sloan.lindsey at gmail.com Wed Mar 2 05:56:58 2011 From: sloan.lindsey at gmail.com (Sloan Lindsey) Date: Wed, 2 Mar 2011 11:56:58 +0100 Subject: [SciPy-User] [SciPy-user] mgrid format from unstructured data In-Reply-To: References: <30993544.post@talk.nabble.com> Message-ID: Hi, on mvsplines: Take a look at http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata There is a linear version too. For the initial question: here is a snippit that works : import scipy.interpolate as inter import numpy as np import matplotlib.pyplot as plt datax,datay,dataz = np.genfromtxt('mydata.blah', skip_header=1, unpack=True) points = np.array([datax,datay]).T nearest = inter.NearestNDInterpolator(points,dataz) linear = inter.LinearNDInterpolator(points,dataz,fill_value=0.0) curvey = inter.CloughTocher2DInterpolator(points,dataz, fill_value = 0.0) #careful about the boundary conditions #now you have 3 interpolants. To determine dataz @ datax,datay value = curvey(datax,datay) #if you want a grid so that you can plot your interpolation: xrange = np.arange(-10.0, 100.0, 0.05) yrange = np.arange(-100.0, 100.0, 0.05) mesh = np.meshgrid(xrange,yrange) a_int_mesh = curvey(mesh) plt.imshow(Zn-Zno) plt.show This works for un ordered data. Sloan On Tue, Mar 1, 2011 at 8:53 PM, nicky van foreest wrote: > Hi, > > In relation to this topic: does anybody know of ?a scipy > implementation for multivariate splines? > > bye > > Nicky > > On 1 March 2011 12:51, Peter Combs wrote: >> On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote: >>> I have found some examples that looks like this >>> x,y = np.mgrid[-1:1:10j,-1:1:10j] >>> z = (x+y)*np.exp(-6.0*(x*x+y*y)) >>> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j] >>> tck = interpolate.bisplrep(x,y,z,s=0) >>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck) >>> >>> >>> So my question really is how to sort/convert my input to a format that can >>> be used by the interpolate function? 
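For completeness, a trimmed, self-contained version of the scattered-data recipe Sloan posts above, using scipy.interpolate.griddata with invented sample data in place of the file read (the Zn/Zno names and the bare plt.show in the original snippet look like leftovers from a larger script):

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Invented scattered samples in place of np.genfromtxt('mydata.blah', ...).
np.random.seed(1)
x = np.random.uniform(0.0, 10.0, 300)
y = np.random.uniform(0.0, 10.0, 300)
z = np.sin(x) * np.cos(y)
points = np.column_stack((x, y))

# Evaluate on a regular grid; method='cubic' uses the Clough-Tocher scheme,
# 'linear' and 'nearest' are the other choices.
grid_x, grid_y = np.meshgrid(np.linspace(0.0, 10.0, 200),
                             np.linspace(0.0, 10.0, 200))
grid_z = griddata(points, z, (grid_x, grid_y), method='cubic', fill_value=0.0)

plt.imshow(grid_z, origin='lower', extent=(0.0, 10.0, 0.0, 10.0))
plt.show()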
>> >> I use the LSQBivariateSpline functions: >> >> import numpy as np >> import scipy.interpolate as interp >> >> num_knots = int(floor(sqrt(len(z)))) >> xknots = np.linspace(xmin, xmax, n) >> yknots = np.linspace(ymin, ymax, n) >> interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots) >> znew = interpolator.ev(xnew, ynew) >> >> The object orientation is useful for my applications, for reasons that I no longer quite remember. ?Looking through the documentation for bisplrep, though, it doesn't seem like you need to worry about the order that the points are in. You might try something like: >> >> xknots = list(set(x)) >> yknots = list(set(y)) >> tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots) >> >> but my understanding of the bisplrep function is hazy at best, so probably best to check it with data you already know the answer. >> >> Peter Combs >> peter.combs at berkeley.edu >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From josef.pktd at gmail.com Wed Mar 2 08:56:27 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Mar 2011 08:56:27 -0500 Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> Message-ID: On Tue, Mar 1, 2011 at 9:03 AM, eat wrote: > Hi, > > On Tue, Mar 1, 2011 at 1:01 PM, James McCormac > wrote: >> >> Hi eat, >> Yes this code works fine, its not actually that bad on a 2x500 arrays ~5 >> but the 500 length arrays are the shortest I can run with. I am analyzing >> CCD images which can go up 2000x2000 in length and breadth, meaning 2x2000 >> 1d arrays after collapsing the spectra. This takes >20 sec per image which >> is much too long. Ideally the id like it to run as fast as possible >> (depending on how much accuracy I can maintain). >> Yes the code works fine its just a little slow, I've put timers in and 98% >> of the time is taken up by the interpolation. ?Any improvement in >> performance would be great. I've slimmed ?down the rest of the body as much >> as possible already. > > Can you provide a minimal working code example, which demonstrates the > problem? At least you'll get better idea how it performs on some other > machine. > > Regards, > eat >> >> Cheers >> James >> >> On 1 Mar 2011, at 07:31, eat wrote: >> >> Hi James, >> >> On Mon, Feb 28, 2011 at 5:25 PM, James McCormac >> wrote: >>> >>> Hi eat, >>> you sent me a suggestion for faster 1d interpolations using matrices a >>> few >>> weeks back but I cannot find the email anywhere when I looked for it >>> today. >>> >>> Here is a better explanation of what I am trying to do. For example I >>> have >>> a 1d array of 500 elements. I want to interpolate them quadratically so >>> each array becomes 10 values, 50,000 in total. >>> >>> I have 500x500 pixels and I want to get 0.01 pixel resolution. 
>>> >>> code snipet: >>> # collapse an image in the x direction >>> ref_xproj=np.sum(refarray,axis=0) >>> >>> # make an array for the 1d spectra >>> x = np.linspace(0, (x_2-x_1), (x_2-x_1)) >>> >>> # interpolation >>> f2_xr = interp1d(x, ref_xproj, kind='quadratic') >>> >>> # new x array for interpolated data >>> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) >>> >>> # FFT of interpolated spectra >>> F_ref_xproj = fftpack.fft(f2_xr(xnew)) >>> >>> Can I do this type of interpolation faster using the method you described >>> before? >> >> I'll misinterpreted your original question and the method I suggested >> there is not?applicable. >> To better understand your situation, few questions: >> - what you described above; it?does?work for you in technical sense? >> - if so, then the problem is with the execution performance? >> - what are your current timings? >> - how much you'll need to enhance them? >> Regards, >> eat >>> >>> Cheers >>> James Just a thought since I don't know the details: using fft interpolation might be faster, e.g. signal.resample >>> t = np.linspace(0,10,25) >>> x = np.sin(t) >>> t2 = np.linspace(0,10,50) >>> x2 = signal.resample(x,50) scipy.ndimage.interpolation should also be faster, if there is something that does what you want. Josef >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> ------------------------------------------------- >> James McCormac >> jmccormac01 at qub.ac.uk >> Astrophysics Research Centre >> School of Mathematics & Physics >> Queens University Belfast >> University Road, >> Belfast, U.K >> BT7 1NN, >> TEL: 028 90973509 >> >> >> >> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From jmccormac01 at qub.ac.uk Wed Mar 2 09:34:19 2011 From: jmccormac01 at qub.ac.uk (James McCormac) Date: Wed, 2 Mar 2011 14:34:19 -0000 (UTC) Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> Message-ID: <62611.161.72.60.164.1299076459.squirrel@star.pst.qub.ac.uk> Hi Josef, Do you mean, create the spectra then resample it afterwards? I can have a look at ndimage.iterpolation. Cheers James On Wed, March 2, 2011 1:56 pm, josef.pktd at gmail.com wrote: > On Tue, Mar 1, 2011 at 9:03 AM, eat wrote: > >> Hi, >> >> >> On Tue, Mar 1, 2011 at 1:01 PM, James McCormac >> wrote: >> >>> >>> Hi eat, >>> Yes this code works fine, its not actually that bad on a 2x500 arrays >>> ~5 >>> but the 500 length arrays are the shortest I can run with. I am >>> analyzing CCD images which can go up 2000x2000 in length and breadth, >>> meaning 2x2000 1d arrays after collapsing the spectra. This takes >20 >>> sec per image which is much too long. Ideally the id like it to run as >>> fast as possible (depending on how much accuracy I can maintain). >>> Yes the code works fine its just a little slow, I've put timers in and >>> 98% >>> of the time is taken up by the interpolation. 
?Any improvement in >>> performance would be great. I've slimmed ?down the rest of the body >>> as much as possible already. >> >> Can you provide a minimal working code example, which demonstrates the >> problem? At least you'll get better idea how it performs on some other >> machine. >> >> Regards, >> eat >>> >>> Cheers >>> James >>> >>> >>> On 1 Mar 2011, at 07:31, eat wrote: >>> >>> >>> Hi James, >>> >>> >>> On Mon, Feb 28, 2011 at 5:25 PM, James McCormac >>> >>> wrote: >>> >>>> >>>> Hi eat, >>>> you sent me a suggestion for faster 1d interpolations using matrices >>>> a few weeks back but I cannot find the email anywhere when I looked >>>> for it today. >>>> >>>> Here is a better explanation of what I am trying to do. For example >>>> I >>>> have a 1d array of 500 elements. I want to interpolate them >>>> quadratically so each array becomes 10 values, 50,000 in total. >>>> >>>> I have 500x500 pixels and I want to get 0.01 pixel resolution. >>>> >>>> >>>> code snipet: # collapse an image in the x direction >>>> ref_xproj=np.sum(refarray,axis=0) >>>> >>>> # make an array for the 1d spectra >>>> x = np.linspace(0, (x_2-x_1), (x_2-x_1)) >>>> >>>> # interpolation >>>> f2_xr = interp1d(x, ref_xproj, kind='quadratic') >>>> >>>> # new x array for interpolated data >>>> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) >>>> >>>> # FFT of interpolated spectra >>>> F_ref_xproj = fftpack.fft(f2_xr(xnew)) >>>> >>>> >>>> Can I do this type of interpolation faster using the method you >>>> described before? >>> >>> I'll misinterpreted your original question and the method I suggested >>> there is not?applicable. To better understand your situation, few >>> questions: >>> - what you described above; it?does?work for you in technical sense? >>> - if so, then the problem is with the execution performance? >>> - what are your current timings? >>> - how much you'll need to enhance them? >>> Regards, >>> eat >>>> >>>> Cheers >>>> James >>>> > > Just a thought since I don't know the details: > > > using fft interpolation might be faster, e.g. signal.resample > >>>> t = np.linspace(0,10,25) x = np.sin(t) t2 = np.linspace(0,10,50) x2 = >>>> signal.resample(x,50) > > scipy.ndimage.interpolation should also be faster, if there is something > that does what you want. 
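Put in the terms of James's snippet, the resample suggestion would look roughly like this; the array sizes are invented, and whether the FFT-based result is accurate enough for the 0.01-pixel goal still needs to be checked against the spline version:

import numpy as np
from scipy import signal, fftpack

# Stand-in for the collapsed profile; real code would use
# ref_xproj = np.sum(refarray, axis=0) on a 2000x2000 image.
ref_xproj = np.random.rand(2000)

upsample = 100                           # 0.01-pixel sampling
n_new = ref_xproj.size * upsample

# FFT-based resampling: one forward and one inverse FFT instead of a
# quadratic spline fit, so it scales much better for large factors.
ref_fine = signal.resample(ref_xproj, n_new)
F_ref_xproj = fftpack.fft(ref_fine)

Since resample already works in the frequency domain, it may even be possible to skip its final inverse FFT and keep the spectrum directly, as Josef suggests later in the thread.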
> > Josef > > > > >>>> >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> ------------------------------------------------- >>> James McCormac >>> jmccormac01 at qub.ac.uk Astrophysics Research Centre >>> School of Mathematics & Physics >>> Queens University Belfast >>> University Road, >>> Belfast, U.K >>> BT7 1NN, >>> TEL: 028 90973509 >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >> >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From josef.pktd at gmail.com Wed Mar 2 10:13:28 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Mar 2011 10:13:28 -0500 Subject: [SciPy-User] faster interpolations (interp1d) In-Reply-To: <62611.161.72.60.164.1299076459.squirrel@star.pst.qub.ac.uk> References: <54539.161.72.60.164.1298906724.squirrel@star.pst.qub.ac.uk> <2A4C135F-886A-4FD9-81E7-C83FA025ADC2@qub.ac.uk> <62611.161.72.60.164.1299076459.squirrel@star.pst.qub.ac.uk> Message-ID: On Wed, Mar 2, 2011 at 9:34 AM, James McCormac wrote: > > Hi Josef, > Do you mean, create the spectra then resample it afterwards? that's kind of the idea (I never went through all the numerical details, and I still haven't gone through 10 ways to estimate the spectral density.) you can look at the source of scipy.signal.signaltools which looks faster to me than interpolation in time domain. And if you want to end up with the fft anyway, maybe you can skip the final ifft. But image processing libraries might have the interpolation out of the box. Cheers, Josef > I can have a > look at ndimage.iterpolation. > > Cheers > James > > > > On Wed, March 2, 2011 1:56 pm, josef.pktd at gmail.com wrote: >> On Tue, Mar 1, 2011 at 9:03 AM, eat wrote: >> >>> Hi, >>> >>> >>> On Tue, Mar 1, 2011 at 1:01 PM, James McCormac >>> wrote: >>> >>>> >>>> Hi eat, >>>> Yes this code works fine, its not actually that bad on a 2x500 arrays >>>> ~5 >>>> but the 500 length arrays are the shortest I can run with. I am >>>> analyzing CCD images which can go up 2000x2000 in length and breadth, >>>> meaning 2x2000 1d arrays after collapsing the spectra. This takes >20 >>>> sec per image which is much too long. Ideally the id like it to run as >>>> fast as possible (depending on how much accuracy I can maintain). >>>> Yes the code works fine its just a little slow, I've put timers in and >>>> 98% >>>> of the time is taken up by the interpolation. ?Any improvement in >>>> performance would be great. I've slimmed ?down the rest of the body >>>> as much as possible already. >>> >>> Can you provide a minimal working code example, which demonstrates the >>> problem? At least you'll get better idea how it performs on some other >>> machine. 
>>> >>> Regards, >>> eat >>>> >>>> Cheers >>>> James >>>> >>>> >>>> On 1 Mar 2011, at 07:31, eat wrote: >>>> >>>> >>>> Hi James, >>>> >>>> >>>> On Mon, Feb 28, 2011 at 5:25 PM, James McCormac >>>> >>>> wrote: >>>> >>>>> >>>>> Hi eat, >>>>> you sent me a suggestion for faster 1d interpolations using matrices >>>>> a few weeks back but I cannot find the email anywhere when I looked >>>>> for it today. >>>>> >>>>> Here is a better explanation of what I am trying to do. For example >>>>> I >>>>> have a 1d array of 500 elements. I want to interpolate them >>>>> quadratically so each array becomes 10 values, 50,000 in total. >>>>> >>>>> I have 500x500 pixels and I want to get 0.01 pixel resolution. >>>>> >>>>> >>>>> code snipet: # collapse an image in the x direction >>>>> ref_xproj=np.sum(refarray,axis=0) >>>>> >>>>> # make an array for the 1d spectra >>>>> x = np.linspace(0, (x_2-x_1), (x_2-x_1)) >>>>> >>>>> # interpolation >>>>> f2_xr = interp1d(x, ref_xproj, kind='quadratic') >>>>> >>>>> # new x array for interpolated data >>>>> xnew = np.linspace(0, (x_2-x_1), (x_2-x_1)*100) >>>>> >>>>> # FFT of interpolated spectra >>>>> F_ref_xproj = fftpack.fft(f2_xr(xnew)) >>>>> >>>>> >>>>> Can I do this type of interpolation faster using the method you >>>>> described before? >>>> >>>> I'll misinterpreted your original question and the method I suggested >>>> ?there is not?applicable. To better understand your situation, few >>>> questions: >>>> - what you described above; it?does?work for you in technical sense? >>>> - if so, then the problem is with the execution performance? >>>> - what are your current timings? >>>> - how much you'll need to enhance them? >>>> Regards, >>>> eat >>>>> >>>>> Cheers >>>>> James >>>>> >> >> Just a thought since I don't know the details: >> >> >> using fft interpolation might be faster, e.g. signal.resample >> >>>>> t = np.linspace(0,10,25) x = np.sin(t) t2 = np.linspace(0,10,50) x2 = >>>>> signal.resample(x,50) >> >> scipy.ndimage.interpolation ?should also be faster, if there is something >> that does what you want. 
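The ndimage route James mentions could be sketched like this, again with a made-up profile; note that zoom's handling of the end points is not exactly the same as the linspace/interp1d version above, so the two should be compared before relying on it:

import numpy as np
from scipy import ndimage

# Stand-in for the collapsed profile (real code: np.sum(refarray, axis=0)).
ref_xproj = np.random.rand(2000)

# order=2 asks for quadratic spline interpolation, matching kind='quadratic'
# in interp1d, but done by the compiled spline filters in ndimage.
ref_fine = ndimage.zoom(ref_xproj, 100, order=2)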
>> >> Josef >> >> >> >> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> SciPy-User mailing list >>>>> SciPy-User at scipy.org >>>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>>> ------------------------------------------------- >>>> James McCormac >>>> jmccormac01 at qub.ac.uk Astrophysics Research Centre >>>> School of Mathematics & Physics >>>> Queens University Belfast >>>> University Road, >>>> Belfast, U.K >>>> BT7 1NN, >>>> TEL: 028 90973509 >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>>> >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >>> >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From yyc at solvcon.net Wed Mar 2 11:38:41 2011 From: yyc at solvcon.net (Yung-Yu Chen) Date: Wed, 2 Mar 2011 11:38:41 -0500 Subject: [SciPy-User] ANN: SOLVCON 0.0.4 Message-ID: Hello, I am pleased to announce the release of SOLVCON 0.0.4. SOLVCON is a multi-physics, supercomputing software framework for high-fidelity solutions of partial differential equations (PDEs) by hybrid parallelism. The source tarball can be downloaded at https://bitbucket.org/yungyuc/solvcon/downloads . More information about SOLVCON can be found at http://solvcon.net/ . This release enhances pre-procesing and start-up for large-scale simulations. Unstructured meshes using up to 66 million elements have been tested. Two new options to ``solvcon.case.BlockCase`` are added: (i) ``io.domain.with_arrs`` and (ii) ``io.domain.with_whole``. They can be used to turn off arrays in the ``Collective`` object. By omitting those arrays on head node, memory usage is significantly reduced. Available memory on head node will not constrain the size of simulations. Bug-fix: - Issue #12: Order of variables for in situ visualization can be specified to make the order of data arrays of VTK poly data consistent among head and slave nodes. with regards, Yung-Yu Chen -- Yung-Yu Chen PhD candidate of Mechanical Engineering The Ohio State University, Columbus, Ohio +1 (614) 859 2436 http://solvcon.net/yyc/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.raizada at gmail.com Wed Mar 2 13:06:52 2011 From: rajeev.raizada at gmail.com (Rajeev Raizada) Date: Wed, 2 Mar 2011 13:06:52 -0500 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. Message-ID: Dear SciPy users, I have a matrix (or, more strictly speaking, an array), and I want to calculate the correlation between each column and every other column, i.e. to make a standard correlation matrix. In Matlab, this is pretty straightforward, and the results also reflect the mathematical convention that corr(m) is just an abbreviated way of saying corr(m,m). >> m = [ 1 2; -1 3; 0 4] m = ? ?1 ? ? 2 ? -1 ? ? 3 ? ?0 ? ? 
4 >> corr(m) ans = 1.0000 -0.5000 -0.5000 1.0000 >> corr(m,m) ans = 1.0000 -0.5000 -0.5000 1.0000 However, the behaviour of SciPy/NumPy is quite different from what I had expected. In those modules, corrcoeff(m) is *not* the same as corrcoeff(m,m). Apparently, corrcoeff(x,y) produces the result corrcoef(vstack(x,y)), which strikes me as rather weird, and inconsistent with standard mathematical usage. Here are some examples, below. Raj ------------------ In [1]: import scipy In [2]: m = scipy.array([[ 1, 2],[ -1, 3],[ 0, 4]]) In [3]: m Out[3]: array([[ 1, 2], [-1, 3], [ 0, 4]]) In [4]: m.T Out[4]: array([[ 1, -1, 0], [ 2, 3, 4]]) In [5]: scipy.corrcoef(m) Out[5]: array([[ 1., 1., 1.], [ 1., 1., 1.], [ 1., 1., 1.]]) In [6]: scipy.corrcoef(m.T) Out[6]: array([[ 1. , -0.5], [-0.5, 1. ]]) # Note from Raj: that answer above, at least, matches what we'd want. # But it still gives a different result from corrcoef(m.T,m.T) ! In [7]: scipy.corrcoef(m,m) Out[7]: array([[ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.]]) In [8]: scipy.corrcoef(m.T,m.T) Out[8]: array([[ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ], [ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ]]) From josef.pktd at gmail.com Wed Mar 2 14:06:23 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Mar 2011 14:06:23 -0500 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: On Wed, Mar 2, 2011 at 1:06 PM, Rajeev Raizada wrote: > Dear SciPy users, > > I have a matrix (or, more strictly speaking, an array), > and I want to calculate the correlation between each column > and every other column, i.e. to make a standard correlation matrix. > > In Matlab, this is pretty straightforward, > and the results also reflect the mathematical convention > that corr(m) is just an abbreviated way of saying corr(m,m). > >>> m = [ 1 2; -1 3; 0 4] > m = > ? ?1 ? ? 2 > ? -1 ? ? 3 > ? ?0 ? ? 4 > >>> corr(m) > ans = > ? 1.0000 ? -0.5000 > ?-0.5000 ? ?1.0000 > >>> corr(m,m) > ans = > ? 1.0000 ? -0.5000 > ?-0.5000 ? ?1.0000 > > However, the behaviour of SciPy/NumPy is quite different > from what I had expected. > In those modules, corrcoeff(m) is *not* the same as corrcoeff(m,m). > Apparently, corrcoeff(x,y) produces the result corrcoef(vstack(x,y)), > which strikes me as rather weird, and inconsistent > with standard mathematical usage. np.cov, np.corrcoef have a rowvar=0 option for the "standard" way if variables are in columns, instead of transposing. I also found it a bit strange that corrcoef(x,y) creates the stacked version. scipy.stats.spearmanr inherits this behavior since I rewrote it. scipy.stats.pearsonr hasn't been rewritten yet. It didn't bug me enough, to figure out whether there is a reason for this stacking behavior or not. Josef > > Here are some examples, below. > > Raj > ------------------ > In [1]: import scipy > > In [2]: m = scipy.array([[ 1, 2],[ -1, 3],[ 0, 4]]) > > In [3]: m > Out[3]: > array([[ 1, ?2], > ? ? ? [-1, ?3], > ? ? ? [ 0, ?4]]) > > In [4]: m.T > Out[4]: > array([[ 1, -1, ?0], > ? ? ? [ 2, ?3, ?4]]) > > In [5]: scipy.corrcoef(m) > Out[5]: > array([[ 1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1.]]) > > In [6]: scipy.corrcoef(m.T) > Out[6]: > array([[ 1. , -0.5], > ? ? ? [-0.5, ?1. ]]) > > # Note from Raj: that answer above, at least, matches what we'd want. 
> # But it still gives a different result from corrcoef(m.T,m.T) ! > > In [7]: scipy.corrcoef(m,m) > Out[7]: > array([[ 1., ?1., ?1., ?1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1., ?1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1., ?1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1., ?1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1., ?1., ?1., ?1.], > ? ? ? [ 1., ?1., ?1., ?1., ?1., ?1.]]) > > In [8]: scipy.corrcoef(m.T,m.T) > Out[8]: > array([[ 1. , -0.5, ?1. , -0.5], > ? ? ? [-0.5, ?1. , -0.5, ?1. ], > ? ? ? [ 1. , -0.5, ?1. , -0.5], > ? ? ? [-0.5, ?1. , -0.5, ?1. ]]) > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Wed Mar 2 14:28:43 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 2 Mar 2011 19:28:43 +0000 (UTC) Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. References: Message-ID: On Wed, 02 Mar 2011 14:06:23 -0500, josef.pktd wrote: [clip] > I also found it a bit strange that corrcoef(x,y) creates the stacked > version. scipy.stats.spearmanr inherits this behavior since I rewrote > it. scipy.stats.pearsonr hasn't been rewritten yet. > > It didn't bug me enough, to figure out whether there is a reason for > this stacking behavior or not. The Matlab convention corrcoef(x, y) == corrcoef(c_[x.ravel(), y.ravel()]) is actually also a bit peculiar if you haven't seen it before -- how come there are now two variables, if x had variables on the rows (why not bail out with an error?). I don't typically deal with stuff that requires these functions, so I don't have an opinion, but it would have been better to do the same thing even if there is no real reason for it... -- Pauli Virtanen From josef.pktd at gmail.com Wed Mar 2 14:36:18 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 2 Mar 2011 14:36:18 -0500 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: On Wed, Mar 2, 2011 at 2:28 PM, Pauli Virtanen wrote: > On Wed, 02 Mar 2011 14:06:23 -0500, josef.pktd wrote: > [clip] >> I also found it a bit strange that corrcoef(x,y) creates the stacked >> version. scipy.stats.spearmanr inherits this behavior since I rewrote >> it. scipy.stats.pearsonr hasn't been rewritten yet. >> >> It didn't bug me enough, to figure out whether there is a reason for >> this stacking behavior or not. > > The Matlab convention > > ? ? ? ?corrcoef(x, y) == corrcoef(c_[x.ravel(), y.ravel()]) I don't remember matlab exactly, but I don't think there is a ravel, and I think R also does cov(x, y) = np.dot((x-x.mean()).T, y-y.mean()) and normalized for corrcoef. just getting the off-diagonal block of the matrix, x'y, instead of also x'x and y'y Josef > > is actually also a bit peculiar if you haven't seen it before -- how come > there are now two variables, if x had variables on the rows (why not bail > out with an error?). > > I don't typically deal with stuff that requires these functions, so I > don't have an opinion, but it would have been better to do the same thing > even if there is no real reason for it... 
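To make the rowvar point concrete for anyone reading this thread later: a short sketch of the Matlab-style column-wise correlation matrix, plus a hand-rolled cross-correlation along the lines of the formula above (the cross_corr helper is only an illustration, not a numpy or scipy function):

import numpy as np

m = np.array([[ 1.0, 2.0],
              [-1.0, 3.0],
              [ 0.0, 4.0]])

# Matlab-style corr(m): correlations between the columns of m.
print(np.corrcoef(m, rowvar=0))      # [[ 1.  -0.5], [-0.5  1. ]]

# R-style cross-correlation between the columns of x and those of y,
# following cov(x, y) = dot((x - x.mean()).T, y - y.mean()) above.
def cross_corr(x, y):
    xc = x - x.mean(axis=0)
    yc = y - y.mean(axis=0)
    c = np.dot(xc.T, yc) / (len(x) - 1.0)
    return c / np.outer(xc.std(axis=0, ddof=1), yc.std(axis=0, ddof=1))

print(cross_corr(m, m))              # same 2x2 matrix as above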
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at enthought.com Wed Mar 2 14:37:34 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 2 Mar 2011 13:37:34 -0600 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: Looks like the meaning of the argument 'y' in cov() (which is used by corrcoef()) was changed back in 2006: https://github.com/numpy/numpy/commit/959f36c04ce8ca0b7bc44bb6438bddf162ad2db9#numpy/lib/function_base.py The old behavior appears to have been more like matlab's behavior. Warren On Wed, Mar 2, 2011 at 1:28 PM, Pauli Virtanen wrote: > On Wed, 02 Mar 2011 14:06:23 -0500, josef.pktd wrote: > [clip] > > I also found it a bit strange that corrcoef(x,y) creates the stacked > > version. scipy.stats.spearmanr inherits this behavior since I rewrote > > it. scipy.stats.pearsonr hasn't been rewritten yet. > > > > It didn't bug me enough, to figure out whether there is a reason for > > this stacking behavior or not. > > The Matlab convention > > corrcoef(x, y) == corrcoef(c_[x.ravel(), y.ravel()]) > > is actually also a bit peculiar if you haven't seen it before -- how come > there are now two variables, if x had variables on the rows (why not bail > out with an error?). > > I don't typically deal with stuff that requires these functions, so I > don't have an opinion, but it would have been better to do the same thing > even if there is no real reason for it... > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Wed Mar 2 15:27:36 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 2 Mar 2011 12:27:36 -0800 Subject: [SciPy-User] nanmedian chokes on size zero arrays Message-ID: While fixing Bottleneck functions that can't handle size zero arrays of various shapes, I noticed that scipy.stats.nanmedian chokes on certain size zero arrays and axis combinations: >> from scipy.stats import nanmedian >> a = np.ones((0,2)) >> np.median(a, 1) array([], dtype=float64) >> nanmedian(a, 1) IndexError: invalid index Anyone know a fix? Here's the ticket: http://projects.scipy.org/scipy/ticket/1400 From kwgoodman at gmail.com Wed Mar 2 16:06:37 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Wed, 2 Mar 2011 13:06:37 -0800 Subject: [SciPy-User] nanmedian chokes on size zero arrays In-Reply-To: References: Message-ID: On Wed, Mar 2, 2011 at 12:27 PM, Keith Goodman wrote: > While fixing Bottleneck functions that can't handle size zero arrays > of various shapes, I noticed that scipy.stats.nanmedian chokes on > certain size zero arrays and axis combinations: > >>> from scipy.stats import nanmedian >>> a = np.ones((0,2)) >>> np.median(a, 1) > ? array([], dtype=float64) >>> nanmedian(a, 1) > > IndexError: invalid index > > Anyone know a fix? 
Here's the ticket: > http://projects.scipy.org/scipy/ticket/1400 I guess the bug is in np.apply_along_axis: >> np.apply_along_axis(np.sum, 1, np.ones((0,2))) IndexError: invalid index But for my use I need a fix in my local copy of scipy.stats.nanmedian, so I'll try something like this: x, axis = _chk_asarray(x, axis) if x.ndim == 0: return float(x.item()) shape = list(x.shape) shape.pop(axis) if 0 in shape: x = np.empty(shape) else: x = x.copy() x = np.apply_along_axis(_nanmedian, axis, x) if x.ndim == 0: x = float(x.item()) return x From guyer at nist.gov Wed Mar 2 17:35:26 2011 From: guyer at nist.gov (Jonathan Guyer) Date: Wed, 2 Mar 2011 14:35:26 -0800 Subject: [SciPy-User] nanmedian chokes on size zero arrays In-Reply-To: References: Message-ID: <74B84F93-7BBF-4AA0-8899-ACD2667A552A@nist.gov> On Mar 2, 2011, at 1:06 PM, Keith Goodman wrote: > On Wed, Mar 2, 2011 at 12:27 PM, Keith Goodman wrote: >> While fixing Bottleneck functions that can't handle size zero arrays >> of various shapes, I noticed that scipy.stats.nanmedian chokes on >> certain size zero arrays and axis combinations: >> >>>> from scipy.stats import nanmedian >>>> a = np.ones((0,2)) >>>> np.median(a, 1) >> array([], dtype=float64) >>>> nanmedian(a, 1) >> >> IndexError: invalid index >> >> Anyone know a fix? Here's the ticket: >> http://projects.scipy.org/scipy/ticket/1400 > > I guess the bug is in np.apply_along_axis: > >>> np.apply_along_axis(np.sum, 1, np.ones((0,2))) > > IndexError: invalid index Might be related to http://projects.scipy.org/numpy/ticket/1171 From philmorefield at yahoo.com Wed Mar 2 20:44:07 2011 From: philmorefield at yahoo.com (Phil Morefield) Date: Wed, 2 Mar 2011 17:44:07 -0800 (PST) Subject: [SciPy-User] ANN: Spyder v2.0.8 In-Reply-To: <4D6AD891.3020304@gmail.com> References: <4D6AD891.3020304@gmail.com> Message-ID: <793561.18940.qm@web161307.mail.bf1.yahoo.com> I'd like to echo Davide's sentiment. For those that don't know Spyder (Scientific PYthon Devlopment EnviRonment) is a phenomenal piece of open source scientific software. And if you really want to see the future of scientific computing, check out Pierre's larger endeavor: Python(x,y). Many thanks, Pierre. ________________________________ From: Davide Lasagna To: SciPy Users List Sent: Sun, February 27, 2011 6:04:49 PM Subject: Re: [SciPy-User] ANN: Spyder v2.0.8 Thanks for your work Pierre! Keep Going! Ciao _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.brett at gmail.com Wed Mar 2 22:22:16 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Wed, 2 Mar 2011 19:22:16 -0800 Subject: [SciPy-User] Problem with loading large MAT files In-Reply-To: References: Message-ID: Hi, On Mon, Feb 21, 2011 at 7:32 AM, Mohammad Abdollahi wrote: > Dear List > > I have a couple of MAT files around 450 MB of size that apparently they are > too large for scipy.io.loadmat function. In deed I keep getting this error : > > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio.py", > line 140, in loadmat > ?? matfile_dict = MR.get_variables() > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 404, in get_variables > ?? 
hdr, next_position = self.read_var_header() > ?File > "/Library/Frameworks/Python.framework/Versions/6.2/lib/python2.6/site-packages/scipy/io/matlab/mio5.py", > line 356, in read_var_header > ?? stream = StringIO(dcor.decompress(data)) > SystemError: Objects/stringobject.c:4271: bad argument to internal function > > > but everything is fine when I use a file with a size around 360 MB or sth. > So does anybody know how to fix this ? of course without having to subdivide > the original MAT file into samller parts. Further investigation : http://bugs.python.org/issue8571 - so I believe this is a bug in the Python zlib module. As far as I can see it should be fixed by the latest version of python 2.6, and load works on my machine for python 2.7. Can you upgrade somehow and test? Best, Matthew From eskalaiyarasan at gmail.com Thu Mar 3 03:46:33 2011 From: eskalaiyarasan at gmail.com (ESKalaiyarasan) Date: Thu, 3 Mar 2011 14:16:33 +0530 Subject: [SciPy-User] ERROR : while finding size Message-ID: hi, I am Kalaiyarasan.I am doing my graduate degree in anna university chennai.I am using python pylab for my project. in matlab we can get input from user using x=input() command similarly in python we use x=raw_input() command but i want to how to know size of data, in matlab we use size(x) command return size (example 3 X 2). what is command for python? please help me. -Kalaiyarasan -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh.holbrook at gmail.com Thu Mar 3 04:12:45 2011 From: josh.holbrook at gmail.com (Joshua Holbrook) Date: Thu, 3 Mar 2011 00:12:45 -0900 Subject: [SciPy-User] ERROR : while finding size In-Reply-To: References: Message-ID: Ooh, I got this. If you're looking at something like a list or a tuple: > [in]: a = [1, 2, 3] > [in]: len(a) > [out]: 3 However, if you have a numpy array, you'll want to do this instead: > [in]: a = array([1, 2, 3]) > [in]: a.shape > [out]: (3, ) That will give you a tuple with all your dimensions. If you are already familiar with matlab, I would recommend this guy for your basic numpy/scipy questions: http://www.scipy.org/NumPy_for_Matlab_Users Cheers, --Josh On Wed, Mar 2, 2011 at 11:46 PM, ESKalaiyarasan wrote: > hi, > ?? I am Kalaiyarasan.I am doing my graduate degree in anna university > chennai.I am using python pylab for my project. > > in matlab we can get input from user using x=input() command similarly in > python we use x=raw_input() command > > > but i want to how to know size of data, in matlab we use size(x) command > return size (example 3 X 2). what is command for python? > > > please help me. > > > > -Kalaiyarasan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From hector1618 at gmail.com Thu Mar 3 04:17:53 2011 From: hector1618 at gmail.com (Hector) Date: Thu, 3 Mar 2011 14:47:53 +0530 Subject: [SciPy-User] ERROR : while finding size In-Reply-To: References: Message-ID: On Thu, Mar 3, 2011 at 2:16 PM, ESKalaiyarasan wrote: > hi, > I am Kalaiyarasan.I am doing my graduate degree in anna university > chennai.I am using python pylab for my project. > > in matlab we can get input from user using x=input() command similarly in > python we use x=raw_input() command > Hello Kalaiyarasan, This is an information about raw_input function( I strongly recommend you to switch to IPython if you are not using it yet). 
The point I would like to highlight is that, raw input returns the *string* and hence may not be very useful if you are looking for an matrix input. In [17]: raw_input? Type: builtin_function_or_method Base Class: String Form: Namespace: Python builtin Docstring: raw_input([prompt]) -> string Read a string from standard input. The trailing newline is stripped. If the user hits EOF (Unix: Ctl-D, Windows: Ctl-Z+Return), raise EOFError. On Unix, GNU readline is used if enabled. The prompt string, if given, is printed without a trailing newline before reading. > > > but i want to how to know size of data, in matlab we use size(x) command > return size (example 3 X 2). what is command for python? > > And with the sting you can always use len(a) function to know the length of string. In [18]: a = raw_input() 343 In [19]: type(a) Out[19]: In [20]: len(a) Out[20]: 3 The additional tool I know in this regard is In [21]: a = int(raw_input()) 232 In [22]: type(a) Out[22]: Hope this will help you a bit. But I am not an expert here so there is a good chance that someone else can give you better answer. > > please help me. > > > > -Kalaiyarasan > > For the rest of group, kindly save me from misguiding him if I am not right. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- -Regards Hector Whenever you think you can or you can't, in either way you are right. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Thu Mar 3 04:44:20 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 3 Mar 2011 09:44:20 +0000 (UTC) Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. References: Message-ID: Hi, Wed, 02 Mar 2011 14:36:18 -0500, josef.pktd wrote: [clip] >> The Matlab convention >> >> ? ? ? ?corrcoef(x, y) == corrcoef(c_[x.ravel(), y.ravel()]) > > I don't remember matlab exactly, but I don't think there is a ravel, and > I think R also does > > cov(x, y) = np.dot((x-x.mean()).T, y-y.mean()) > > and normalized for corrcoef. There's a ravel, according to their docs: http://www.mathworks.com/help/techdoc/ref/cov.html """cov(X,Y), where X and Y are matrices with the same number of elements, is equivalent to cov([X(:) Y(:)]).""" X(:) is the matlab notation for raveling. -- Pauli Virtanen From danielstefanmader at googlemail.com Thu Mar 3 10:01:09 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 3 Mar 2011 16:01:09 +0100 Subject: [SciPy-User] ANN: Spyder v2.0.8 In-Reply-To: <4D6AD891.3020304@gmail.com> References: <4D6AD891.3020304@gmail.com> Message-ID: Hi, is there no option to adjust the indentation width in the Spyder editor? We use two spaces here, not four, so I'd be great to make this an option. I'd be fine with a modification in a config file, thought. Thanks in advance, Daniel From rajeev.raizada at gmail.com Tue Mar 1 14:50:29 2011 From: rajeev.raizada at gmail.com (Raj) Date: Tue, 1 Mar 2011 11:50:29 -0800 (PST) Subject: [SciPy-User] Q: How to calculate correlation between columns of a matrix, without looping? Message-ID: Dear SciPy users, I have a matrix (or, more strictly speaking, an array), and I want to calculate the correlation between each column and every other column, i.e. to make a standard correlation matrix. 
In Matlab, this is pretty straightforward: >> m = [ 1 2; -1 3; 0 4] m = 1 2 -1 3 0 4 >> corr(m,m) ans = 1.0000 -0.5000 -0.5000 1.0000 However, getting this same behaviour out of SciPy/NumPy is proving to be harder than I expected. Below are some attempts, and the output that they give. I also show the results of trying correlations on the transpose of m. The closest to the desired output that I can get is a weird 4x4 matrix made out of stacked copies of the correct correlation matrix. I could loop through the columns of the matrix, and calculate each correlation separately, but that seems like an ugly and inefficient workaround. Any help or advice greatly appreciated, Raj ----------------- In [1]: import scipy In [2]: import numpy In [3]: m = scipy.array([[ 1, 2],[ -1, 3],[ 0, 4]]) In [4]: m Out[4]: array([[ 1, 2], [-1, 3], [ 0, 4]]) In [5]: numpy.corrcoef(m,m) Out[5]: array([[ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.]]) In [6]: m_t = scipy.transpose(m) In [7]: m_t Out[7]: array([[ 1, -1, 0], [ 2, 3, 4]]) In [8]: numpy.corrcoef(m_t,m_t) Out[8]: array([[ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ], [ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ]]) In [9]: scipy.corrcoef(m,m) Out[9]: array([[ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.], [ 1., 1., 1., 1., 1., 1.]]) In [10]: scipy.corrcoef(m_t,m_t) Out[10]: array([[ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ], [ 1. , -0.5, 1. , -0.5], [-0.5, 1. , -0.5, 1. ]]) In [11]: import scipy.stats In [12]: scipy.stats.corrcoef(m,m) [ various error messages, culminating in...] ValueError: objects are not aligned In [13]: scipy.stats.corrcoef(m_t,m_t) [ various error messages, culminating in...] ValueError: objects are not aligned In [14]: scipy.stats.pearsonr(m,m) [ various error messages, culminating in...] ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() In [15]: scipy.stats.pearsonr(m_t,m_t) [ various error messages, culminating in...] ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() From yanghatespam at gmail.com Tue Mar 1 21:00:18 2011 From: yanghatespam at gmail.com (Yang Zhang) Date: Tue, 1 Mar 2011 18:00:18 -0800 Subject: [SciPy-User] What happened with chisquare_contingency? Message-ID: Looked like a useful addition: http://mail.scipy.org/pipermail/scipy-dev/2010-June/014538.html I originally was rolling my own and found the chisquare ddof parameter to be confusing - Googling turned this up. Just wondering if this is still on its way into the codebase. Thanks! From rajeev.raizada at gmail.com Thu Mar 3 12:04:34 2011 From: rajeev.raizada at gmail.com (Raj) Date: Thu, 3 Mar 2011 09:04:34 -0800 (PST) Subject: [SciPy-User] Q: How to calculate correlation between columns of a matrix, without looping? In-Reply-To: References: Message-ID: Sorry, please ignore this post. I actually sent it yesterday, but it somehow got help up in the system and appeared on the mailing list just now (Thurs.12pm,EST). My subsequent e.mail makes this redundant. Sorry again about the double-posting. I'm not sure why this post got stuck in limbo for a day before it finally appeared on the list, but please ignore it! 
Raj From warren.weckesser at enthought.com Thu Mar 3 13:41:02 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 3 Mar 2011 12:41:02 -0600 Subject: [SciPy-User] What happened with chisquare_contingency? In-Reply-To: References: Message-ID: On Tue, Mar 1, 2011 at 8:00 PM, Yang Zhang wrote: > Looked like a useful addition: > > http://mail.scipy.org/pipermail/scipy-dev/2010-June/014538.html > > I originally was rolling my own and found the chisquare ddof parameter > to be confusing - Googling turned this up. Just wondering if this is > still on its way into the codebase. > I put another version of the code in this ticket: http://projects.scipy.org/scipy/ticket/1203 During SciPy 2010 and later, Anthony Scopatz and I worked on a contingency table class, which is in scipy/stats/contingency_table.py here: https://github.com/scopatz/scipy Then other work (scipy bugs, scipy.signal, and the work that pays the bills) pushed this down in my "to do" list, and it never made it back up to the top. But perhaps it is time to bump this up again--thanks for the reminder! Warren > Thanks! > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Thu Mar 3 14:31:32 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 03 Mar 2011 11:31:32 -0800 Subject: [SciPy-User] ANN: Spyder v2.0.8 In-Reply-To: References: <4D6AD891.3020304@gmail.com> Message-ID: <4D6FEC94.6000003@noaa.gov> On 3/3/11 7:01 AM, Daniel Mader wrote: > We use two spaces here, not four, so I'd be great to make this > an option. I'd be fine with a modification in a config file, thought. While Python allows many options for indenting (including mixed tabs and spaces!), four spaces is a well established standard -- it really is a good idea to stick with it, especially if you are going to be sharing code with anyone, ever. -CHB -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From Chris.Barker at noaa.gov Thu Mar 3 14:38:13 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 03 Mar 2011 11:38:13 -0800 Subject: [SciPy-User] ERROR : while finding size In-Reply-To: References: Message-ID: <4D6FEE25.30807@noaa.gov> On 3/3/11 1:17 AM, Hector wrote: > The point I would > like to highlight is that, raw input returns the *string* and hence may > not be very useful if you are looking for an matrix input. indeed. you may be better off with input(), which does evaluate the expression and return the appropriate python object: In [30]: x = raw_input() [1,2,3,4] In [31]: x Out[31]: '[1,2,3,4]' #so x is a string -- probably not what you want here. In [32]: x = input() [1,2,3,4] In [33]: x Out[33]: [1, 2, 3, 4] # now x is a list-- more likely what you want: In [34]: len(x) Out[34]: 4 In [35]: x = input() np.array([1,2,3,4]) In [36]: x Out[36]: array([1, 2, 3, 4]) In [37]: x.shape Out[37]: (4,) now x is a numpy array, most likely what you want, but a bit awkward for users, but there is nopython lieterl notation for arrays -- only lists and tuples. # perhaps you can write it this way: In [38]: x = np.array(input()) [1,2,3,4] In [39]: x Out[39]: array([1, 2, 3, 4]) which gives you an array. 
You can get 2-d (and higher) arrays this way: In [40]: x = np.array(input()) [[1,2,3], [4,5,6]] In [41]: x Out[41]: array([[1, 2, 3], [4, 5, 6]]) In [42]: x.shape Out[42]: (2, 3) HTH, -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From vanforeest at gmail.com Thu Mar 3 15:13:08 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Thu, 3 Mar 2011 21:13:08 +0100 Subject: [SciPy-User] [SciPy-user] mgrid format from unstructured data In-Reply-To: References: <30993544.post@talk.nabble.com> Message-ID: Hi Sloan, Thanks for the hint. bye Nicky On 2 March 2011 11:56, Sloan Lindsey wrote: > Hi, > on mvsplines: > Take a look at http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html#scipy.interpolate.griddata > There is a linear version too. > > For the initial question: here is a snippit that works : > import scipy.interpolate as inter > import numpy as np > import matplotlib.pyplot as plt > datax,datay,dataz = np.genfromtxt('mydata.blah', skip_header=1, unpack=True) > points = np.array([datax,datay]).T > nearest = inter.NearestNDInterpolator(points,dataz) > linear = inter.LinearNDInterpolator(points,dataz,fill_value=0.0) > curvey = inter.CloughTocher2DInterpolator(points,dataz, fill_value = > 0.0) #careful about the boundary conditions > > #now you have 3 interpolants. To determine dataz @ datax,datay > value = curvey(datax,datay) > > #if you want a grid so that you can plot your interpolation: > xrange = np.arange(-10.0, 100.0, 0.05) > yrange = np.arange(-100.0, 100.0, 0.05) > mesh = np.meshgrid(xrange,yrange) > a_int_mesh = curvey(mesh) > plt.imshow(Zn-Zno) > plt.show > > This works for un ordered data. > Sloan > > On Tue, Mar 1, 2011 at 8:53 PM, nicky van foreest wrote: >> Hi, >> >> In relation to this topic: does anybody know of ?a scipy >> implementation for multivariate splines? >> >> bye >> >> Nicky >> >> On 1 March 2011 12:51, Peter Combs wrote: >>> On Feb 23, 2011, at 1:49 AM, Spiffalizer wrote: >>>> I have found some examples that looks like this >>>> x,y = np.mgrid[-1:1:10j,-1:1:10j] >>>> z = (x+y)*np.exp(-6.0*(x*x+y*y)) >>>> xnew,ynew = np.mgrid[-1:1:3j,-1:1:3j] >>>> tck = interpolate.bisplrep(x,y,z,s=0) >>>> znew = interpolate.bisplev(xnew[:,0],ynew[0,:],tck) >>>> >>>> >>>> So my question really is how to sort/convert my input to a format that can >>>> be used by the interpolate function? >>> >>> I use the LSQBivariateSpline functions: >>> >>> import numpy as np >>> import scipy.interpolate as interp >>> >>> num_knots = int(floor(sqrt(len(z)))) >>> xknots = np.linspace(xmin, xmax, n) >>> yknots = np.linspace(ymin, ymax, n) >>> interpolator = interp.LSQBivariateSpline(x, y, z, xknots, yknots) >>> znew = interpolator.ev(xnew, ynew) >>> >>> The object orientation is useful for my applications, for reasons that I no longer quite remember. ?Looking through the documentation for bisplrep, though, it doesn't seem like you need to worry about the order that the points are in. You might try something like: >>> >>> xknots = list(set(x)) >>> yknots = list(set(y)) >>> tck = interpolate.bisplrep(x,y,z, task=-1, tx = xknots, ty=yknots) >>> >>> but my understanding of the bisplrep function is hazy at best, so probably best to check it with data you already know the answer. 
>>> >>> Peter Combs >>> peter.combs at berkeley.edu >>> >>> >>> _______________________________________________ >>> SciPy-User mailing list >>> SciPy-User at scipy.org >>> http://mail.scipy.org/mailman/listinfo/scipy-user >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From franckkalala at googlemail.com Thu Mar 3 15:13:47 2011 From: franckkalala at googlemail.com (franck kalala) Date: Thu, 3 Mar 2011 20:13:47 +0000 Subject: [SciPy-User] error: command 'swig' failed with exit status 1 Message-ID: Hey all, I was install new release of scipy as describe here http://www.scipy.org/Installing_SciPy/Linux When I run the command python setup.py build I get at the end this error message swig: scipy/sparse/linalg/dsolve/umfpack/umfpack.i swig -python -I/usr/include/suitesparse -o build/src.linux-x86_64-2.6/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c -outdir build/src.linux-x86_64-2.6/scipy/sparse/linalg/dsolve/umfpack scipy/sparse/linalg/dsolve/umfpack/umfpack.i unable to execute swig: No such file or directory error: command 'swig' failed with exit status 1 Any help with this? Cheers Maziba -- ********** ++++ --- * -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Thu Mar 3 15:18:31 2011 From: e.antero.tammi at gmail.com (eat) Date: Thu, 3 Mar 2011 22:18:31 +0200 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: Hi, On Thu, Mar 3, 2011 at 11:44 AM, Pauli Virtanen wrote: > Hi, > > Wed, 02 Mar 2011 14:36:18 -0500, josef.pktd wrote: > [clip] > >> The Matlab convention > >> > >> corrcoef(x, y) == corrcoef(c_[x.ravel(), y.ravel()]) > > > > I don't remember matlab exactly, but I don't think there is a ravel, and > > I think R also does > > > > cov(x, y) = np.dot((x-x.mean()).T, y-y.mean()) > > > > and normalized for corrcoef. > > There's a ravel, according to their docs: > > http://www.mathworks.com/help/techdoc/ref/cov.html > > """cov(X,Y), where X and Y are matrices with the same number of elements, > is equivalent to cov([X(:) Y(:)]).""" > > X(:) is the matlab notation for raveling. > FWIW, please note following matlab/ octave behavior: > X= [1 2 7 3; 2 1 1 2]' X = 1 2 2 1 7 1 3 2 > Y= [4 2 7 1; 9 1 7 3]' Y = 4 9 2 1 7 7 1 3 > *corrcoef([X(:) Y(:)]) %(1* ans = 1.00000 0.26328 0.26328 1.00000 > *corrcoef([X Y]) %(2* ans = 1.00000 -0.54882 0.69462 0.13884 -0.54882 1.00000 -0.43644 0.31623 0.69462 -0.43644 1.00000 0.69007 0.13884 0.31623 0.69007 1.00000 > *corrcoef(X, Y) %(3* ans = 0.69462 0.13884 -0.43644 0.31623 and then equivalent numpy: In []: X= array([[1, 2, 7, 3], [2, 1, 1, 2]]) In []: X Out[]: array([[1, 2, 7, 3], [2, 1, 1, 2]]) In []: Y= array([[4, 2, 7, 1], [9, 1, 7, 3]]) In []: Y Out[]: array([[4, 2, 7, 1], [9, 1, 7, 3]]) In []: *corrcoef(X.ravel(), Y.ravel()) **#(1* Out[]: array([[ 1. , 0.26328398], [ 0.26328398, 1. ]]) In []: *corrcoef(X, Y) #(2* Out[]: array([[ 1. , -0.5488213 , 0.69462323, 0.13884203], [-0.5488213 , 1. , -0.43643578, 0.31622777], [ 0.69462323, -0.43643578, 1. , 0.69006556], [ 0.13884203, 0.31622777, 0.69006556, 1. ]]) > corrcoef(X, Y) %(3 In []: *corrcoef(?) 
#(3* Out[]: array([[ 0.69462 0.13884], [-0.43644 0.31623]]) So perhaps there does not exist any really simple and straightforward translation (of corrcoef) from matlab to numpy? Just as an example; how would you implement case %(3 properly with numpy? Regards, eat > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.isaac at gmail.com Thu Mar 3 15:51:50 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Thu, 03 Mar 2011 15:51:50 -0500 Subject: [SciPy-User] ANN: Spyder v2.0.8 In-Reply-To: <4D6FEC94.6000003@noaa.gov> References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> Message-ID: <4D6FFF66.4010706@gmail.com> On 3/3/2011 2:31 PM, Christopher Barker wrote: > four spaces is a well established standard ... for the standard library. Individual projects set their own standards. (Unfortunately, PEP 8 came down on the wrong side of tabs vs. spaces.) http://stackoverflow.com/questions/120926/why-does-python-pep-8-strongly-recommend-spaces-over-tabs-for-indentation fwiw, Alan Isaac From Chris.Barker at noaa.gov Thu Mar 3 17:39:21 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 03 Mar 2011 14:39:21 -0800 Subject: [SciPy-User] OT warning! Re: ANN: Spyder v2.0.8 In-Reply-To: <4D6FFF66.4010706@gmail.com> References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> <4D6FFF66.4010706@gmail.com> Message-ID: <4D701899.5090409@noaa.gov> On 3/3/11 12:51 PM, Alan G Isaac wrote: > On 3/3/2011 2:31 PM, Christopher Barker wrote: >> four spaces is a well established standard > > ... for the standard library. Individual projects > set their own standards. OK -- PEP 8 is only _official_ for the standard library, but if you define "standard" as "the way most people do it", then four spaces is it. > (Unfortunately, PEP 8 came > down on the wrong side of tabs vs. spaces.) clearly debatable, but my point is that it is a good idea for all projects to use the same conventions, and the ONLY one that makes any sense at this point in that context is four spaces. Pythons "there should be only one obvious way to do it" philosophy applies here. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From matthew.brett at gmail.com Thu Mar 3 17:52:09 2011 From: matthew.brett at gmail.com (Matthew Brett) Date: Thu, 3 Mar 2011 14:52:09 -0800 Subject: [SciPy-User] OT warning! Re: ANN: Spyder v2.0.8 In-Reply-To: <4D701899.5090409@noaa.gov> References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> <4D6FFF66.4010706@gmail.com> <4D701899.5090409@noaa.gov> Message-ID: Hi, On Thu, Mar 3, 2011 at 2:39 PM, Christopher Barker wrote: > On 3/3/11 12:51 PM, Alan G Isaac wrote: >> On 3/3/2011 2:31 PM, Christopher Barker wrote: >>> four spaces is a well established standard >> >> ... for the standard library. ?Individual projects >> set their own standards. > > OK -- PEP 8 is only _official_ for the standard library, but if you > define "standard" as "the way most people do it", then four spaces is it. > >> ?(Unfortunately, PEP 8 came >> down on the wrong side of tabs vs. spaces.) 
> > clearly debatable, but my point is that it is a good idea for all > projects to use the same conventions, and the ONLY one that makes any > sense at this point in that context is four spaces. > > Pythons "there should be only one obvious way to do it" philosophy > applies here. I enjoyed this blog post: http://www.artima.com/weblogs/viewpost.jsp?thread=74230 reprinted in: http://www.amazon.com/Best-Software-Writing-Selected-Introduced/dp/1590595009 Quote: Premise 1: For any given language, there are one or a few common coding styles. Premise 2: There is not now, nor will there ever be, a programming style whose benefit is significantly greater than any of the common styles. Premise 3: Approximately a gaboozillion cycles are spent on dealing with coding style variations. Premise 4: For any non-trivial project, a common coding style is a good thing. Conclusion: Thinking of all the code in the entire world as a single "project" with a single style, we would get more value than we do by allowing for variations in style. Best, Matthew From rajeev.raizada at gmail.com Thu Mar 3 18:34:32 2011 From: rajeev.raizada at gmail.com (Raj) Date: Thu, 3 Mar 2011 15:34:32 -0800 (PST) Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: <69be867a-e612-437e-ad2e-0d431e4d880c@a8g2000pri.googlegroups.com> On Mar 3, 3:18?pm, eat wrote: > So perhaps there does not exist any really simple and straightforward > translation > (of corrcoef) from matlab to numpy? Just as an example; how would you > implement case %(3 properly ?with numpy? > Regards, > eat It turns out that Matlab also embodies some confusion on this front, as it turns out that Matlab has two different functions for computing correlation! One is corr(), which is in the Matlab Stats Toolbox. This is the one that I have always used, and it is better-behaved, in my opinion, as I argue below. The other Matlab function is corrcoef(). This is not the Stats Toolbox function, it's in the main code base. I didn't even know that this function existed until this thread! :-) In my view, the Matlab function corr() is the one to emulate. It has the very desirable property that corr(m,m) and corr(m) are the same. Also, its behaviour when correlating two different matrices is very reasonable: http://www.mathworks.com/help/toolbox/stats/corr.html RHO = corr(X,Y) returns a p1-by-p2 matrix containing the pairwise correlation coefficient between each pair of columns in the n-by-p1 and n-by-p2 matrices X and Y. >> m1 = [ 1 2; -1 3; 0 4] m1 = 1 2 -1 3 0 4 >> corr(m1) ans = 1.0000 -0.5000 -0.5000 1.0000 >> corr(m1,m1) ans = 1.0000 -0.5000 -0.5000 1.0000 >> m2 = [ -1 1; 2 -1; -1 3] m2 = -1 1 2 -1 -1 3 >> corr(m1,m2) ans = -0.8660 0.5000 0 0.5000 In contrast, the Matlab corrcoef() does weird things, and is almost as bad as the SciPy corrcoef() function in that regard. >> corrcoef(m1) ans = 1.0000 -0.5000 -0.5000 1.0000 >> corrcoef(m1,m1) ans = 1 1 1 1 >> corrcoef(m1,m2) ans = 1.0000 0.2125 0.2125 1.0000 So, if anything in Matlab is to be taken as a role-model, I would advocate for the Stats Toolbox function corr(). Another argument for this corr() behavior is that the R function cor() behaves the same way. I guess R is the gold-standard for stats computing. 
Here are the above operations in R: > m1 <- matrix(c(1, -1, 0, 2, 3, 4),nrow=3) > m1 [,1] [,2] [1,] 1 2 [2,] -1 3 [3,] 0 4 > m2 <- matrix(c(-1, 2, -1, 1, -1, 3),nrow=3) > m2 [,1] [,2] [1,] -1 1 [2,] 2 -1 [3,] -1 3 > cor(m1) [,1] [,2] [1,] 1.0 -0.5 [2,] -0.5 1.0 > cor(m1,m1) [,1] [,2] [1,] 1.0 -0.5 [2,] -0.5 1.0 > cor(m1,m2) [,1] [,2] [1,] -0.8660254 0.5 [2,] 0.0000000 0.5 In summary, let's copy R's cor() and Matlab's corr(), not Matlab's corrcoef(). Raj From pav at iki.fi Thu Mar 3 18:56:47 2011 From: pav at iki.fi (Pauli Virtanen) Date: Thu, 3 Mar 2011 23:56:47 +0000 (UTC) Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. References: Message-ID: On Thu, 03 Mar 2011 22:18:31 +0200, eat wrote: >> *corrcoef([X(:) Y(:)]) %(1* > ans = > 1.00000 0.26328 > 0.26328 1.00000 [clip] >> *corrcoef(X, Y) %(3* > ans = > 0.69462 0.13884 > -0.43644 0.31623 You made a mistake here. The two always return the same results (cut and paste): >> X= [1 2 7 3; 2 1 1 2]' X = 1 2 2 1 7 1 3 2 >> Y= [4 2 7 1; 9 1 7 3]' Y = 4 9 2 1 7 7 1 3 >> corrcoef([X(:) Y(:)]) ans = 1.0000 0.2633 0.2633 1.0000 >> corrcoef(X, Y) ans = 1.0000 0.2633 0.2633 1.0000 From josef.pktd at gmail.com Thu Mar 3 19:07:25 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 3 Mar 2011 19:07:25 -0500 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: <69be867a-e612-437e-ad2e-0d431e4d880c@a8g2000pri.googlegroups.com> References: <69be867a-e612-437e-ad2e-0d431e4d880c@a8g2000pri.googlegroups.com> Message-ID: On Thu, Mar 3, 2011 at 6:34 PM, Raj wrote: > On Mar 3, 3:18?pm, eat wrote: >> So perhaps there does not exist any really simple and straightforward >> translation >> (of corrcoef) from matlab to numpy? Just as an example; how would you >> implement case %(3 properly ?with numpy? >> Regards, >> eat > > It turns out that Matlab also embodies some confusion > on this front, as it turns out that Matlab has > two different functions for computing correlation! > > One is corr(), which is in the Matlab Stats Toolbox. > This is the one that I have always used, > and it is better-behaved, in my opinion, > as I argue below. > > The other Matlab function is corrcoef(). > This is not the Stats Toolbox function, it's in the main code base. > I didn't even know that this function existed until this thread! ?:-) > > In my view, the Matlab function corr() is the one to emulate. > It has the very desirable property that corr(m,m) and corr(m) > are the same. > > Also, its behaviour when correlating two different matrices > is very reasonable: > http://www.mathworks.com/help/toolbox/stats/corr.html > RHO = corr(X,Y) returns a p1-by-p2 matrix containing the pairwise > correlation coefficient between each pair of columns in the n-by-p1 > and n-by-p2 matrices X and Y. > >>> m1 = [ 1 2; -1 3; 0 4] > m1 = > ? ? 1 ? ? 2 > ? ?-1 ? ? 3 > ? ? 0 ? ? 4 > >>> corr(m1) > ans = > ? ?1.0000 ? -0.5000 > ? -0.5000 ? ?1.0000 > >>> corr(m1,m1) > ans = > ? ?1.0000 ? -0.5000 > ? -0.5000 ? ?1.0000 > >>> m2 = [ -1 1; 2 -1; -1 3] > m2 = > ? ?-1 ? ? 1 > ? ? 2 ? ?-1 > ? ?-1 ? ? 3 > >>> corr(m1,m2) > ans = > ? -0.8660 ? ?0.5000 > ? ? ? ? 0 ? ?0.5000 > > In contrast, the Matlab corrcoef() does weird things, > and is almost as bad as the SciPy corrcoef() function in that regard. > >>> corrcoef(m1) > ans = > ? ?1.0000 ? -0.5000 > ? -0.5000 ? ?1.0000 > >>> corrcoef(m1,m1) > ans = > ? ? 1 ? ? 1 > ? ? 1 ? ? 1 > >>> corrcoef(m1,m2) > ans = > ? ?1.0000 ? 
?0.2125 > ? ?0.2125 ? ?1.0000 > > So, if anything in Matlab is to be taken as a role-model, > I would advocate for the Stats Toolbox function corr(). > > Another argument for this corr() behavior is that > the R function cor() behaves the same way. > I guess R is the gold-standard for stats computing. > > Here are the above operations in R: > >> m1 <- matrix(c(1, -1, 0, 2, 3, 4),nrow=3) >> m1 > ? ? [,1] [,2] > [1,] ? ?1 ? ?2 > [2,] ? -1 ? ?3 > [3,] ? ?0 ? ?4 > >> m2 <- matrix(c(-1, 2, -1, 1, -1, 3),nrow=3) >> m2 > ? ? [,1] [,2] > [1,] ? -1 ? ?1 > [2,] ? ?2 ? -1 > [3,] ? -1 ? ?3 > >> cor(m1) > ? ? [,1] [,2] > [1,] ?1.0 -0.5 > [2,] -0.5 ?1.0 > >> cor(m1,m1) > ? ? [,1] [,2] > [1,] ?1.0 -0.5 > [2,] -0.5 ?1.0 > >> cor(m1,m2) > ? ? ? ? ? [,1] [,2] > [1,] -0.8660254 ?0.5 > [2,] ?0.0000000 ?0.5 > > In summary, let's copy R's cor() and Matlab's corr(), > not Matlab's corrcoef(). that's the difference between stats and numpy/matlab generic, and since corrcoef is in numpy it also follows numpy convention like rowvar which always throws me off. >>> x = np.random.randn(10,3) >>> y = np.random.randn(10,2) >>> from scipy import stats >>> xs = stats.zscore(x) >>> ys = stats.zscore(y) >>> np.dot(xs.T, ys)/xs.shape[0] array([[ 0.44258451, 0.42834949], [-0.22926899, 0.41053462], [-0.03316133, 0.1747719 ]]) >>> np.corrcoef(x,y, rowvar=0)[:3, -2:] array([[ 0.44258451, 0.42834949], [-0.22926899, 0.41053462], [-0.03316133, 0.1747719 ]]) Josef > > Raj > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From stef.mientki at gmail.com Thu Mar 3 19:53:29 2011 From: stef.mientki at gmail.com (Stef Mientki) Date: Fri, 04 Mar 2011 01:53:29 +0100 Subject: [SciPy-User] OT warning! Re: ANN: Spyder v2.0.8 In-Reply-To: <4D701899.5090409@noaa.gov> References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> <4D6FFF66.4010706@gmail.com> <4D701899.5090409@noaa.gov> Message-ID: <4D703809.403@gmail.com> On 03-03-2011 23:39, Christopher Barker wrote: > On 3/3/11 12:51 PM, Alan G Isaac wrote: >> On 3/3/2011 2:31 PM, Christopher Barker wrote: >>> four spaces is a well established standard >> ... for the standard library. Individual projects >> set their own standards. > OK -- PEP 8 is only _official_ for the standard library, but if you > define "standard" as "the way most people do it", then four spaces is it. > >> (Unfortunately, PEP 8 came >> down on the wrong side of tabs vs. spaces.) > clearly debatable, but my point is that it is a good idea for all > projects to use the same conventions, and the ONLY one that makes any > sense at this point in that context is four spaces. Using a standard might be a good idea, but the standard is depending on the environment. Is Python the environment or the set of actually used tools. For me and the people around me, the programs we make, are the environment. We use PHP, Delphi, C, JAL, JS, Matlab, Labview, .... and in all these languages me and my environment uses 2 spaces. So the standard for Python is also 2 spaces. Secondly, the libraries and programs that we put in the open source community, by who will they be (probably) changed and maintained? So it seems to me perfectly legal to use 2 spaces as th? standard. cheers, Stef Mientki > Pythons "there should be only one obvious way to do it" philosophy > applies here. 
> > -Chris > From ralf.gommers at googlemail.com Thu Mar 3 20:13:40 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Fri, 4 Mar 2011 09:13:40 +0800 Subject: [SciPy-User] error: command 'swig' failed with exit status 1 In-Reply-To: References: Message-ID: On Fri, Mar 4, 2011 at 4:13 AM, franck kalala wrote: > > Hey all, > > I was install new release of scipy as describe here > http://www.scipy.org/Installing_SciPy/Linux > > When I run the command > > python setup.py build > > > I get at the end this error message > > > > swig: scipy/sparse/linalg/dsolve/umfpack/umfpack.i > swig -python -I/usr/include/suitesparse -o > build/src.linux-x86_64-2.6/scipy/sparse/linalg/dsolve/umfpack/_umfpack_wrap.c > -outdir build/src.linux-x86_64-2.6/scipy/sparse/linalg/dsolve/umfpack > scipy/sparse/linalg/dsolve/umfpack/umfpack.i > > > > unable to execute swig: No such file or directory > error: command 'swig' failed with exit status 1 > > Any help with this? > Do you actually need UMFPACK? Some of those instructions are very extensive, but UMFPACK and FFTW are not required for Scipy anymore. You should also try to use pre-built ATLAS binaries from your distribution or a third-party repository if you can. Also, which linux are you using and what other steps did you follow before "python setup.py build"? Cheers, Ralf From e.antero.tammi at gmail.com Fri Mar 4 02:21:51 2011 From: e.antero.tammi at gmail.com (eat) Date: Fri, 4 Mar 2011 09:21:51 +0200 Subject: [SciPy-User] Strange behaviour from corrcoef when calculating correlation-matrix in SciPy/NumPy. In-Reply-To: References: Message-ID: Hi On Fri, Mar 4, 2011 at 1:56 AM, Pauli Virtanen wrote: > On Thu, 03 Mar 2011 22:18:31 +0200, eat wrote: > > >> *corrcoef([X(:) Y(:)]) %(1* > > ans = > > 1.00000 0.26328 > > 0.26328 1.00000 > [clip] > >> *corrcoef(X, Y) %(3* > > ans = > > 0.69462 0.13884 > > -0.43644 0.31623 > > You made a mistake here. The two always return the same results (cut and > paste): > No, no mistake here, it's really the output from octave 3.2.4 (and if I remember correct, versions of matlab around 2005 behaved similar). But matlab seems to be consistent now then. Regards, eat > > >> X= [1 2 7 3; 2 1 1 2]' > > X = > > 1 2 > 2 1 > 7 1 > 3 2 > > >> Y= [4 2 7 1; 9 1 7 3]' > > Y = > > 4 9 > 2 1 > 7 7 > 1 3 > > >> corrcoef([X(:) Y(:)]) > > ans = > > 1.0000 0.2633 > 0.2633 1.0000 > > >> corrcoef(X, Y) > > ans = > > 1.0000 0.2633 > 0.2633 1.0000 > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Mar 4 10:19:07 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Mar 2011 16:19:07 +0100 Subject: [SciPy-User] Parallel processing Message-ID: Hi all, my question is a bit off-topic. However, I hope that I will get an answer I would like to parallelize a number of calls to mycmd nproc denotes the number of processes . for i in range(nproc): try: retcode = call("mycmd" + " myarg", shell=True) if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: print >>sys.stderr, "Child returned", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e How can I manage that with python ? Any pointer would be appreciated. 
Thanks in advance Nils From pav at iki.fi Fri Mar 4 10:31:17 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 4 Mar 2011 15:31:17 +0000 (UTC) Subject: [SciPy-User] Parallel processing References: Message-ID: Fri, 04 Mar 2011 16:19:07 +0100, Nils Wagner wrote: [clip] > I would like to parallelize a number of calls to mycmd > > nproc denotes the number of processes . > > for i in range(nproc): > > try: > retcode = call("mycmd" + " myarg", shell=True) if retcode < 0: > print >>sys.stderr, "Child was terminated by [clip] > How can I manage that with python ? For example, use `subprocess.Popen` to spawn the processes to the background. -- Pauli Virtanen From matthieu.brucher at gmail.com Fri Mar 4 10:34:18 2011 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 4 Mar 2011 16:34:18 +0100 Subject: [SciPy-User] Parallel processing In-Reply-To: References: Message-ID: Hi Nils, As you are launching subprocesses, you can always do a crude: import subprocess processes = [] for i in range(nproc): processes.push_back(subprocess.Popen(...)) for process in processes: retcode = process.wait() ... Matthieu 2011/3/4 Nils Wagner > Hi all, > > my question is a bit off-topic. However, I hope that I > will get an answer > > I would like to parallelize a number of calls to mycmd > > nproc denotes the number of processes . > > for i in range(nproc): > > try: > retcode = call("mycmd" + " myarg", shell=True) > if retcode < 0: > print >>sys.stderr, "Child was terminated by > signal", -retcode > else: > print >>sys.stderr, "Child returned", retcode > except OSError, e: > print >>sys.stderr, "Execution failed:", e > > > How can I manage that with python ? > > Any pointer would be appreciated. > > Thanks in advance > Nils > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn: http://www.linkedin.com/in/matthieubrucher -------------- next part -------------- An HTML attachment was scrubbed... URL: From baker.alexander at gmail.com Fri Mar 4 10:57:44 2011 From: baker.alexander at gmail.com (alexander baker) Date: Fri, 4 Mar 2011 15:57:44 +0000 Subject: [SciPy-User] Parallel processing In-Reply-To: References: Message-ID: Would recommend the multiprocess back-port to 2.5, have been playing with example on link below. Regards Alex Baker http://www.alexfb.com/cgi-bin/twiki/view/PtPhysics/WebHome#Multiprocess_Example Mobile: 07788 872118 Blog: www.alexfb.com -- All science is either physics or stamp collecting. On 4 March 2011 15:34, Matthieu Brucher wrote: > Hi Nils, > > As you are launching subprocesses, you can always do a crude: > > import subprocess > > processes = [] > > for i in range(nproc): > processes.push_back(subprocess.Popen(...)) > > for process in processes: > retcode = process.wait() > ... > > Matthieu > > 2011/3/4 Nils Wagner > > Hi all, >> >> my question is a bit off-topic. However, I hope that I >> will get an answer >> >> I would like to parallelize a number of calls to mycmd >> >> nproc denotes the number of processes . >> >> for i in range(nproc): >> >> try: >> retcode = call("mycmd" + " myarg", shell=True) >> if retcode < 0: >> print >>sys.stderr, "Child was terminated by >> signal", -retcode >> else: >> print >>sys.stderr, "Child returned", retcode >> except OSError, e: >> print >>sys.stderr, "Execution failed:", e >> >> >> How can I manage that with python ? >> >> Any pointer would be appreciated. 
>> >> Thanks in advance >> Nils >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > > -- > Information System Engineer, Ph.D. > Blog: http://matt.eifelle.com > LinkedIn: http://www.linkedin.com/in/matthieubrucher > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Fri Mar 4 11:21:05 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Fri, 04 Mar 2011 17:21:05 +0100 Subject: [SciPy-User] Parallel processing In-Reply-To: References: Message-ID: Hi all, Thank you all for your comments and suggestions concerning parallel processing ! Nils From vineethrakesh at gmail.com Fri Mar 4 14:20:58 2011 From: vineethrakesh at gmail.com (vineeth) Date: Fri, 04 Mar 2011 14:20:58 -0500 Subject: [SciPy-User] help stats module to determine the type of distribution Message-ID: <4D713B9A.7000600@gmail.com> Hello all, I am looking forward to determine the type of distribution my data set follows. Is there any way in stats module of python to do this kind of operation? or is there any round about way? Thank You Vineeth From josef.pktd at gmail.com Fri Mar 4 14:36:05 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Fri, 4 Mar 2011 14:36:05 -0500 Subject: [SciPy-User] help stats module to determine the type of distribution In-Reply-To: <4D713B9A.7000600@gmail.com> References: <4D713B9A.7000600@gmail.com> Message-ID: On Fri, Mar 4, 2011 at 2:20 PM, vineeth wrote: > Hello all, > > I am looking forward to determine the type of distribution my data set > follows. Is there any way in stats module of python to do this kind of > operation? or is there any round about way? It`s not yet in scipy. I have an experimental script in scikits.statsmodels, but James was working on this more recently, http://code.google.com/p/pythonequations/ , who posted his version to the mailing list a while ago. How does your data look like, how many observations, and does the histogram have some observable pattern? For some distribution the fit method of the scipy.stats distributions works well, and you can use kolmogorov-smirnov, scipy.stats.kstest as a distance measure to see which one fits best. Josef > > Thank You > Vineeth > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robertrobert93 at yahoo.com Tue Mar 8 04:10:48 2011 From: robertrobert93 at yahoo.com (Robert Robert) Date: Tue, 8 Mar 2011 01:10:48 -0800 (PST) Subject: [SciPy-User] fitting a 2D rotated gauss Message-ID: <52682.62674.qm@web59316.mail.re1.yahoo.com> Hello all, I was looking at the scipy cookbook and found a code for fitting 2D gauss. The code I found in the cookbook was for a not rotated 2D gauss. It points to another code which could be found through the following link http://code.google.com/p/agpy/source/browse/trunk/agpy/gaussfitter.py This code found at the link, says to be able to fit a rotated 2D Gauss. What I don't understand of the code found at the given link is that when it calculates the guess parameters for the 2D rotated gauss it always returns 0.0 degrees. How is this possible. 
I tried out the code myself, generated a rotated gauss and then let the function moments() in the code calculate the guess parameters. I don't seem to get the correct sigma x , sigma y and the rotation back. The rotation seems to be 0.0, as I expected, looking at the code. I was wondering if this is an error in the code or that the guess rotation is always zero ?? I hope someone can please help me out with this question. regards, pim From kwgoodman at gmail.com Tue Mar 8 12:39:45 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 09:39:45 -0800 Subject: [SciPy-User] [ANN] Bottleneck 0.4.0 Message-ID: Bottleneck is a collection of fast NumPy array functions written in Cython. It contains functions like median, nanmedian, nanargmax, move_mean. The fourth release of Bottleneck contains new functions and bug fixes. Separate source code distributions are now made for 32 bit and 64 bit operating systems. New functions: - rankdata() - nanrankdata() Enhancements: - Optionally specify the shapes of the arrays used in benchmark - Can specify which input arrays to fill with one-third NaNs in benchmark Breaks from 0.3.0: - Removed group_nanmean() function - Bump dependency from NumPy 1.4.1 to NumPy 1.5.1 - C files are now generated with Cython 0.14.1 instead of 0.13 Bug fixes: - #6 Some functions gave wrong output dtype for some input dtypes on 32 bit OS - #7 Some functions choked on size zero input arrays - #8 Segmentation fault with Cython 0.14.1 (but not 0.13) download ? http://pypi.python.org/pypi/Bottleneck docs ? http://berkeleyanalytics.com/bottleneck code ? http://github.com/kwgoodman/bottleneck mailing list ? http://groups.google.com/group/bottle-neck mailing list 2 ? http://mail.scipy.org/mailman/listinfo/scipy-user From kwgoodman at gmail.com Tue Mar 8 16:19:48 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 13:19:48 -0800 Subject: [SciPy-User] Bottleneck 0.4.1 Message-ID: Bottleneck is a collection of fast NumPy array functions written in Cython. It contains functions like median, nanmedian, nanargmax, move_mean. This is a bug fix release. The low-level functions nanstd_3d_int32_axis1 and nanstd_3d_int64_axis1, called by bottleneck.nanstd(), wrote beyond the memory owned by the output array if both arr.shape[1] == 0 and arr.shape[0] > arr.shape[2], where arr is the input array. Thanks to Christoph Gohlke for finding an example to demonstrate the bug. download ? http://pypi.python.org/pypi/Bottleneck docs ? http://berkeleyanalytics.com/bottleneck code ? http://github.com/kwgoodman/bottleneck mailing list ? http://groups.google.com/group/bottle-neck mailing list 2 ? http://mail.scipy.org/mailman/listinfo/scipy-user From kwgoodman at gmail.com Tue Mar 8 18:07:57 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 15:07:57 -0800 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 1:19 PM, Keith Goodman wrote: > Bottleneck is a collection of fast NumPy array functions written in > Cython. It contains functions like median, nanmedian, nanargmax, > move_mean. > > This is a bug fix release. > > The low-level functions nanstd_3d_int32_axis1 and > nanstd_3d_int64_axis1, called by bottleneck.nanstd(), wrote beyond the > memory owned by the output array if both arr.shape[1] == 0 and > arr.shape[0] > arr.shape[2], where arr is the input array. > > Thanks to Christoph Gohlke for finding an example to demonstrate the bug. How embarrassing! 
The same bug in nanstd() that was fixed in 0.4.1 exists in nanvar(). Thank you, Christoph, for pointing that out. Fixed in Bottleneck 0.4.2. > download > ? http://pypi.python.org/pypi/Bottleneck > docs > ? http://berkeleyanalytics.com/bottleneck > code > ? http://github.com/kwgoodman/bottleneck > mailing list > ? http://groups.google.com/group/bottle-neck > mailing list 2 > ? http://mail.scipy.org/mailman/listinfo/scipy-user > From wesmckinn at gmail.com Tue Mar 8 21:06:05 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Tue, 8 Mar 2011 21:06:05 -0500 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 6:07 PM, Keith Goodman wrote: > On Tue, Mar 8, 2011 at 1:19 PM, Keith Goodman wrote: >> Bottleneck is a collection of fast NumPy array functions written in >> Cython. It contains functions like median, nanmedian, nanargmax, >> move_mean. >> >> This is a bug fix release. >> >> The low-level functions nanstd_3d_int32_axis1 and >> nanstd_3d_int64_axis1, called by bottleneck.nanstd(), wrote beyond the >> memory owned by the output array if both arr.shape[1] == 0 and >> arr.shape[0] > arr.shape[2], where arr is the input array. >> >> Thanks to Christoph Gohlke for finding an example to demonstrate the bug. > > How embarrassing! The same bug in nanstd() that was fixed in 0.4.1 > exists in nanvar(). Thank you, Christoph, for pointing that out. Fixed > in Bottleneck 0.4.2. > >> download >> ? http://pypi.python.org/pypi/Bottleneck >> docs >> ? http://berkeleyanalytics.com/bottleneck >> code >> ? http://github.com/kwgoodman/bottleneck >> mailing list >> ? http://groups.google.com/group/bottle-neck >> mailing list 2 >> ? http://mail.scipy.org/mailman/listinfo/scipy-user >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Keith, Any interest adding a "min_periods" argument to the moving window functions in bottleneck? cf. http://pandas.sourceforge.net/stats_moments.html It would introduce some additional CPU cycles into the existing functions of course, but there are many practical applications (e.g. smoothing slightly patchy data) where you want it. One random question. Any idea on the long import time: $ time python -c "import bottleneck" real 0m0.712s user 0m0.546s sys 0m0.114s $ time python -c "import numpy" real 0m0.142s user 0m0.090s sys 0m0.049s $ time python -c "import scipy" real 0m0.201s user 0m0.132s sys 0m0.066s Best, Wes From kwgoodman at gmail.com Tue Mar 8 22:17:40 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 19:17:40 -0800 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 6:06 PM, Wes McKinney wrote: > Any interest adding a "min_periods" argument to the moving window > functions in bottleneck? Each moving window function in Bottleneck has a NaN version and a non-NaN version, so move_nanmean() and move_mean(), for example. Pandas has one version but you can adjust the min_periods to get either the NaN or non-NaN version or anything in between. That's clever. The rest of Bottleneck uses the NaN and non-NaN naming, for example, nanmedian and median. I think it is simpler (to discover what Bottleneck can do for example) to stick with that. Much harder to explain that the functionality is in a parameter that most users haven't seen before. But let me think about it. It would be useful. > One random question. 
Any idea on the long import time: > > $ time python -c "import bottleneck" > > real ? ?0m0.712s > user ? ?0m0.546s > sys ? ? 0m0.114s > $ time python -c "import numpy" > > real ? ?0m0.142s > user ? ?0m0.090s > sys ? ? 0m0.049s > $ time python -c "import scipy" > > real ? ?0m0.201s > user ? ?0m0.132s > sys ? ? 0m0.066s Bottleneck has many low-level functions, for example, median_2d_float64_axis0, median_2d_float64_axis1, median_2d_int32_axis0, etc, etc. Maybe that explains it? But scipy has a lot of functions too, so I don't know. From wesmckinn at gmail.com Tue Mar 8 23:18:05 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Tue, 8 Mar 2011 23:18:05 -0500 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 10:17 PM, Keith Goodman wrote: > On Tue, Mar 8, 2011 at 6:06 PM, Wes McKinney wrote: > >> Any interest adding a "min_periods" argument to the moving window >> functions in bottleneck? > > Each moving window function in Bottleneck has a NaN version and a > non-NaN version, so move_nanmean() and move_mean(), for example. > Pandas has one version but you can adjust the min_periods to get > either the NaN or non-NaN version or anything in between. That's > clever. Yes, this way you only need one API function. If you don't specify min_periods, you get the move_* function and if you do, then you get move_nan* but requiring a certain number of observations. Some performance is sacrificed but perhaps for the greater good :) > The rest of Bottleneck uses the NaN and non-NaN naming, for example, > nanmedian and median. I think it is simpler (to discover what > Bottleneck can do for example) to stick with that. Much harder to > explain that the functionality is in a parameter that most users > haven't seen before. But let me think about it. It would be useful. > >> One random question. Any idea on the long import time: >> >> $ time python -c "import bottleneck" >> >> real ? ?0m0.712s >> user ? ?0m0.546s >> sys ? ? 0m0.114s >> $ time python -c "import numpy" >> >> real ? ?0m0.142s >> user ? ?0m0.090s >> sys ? ? 0m0.049s >> $ time python -c "import scipy" >> >> real ? ?0m0.201s >> user ? ?0m0.132s >> sys ? ? 0m0.066s > > Bottleneck has many low-level functions, for example, > median_2d_float64_axis0, median_2d_float64_axis1, > median_2d_int32_axis0, etc, etc. Maybe that explains it? But scipy has > a lot of functions too, so I don't know. Yeah, I thought this was odd. Initially I thought perhaps it was due to the size of the DLLs. func.so and move.so but they are only 3 mb or so on my machine. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From kwgoodman at gmail.com Tue Mar 8 23:52:43 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 20:52:43 -0800 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 8:18 PM, Wes McKinney wrote: > On Tue, Mar 8, 2011 at 10:17 PM, Keith Goodman wrote: >>> One random question. Any idea on the long import time: >>> >>> $ time python -c "import bottleneck" >>> >>> real ? ?0m0.712s >>> user ? ?0m0.546s >>> sys ? ? 0m0.114s >>> $ time python -c "import numpy" >>> >>> real ? ?0m0.142s >>> user ? ?0m0.090s >>> sys ? ? 0m0.049s >>> $ time python -c "import scipy" >>> >>> real ? ?0m0.201s >>> user ? ?0m0.132s >>> sys ? ? 
0m0.066s >> >> Bottleneck has many low-level functions, for example, >> median_2d_float64_axis0, median_2d_float64_axis1, >> median_2d_int32_axis0, etc, etc. Maybe that explains it? But scipy has >> a lot of functions too, so I don't know. > > Yeah, I thought this was odd. Initially I thought perhaps it was due > to the size of the DLLs. func.so and move.so but they are only 3 mb or > so on my machine. The timings on my machine (64-bit Ubuntu 10.10) are not quite as bad: $ time python -c "import bottleneck" real 0m0.192s user 0m0.150s sys 0m0.040s $ time python -c "import numpy" real 0m0.060s user 0m0.030s sys 0m0.030s $ time python -c "import scipy" real 0m0.091s user 0m0.040s sys 0m0.050s I'm interested in ways to cut the import time if anyone knows of any. From kwgoodman at gmail.com Tue Mar 8 23:57:01 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Tue, 8 Mar 2011 20:57:01 -0800 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 8:52 PM, Keith Goodman wrote: > On Tue, Mar 8, 2011 at 8:18 PM, Wes McKinney wrote: >> On Tue, Mar 8, 2011 at 10:17 PM, Keith Goodman wrote: > >>>> One random question. Any idea on the long import time: >>>> >>>> $ time python -c "import bottleneck" >>>> >>>> real ? ?0m0.712s >>>> user ? ?0m0.546s >>>> sys ? ? 0m0.114s >>>> $ time python -c "import numpy" >>>> >>>> real ? ?0m0.142s >>>> user ? ?0m0.090s >>>> sys ? ? 0m0.049s >>>> $ time python -c "import scipy" >>>> >>>> real ? ?0m0.201s >>>> user ? ?0m0.132s >>>> sys ? ? 0m0.066s >>> >>> Bottleneck has many low-level functions, for example, >>> median_2d_float64_axis0, median_2d_float64_axis1, >>> median_2d_int32_axis0, etc, etc. Maybe that explains it? But scipy has >>> a lot of functions too, so I don't know. >> >> Yeah, I thought this was odd. Initially I thought perhaps it was due >> to the size of the DLLs. func.so and move.so but they are only 3 mb or >> so on my machine. ...the binaries are not massive but the function count is very high. > The timings on my machine (64-bit Ubuntu 10.10) are not quite as bad: > > $ time python -c "import bottleneck" > real ? ?0m0.192s > user ? ?0m0.150s > sys ? ? 0m0.040s > > $ time python -c "import numpy" > real ? ?0m0.060s > user ? ?0m0.030s > sys ? ? 0m0.030s > > $ time python -c "import scipy" > real ? ?0m0.091s > user ? ?0m0.040s > sys ? ? 0m0.050s > > I'm interested in ways to cut the import time if anyone knows of any. > From ben.whale at otago.ac.nz Wed Mar 9 00:15:37 2011 From: ben.whale at otago.ac.nz (Ben Whale) Date: Wed, 09 Mar 2011 18:15:37 +1300 Subject: [SciPy-User] Can scipy.integrate.ode/odeint use the DOPRI integrator? Message-ID: <4D770CF9.90106@otago.ac.nz> A friend mentioned that sci.integrate supported DOPRI, but I can't seem to find any documentation. Can anyone confirm if this is true? Thanks in advance, Ben From warren.weckesser at enthought.com Wed Mar 9 01:06:06 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Mar 2011 00:06:06 -0600 Subject: [SciPy-User] Can scipy.integrate.ode/odeint use the DOPRI integrator? In-Reply-To: <4D770CF9.90106@otago.ac.nz> References: <4D770CF9.90106@otago.ac.nz> Message-ID: On Tue, Mar 8, 2011 at 11:15 PM, Ben Whale wrote: > A friend mentioned that sci.integrate supported DOPRI, but I can't seem > to find any documentation. Can anyone confirm if this is true? > > Yes, it is true; the class scipy.integrate.ode includes the solves 'dopri5' and dop853'. An example is attached. 
The best documentation for ode appears to be here: http://docs.scipy.org/scipy/docs/scipy.integrate.ode.ode/#ode Warren Thanks in advance, > Ben > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: lorenz_dopri_demo.py Type: application/octet-stream Size: 845 bytes Desc: not available URL: From warren.weckesser at enthought.com Wed Mar 9 01:21:40 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Mar 2011 00:21:40 -0600 Subject: [SciPy-User] Can scipy.integrate.ode/odeint use the DOPRI integrator? In-Reply-To: References: <4D770CF9.90106@otago.ac.nz> Message-ID: On Wed, Mar 9, 2011 at 12:06 AM, Warren Weckesser < warren.weckesser at enthought.com> wrote: > > > On Tue, Mar 8, 2011 at 11:15 PM, Ben Whale wrote: > >> A friend mentioned that sci.integrate supported DOPRI, but I can't seem >> to find any documentation. Can anyone confirm if this is true? >> >> > > Yes, it is true; the class scipy.integrate.ode includes the solves 'dopri5' > and dop853'. An example is attached. > > The best documentation for ode appears to be here: > http://docs.scipy.org/scipy/docs/scipy.integrate.ode.ode/#ode > ...which is also the docstring for scipy.integrate.ode. Warren > > > Warren > > > Thanks in advance, >> Ben >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eskalaiyarasan at gmail.com Wed Mar 9 03:04:27 2011 From: eskalaiyarasan at gmail.com (ESKalaiyarasan) Date: Wed, 9 Mar 2011 03:04:27 -0500 Subject: [SciPy-User] an example program for 3d plot Message-ID: I saw example program in website " http://matplotlib.sourceforge.net/examples/mplot3d/2dcollections3d_demo.html " but i am not able to run it It say that error as projection ='3d' is not defined or not available. how to plot matrix in 3d help me. ---thanks Kalaiyarasan -------------- next part -------------- An HTML attachment was scrubbed... URL: From e.antero.tammi at gmail.com Wed Mar 9 03:21:02 2011 From: e.antero.tammi at gmail.com (eat) Date: Wed, 9 Mar 2011 10:21:02 +0200 Subject: [SciPy-User] an example program for 3d plot In-Reply-To: References: Message-ID: Hi, On Wed, Mar 9, 2011 at 10:04 AM, ESKalaiyarasan wrote: > I saw example program in website " > http://matplotlib.sourceforge.net/examples/mplot3d/2dcollections3d_demo.html > " but i am not able to run it > It say that error as projection ='3d' is not defined or not available. > how to plot matrix in 3d help me. > Matplotlib has its own mailing list. See for example https://lists.sourceforge.net/lists/listinfo/matplotlib-users Regards, eat > > > ---thanks > > Kalaiyarasan > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josef.pktd at gmail.com Wed Mar 9 06:55:59 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 9 Mar 2011 06:55:59 -0500 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 11:57 PM, Keith Goodman wrote: > On Tue, Mar 8, 2011 at 8:52 PM, Keith Goodman wrote: >> On Tue, Mar 8, 2011 at 8:18 PM, Wes McKinney wrote: >>> On Tue, Mar 8, 2011 at 10:17 PM, Keith Goodman wrote: >> >>>>> One random question. Any idea on the long import time: >>>>> >>>>> $ time python -c "import bottleneck" >>>>> >>>>> real ? ?0m0.712s >>>>> user ? ?0m0.546s >>>>> sys ? ? 0m0.114s >>>>> $ time python -c "import numpy" >>>>> >>>>> real ? ?0m0.142s >>>>> user ? ?0m0.090s >>>>> sys ? ? 0m0.049s >>>>> $ time python -c "import scipy" >>>>> >>>>> real ? ?0m0.201s >>>>> user ? ?0m0.132s >>>>> sys ? ? 0m0.066s >>>> >>>> Bottleneck has many low-level functions, for example, >>>> median_2d_float64_axis0, median_2d_float64_axis1, >>>> median_2d_int32_axis0, etc, etc. Maybe that explains it? But scipy has >>>> a lot of functions too, so I don't know. >>> >>> Yeah, I thought this was odd. Initially I thought perhaps it was due >>> to the size of the DLLs. func.so and move.so but they are only 3 mb or >>> so on my machine. > > ...the binaries are not massive but the function count is very high. > >> The timings on my machine (64-bit Ubuntu 10.10) are not quite as bad: >> >> $ time python -c "import bottleneck" >> real ? ?0m0.192s >> user ? ?0m0.150s >> sys ? ? 0m0.040s >> >> $ time python -c "import numpy" >> real ? ?0m0.060s >> user ? ?0m0.030s >> sys ? ? 0m0.030s >> >> $ time python -c "import scipy" >> real ? ?0m0.091s >> user ? ?0m0.040s >> sys ? ? 0m0.050s >> >> I'm interested in ways to cut the import time if anyone knows of any. import scipy is fast because it just imports numpy and the scaffolding for scipy. It`s not importing any big subpackages, so the import time comes with import scipy.subpackage. Josef >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From alan.isaac at gmail.com Wed Mar 9 07:42:22 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Wed, 09 Mar 2011 07:42:22 -0500 Subject: [SciPy-User] an example program for 3d plot In-Reply-To: References: Message-ID: <4D7775AE.8000203@gmail.com> On 3/9/2011 3:04 AM, ESKalaiyarasan wrote: > projection ='3d' is not defined or not available You need a more recent version of Matplotlib. Alan Isaac From nwagner at iam.uni-stuttgart.de Wed Mar 9 08:10:43 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Mar 2011 14:10:43 +0100 Subject: [SciPy-User] Signal processing and filtering functions Message-ID: Hi all, I am looking for the so-called SAE (Society of Automotive Engineers) Filter functions which are available in abaqus python. The Abaqus CAE User's manual points to http://www.nhtsa.gov/Research/Databases+and+Software/Signal+Analysis+Software+for+Windows Has someone implemented those filter functions ? Any pointer would be appreciated. Thanks in advance. 
Nils From nwagner at iam.uni-stuttgart.de Wed Mar 9 10:53:15 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Mar 2011 16:53:15 +0100 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: On Wed, 09 Mar 2011 14:10:43 +0100 "Nils Wagner" wrote: > Hi all, > > I am looking for the so-called SAE (Society of >Automotive > Engineers) Filter functions which are available in >abaqus > python. > > The Abaqus CAE User's manual points to > > http://www.nhtsa.gov/Research/Databases+and+Software/Signal+Analysis+Software+for+Windows > > Has someone implemented those filter functions ? > > Any pointer would be appreciated. > > Thanks in advance. > > Nils > > I should add the following information The SAE filtering operation performs two-pass, zero phase shift, second-order Butterworth filtering. Is it possible to reproduce such an operation with scipy.signal ? Nils From jkington at wisc.edu Wed Mar 9 11:33:49 2011 From: jkington at wisc.edu (Joe Kington) Date: Wed, 09 Mar 2011 10:33:49 -0600 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: > The SAE filtering operation performs two-pass, zero phase shift, second-order Butterworth filtering. You may already be aware of it, but that sounds suspiciously like this cookbook example... http://www.scipy.org/Cookbook/FiltFilt I'm not sure if that helps any, but I hope it does! -Joe On Wed, Mar 9, 2011 at 9:53 AM, Nils Wagner wrote: > On Wed, 09 Mar 2011 14:10:43 +0100 > "Nils Wagner" wrote: > > Hi all, > > > > I am looking for the so-called SAE (Society of > >Automotive > > Engineers) Filter functions which are available in > >abaqus > > python. > > > > The Abaqus CAE User's manual points to > > > > > http://www.nhtsa.gov/Research/Databases+and+Software/Signal+Analysis+Software+for+Windows > > > > Has someone implemented those filter functions ? > > > > Any pointer would be appreciated. > > > > Thanks in advance. > > > > Nils > > > > > I should add the following information > > The SAE filtering operation performs two-pass, zero phase > shift, second-order Butterworth filtering. > > Is it possible to reproduce such an operation with > scipy.signal ? > > Nils > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.anton.letnes at gmail.com Wed Mar 9 11:42:48 2011 From: paul.anton.letnes at gmail.com (Paul Anton Letnes) Date: Wed, 9 Mar 2011 17:42:48 +0100 Subject: [SciPy-User] building scipy Message-ID: <4B771B6B-30EE-4C82-A540-BED4D89EEF91@gmail.com> Hi everyone. I have built and installed scipy on a system on which I am not the administrator. Using the BLAS and LAPACK environment variables, I was able to install scipy successfully: env BLAS=$HOME/usr/local/lib/libgoto2.so LAPACK=$HOME/usr/local/lib/libgoto2.so python setup.py install --user As can be seen from __config__.py below, it seems that scipy somehow knows about this, finds the right folders, and so on. However, when importing scipy, python+scipy is unable to find libgoto2.so. I am able to fix the problem by exporting LD_LIBRARY_PATH to include the BLAS/LAPACK folder, but this seems like a bad way to proceed. More importantly, why doesn't scipy look for LAPACK and BLAS in the folder specified in the __config__.py file? Cheers, Paul. 
+++++++++++++++ __config__.py +++++++++++++++ # This file is generated by /gpfs/home/paulanto/src/scipy-0.9.0/setup.py # It contains system_info results at the time of building this package. __all__ = ["get_info","show"] blas_info={'libraries': ['goto2'], 'library_dirs': ['/home/paulanto/usr/local/lib'], 'language': 'f77'} lapack_info={'libraries': ['goto2'], 'library_dirs': ['/home/paulanto/usr/local/lib'], 'language': 'f77'} atlas_threads_info={} blas_opt_info={'libraries': ['goto2'], 'library_dirs': ['/home/paulanto/usr/local/lib'], 'define_macros': [('NO_ATLAS_INFO', 1)], 'language': 'f77'} umfpack_info={} atlas_blas_threads_info={} lapack_opt_info={'libraries': ['goto2', 'goto2'], 'library_dirs': ['/home/paulanto/usr/local/lib'], 'define_macros': [('NO_ATLAS_INFO', 1)], 'language': 'f77'} atlas_info={} lapack_mkl_info={} blas_mkl_info={} atlas_blas_info={} mkl_info={} +++++++++++++++ Test that fails +++++++++++++++ ~ % python -c 'from scipy.linalg.lapack import flapack as lapack' Traceback (most recent call last): File "", line 1, in File "/home/paulanto/.local/lib/python2.7/site-packages/scipy/linalg/__init__.py", line 9, in from basic import * File "/home/paulanto/.local/lib/python2.7/site-packages/scipy/linalg/basic.py", line 14, in from lapack import get_lapack_funcs File "/home/paulanto/.local/lib/python2.7/site-packages/scipy/linalg/lapack.py", line 14, in from scipy.linalg import flapack ImportError: libgoto2.so: cannot open shared object file: No such file or directory From pav at iki.fi Wed Mar 9 11:56:57 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 9 Mar 2011 16:56:57 +0000 (UTC) Subject: [SciPy-User] building scipy References: <4B771B6B-30EE-4C82-A540-BED4D89EEF91@gmail.com> Message-ID: Wed, 09 Mar 2011 17:42:48 +0100, Paul Anton Letnes wrote: [clip] > As can be seen from __config__.py below, it seems that scipy somehow > knows about this, finds the right folders, and so on. However, when > importing scipy, python+scipy is unable to find libgoto2.so. I am able > to fix the problem by exporting LD_LIBRARY_PATH to include the > BLAS/LAPACK folder, but this seems like a bad way to proceed. More > importantly, why doesn't scipy look for LAPACK and BLAS in the folder > specified in the __config__.py file? Setting LD_LIBRARY_PATH is the standard unix solution in this case. Scipy does not and should not look for the dynamic libraries by itself -- this is the job for the operating system's dynamic linker. In addition to setting LD_LIBRARY_PATH, you can tell the dynamic linker that it should look for the libraries in a specific place by including a "-rpath" flag during compilation. For details, see "man gcc". You can probably include it by setting LDLAST or LDLAST env. variables. From warren.weckesser at enthought.com Wed Mar 9 13:01:51 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Mar 2011 12:01:51 -0600 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: On Wed, Mar 9, 2011 at 10:33 AM, Joe Kington wrote: > > The SAE filtering operation performs two-pass, zero phase > > shift, second-order Butterworth filtering. > > > You may already be aware of it, but that sounds suspiciously like this > cookbook example... http://www.scipy.org/Cookbook/FiltFilt > That cookbook code was added to scipy in r4391 and r5195. scipy.signal has the filtfilt and lfilter_zi function. The basic idea is to apply an IIR (e.g. Butterworth) filter to the signal twice, first forward and then backward. 
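For reference, a minimal sketch of that two-pass, zero-phase, second-order
Butterworth operation (the sample rate, cutoff frequency and test signal below
are made-up placeholders, not values taken from the SAE procedure):

import numpy as np
from scipy.signal import butter, filtfilt

fs = 10000.0                        # sample rate in Hz (made up)
fc = 300.0                          # cutoff frequency in Hz (made up)
b, a = butter(2, fc / (0.5 * fs))   # second-order low-pass Butterworth

t = np.arange(0.0, 0.1, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)

# Forward pass plus backward pass: the phase shifts of the two passes
# cancel, so the filtered signal is not delayed relative to the input.
y = filtfilt(b, a, x)

Keep in mind that the effective attenuation is that of the filter applied
twice, which matters when matching a published filter specification.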
There is a lot of room for improvement in that code. I've been working on rewriting it, and implementing the true Gustafsson algorithm; despite the comment in the code, lfilter_zi does not implement Gustafsson's method. Once the change to github is made, I'll start a branch containing this work. Warren > I'm not sure if that helps any, but I hope it does! > -Joe > > > > On Wed, Mar 9, 2011 at 9:53 AM, Nils Wagner wrote: > >> On Wed, 09 Mar 2011 14:10:43 +0100 >> "Nils Wagner" wrote: >> > Hi all, >> > >> > I am looking for the so-called SAE (Society of >> >Automotive >> > Engineers) Filter functions which are available in >> >abaqus >> > python. >> > >> > The Abaqus CAE User's manual points to >> > >> > >> http://www.nhtsa.gov/Research/Databases+and+Software/Signal+Analysis+Software+for+Windows >> > >> > Has someone implemented those filter functions ? >> > >> > Any pointer would be appreciated. >> > >> > Thanks in advance. >> > >> > Nils >> > >> > >> I should add the following information >> >> The SAE filtering operation performs two-pass, zero phase >> shift, second-order Butterworth filtering. >> >> Is it possible to reproduce such an operation with >> scipy.signal ? >> >> Nils >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nwagner at iam.uni-stuttgart.de Wed Mar 9 13:17:47 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Wed, 09 Mar 2011 19:17:47 +0100 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: On Wed, 9 Mar 2011 12:01:51 -0600 Warren Weckesser wrote: > On Wed, Mar 9, 2011 at 10:33 AM, Joe Kington > wrote: > >> >> The SAE filtering operation performs two-pass, zero >>phase >> >> shift, second-order Butterworth filtering. >> >> >> You may already be aware of it, but that sounds >>suspiciously like this >> cookbook example... >>http://www.scipy.org/Cookbook/FiltFilt >> > > > That cookbook code was added to scipy in r4391 and >r5195. scipy.signal has > the filtfilt and lfilter_zi function. The basic idea is >to apply an IIR > (e.g. Butterworth) filter to the signal twice, first >forward and then > backward. > > There is a lot of room for improvement in that code. > I've been working on > rewriting it, and implementing the true Gustafsson >algorithm; despite the > comment in the code, lfilter_zi does not implement >Gustafsson's method. > Once the change to github is made, I'll start a branch >containing this work. > > Warren > > Warren, Can you provide a reference wrt Gustafsson's algortihm ? Nils From warren.weckesser at enthought.com Wed Mar 9 13:25:09 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Mar 2011 12:25:09 -0600 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: On Wed, Mar 9, 2011 at 12:17 PM, Nils Wagner wrote: > On Wed, 9 Mar 2011 12:01:51 -0600 > Warren Weckesser wrote: > > On Wed, Mar 9, 2011 at 10:33 AM, Joe Kington > > wrote: > > > >> > >> The SAE filtering operation performs two-pass, zero > >>phase > >> > >> shift, second-order Butterworth filtering. > >> > >> > >> You may already be aware of it, but that sounds > >>suspiciously like this > >> cookbook example... 
> >>http://www.scipy.org/Cookbook/FiltFilt > >> > > > > > > That cookbook code was added to scipy in r4391 and > >r5195. scipy.signal has > > the filtfilt and lfilter_zi function. The basic idea is > >to apply an IIR > > (e.g. Butterworth) filter to the signal twice, first > >forward and then > > backward. > > > > There is a lot of room for improvement in that code. > > I've been working on > > rewriting it, and implementing the true Gustafsson > >algorithm; despite the > > comment in the code, lfilter_zi does not implement > >Gustafsson's method. > > Once the change to github is made, I'll start a branch > >containing this work. > > > > Warren > > > > > Warren, > > Can you provide a reference wrt Gustafsson's algortihm ? > The paper is available on his web page; see reference 1996 [A5] in the section "Journal Papers" here: http://www.control.isy.liu.se/~fredrik/pub.html In case that link ever dies, the full reference is: F. Gustafsson. Determining the initial states in forward-backward filtering. * Transactions on Signal Processing*, 46(4):988 - 992, 1996. Warren > > Nils > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben.whale at otago.ac.nz Wed Mar 9 16:46:36 2011 From: ben.whale at otago.ac.nz (Ben Whale) Date: Thu, 10 Mar 2011 10:46:36 +1300 Subject: [SciPy-User] Can scipy.integrate.ode/odeint use the DOPRI integrator? In-Reply-To: References: <4D770CF9.90106@otago.ac.nz> Message-ID: <4D77F53C.3090304@otago.ac.nz> Thanks for the quick reply. Problem fixed after an upgrade to version 0.9. Thanks for the help! Ben On 03/09/2011 07:21 PM, Warren Weckesser wrote: > > > On Wed, Mar 9, 2011 at 12:06 AM, Warren Weckesser > > wrote: > > > > On Tue, Mar 8, 2011 at 11:15 PM, Ben Whale > wrote: > > A friend mentioned that sci.integrate supported DOPRI, but I > can't seem > to find any documentation. Can anyone confirm if this is true? > > > > Yes, it is true; the class scipy.integrate.ode includes the solves > 'dopri5' and dop853'. An example is attached. > > The best documentation for ode appears to be here: > http://docs.scipy.org/scipy/docs/scipy.integrate.ode.ode/#ode > > > > ...which is also the docstring for scipy.integrate.ode. > > > Warren > > > > > Warren > > > Thanks in advance, > Ben > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skapunxter at yahoo.com Wed Mar 9 17:20:12 2011 From: skapunxter at yahoo.com (Randy Williams) Date: Wed, 9 Mar 2011 14:20:12 -0800 (PST) Subject: [SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? Message-ID: <539506.47663.qm@web39409.mail.mud.yahoo.com> Greetings, I'm trying to model the dynamics of a catapult-like mechanism used to launch a projectile, and have a system of ODEs which I need to numerically integrate over time. I am trying to solve for the position of the projectile as well as the other components in my mechanism. At some point in time, the projectile separates from the mechanism, and becomes airborne. The equations governing the system change at that point in time, but because it's a function of position (which i'm solving for), I don't know up front what timespan to integrate over. 
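To make the structure concrete, here is a stripped-down stand-in for the kind
of system I mean (the dynamics and numbers are made up, not my actual
equations):

import numpy as np
from scipy.integrate import odeint

x_sep = 1.0   # position at which the projectile leaves the mechanism

# Phase 1: projectile still driven by the mechanism (fake constant thrust).
def rhs_attached(y, t):
    pos, vel = y
    return [vel, 40.0 - 9.81]

# Phase 2: free flight, gravity only.
def rhs_free(y, t):
    pos, vel = y
    return [vel, -9.81]

# What I would like is to integrate rhs_attached only until pos reaches
# x_sep, and then use the state at that instant as the initial condition
# for rhs_free.
t = np.linspace(0.0, 1.0, 101)
sol = odeint(rhs_attached, [0.0, 0.0], t)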
I would like the ODE solver to stop integrating once the the solution reaches this certain value, and I will use the states at that point to compute the initial conditions to another ODE describing the motion from that time onward. Is there an ODE solver in Python/SciPy which will integrate from the initial t until the solution reaches a certain value, or until a specific condition is met? The ODE solvers in Matlab have "events" which will do this, but I'm trying my best to stick with Python. Thanks, Randy From warren.weckesser at enthought.com Wed Mar 9 17:28:14 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Wed, 9 Mar 2011 16:28:14 -0600 Subject: [SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? In-Reply-To: <539506.47663.qm@web39409.mail.mud.yahoo.com> References: <539506.47663.qm@web39409.mail.mud.yahoo.com> Message-ID: On Wed, Mar 9, 2011 at 4:20 PM, Randy Williams wrote: > Greetings, > > > I'm trying to model the dynamics of a catapult-like mechanism used to > launch a > projectile, and have a system of ODEs which I need to numerically integrate > over > > time. I am trying to solve for the position of the projectile as well as > the > other components in my mechanism. At some point in time, the projectile > separates from the mechanism, and becomes airborne. The equations > governing the > > system change at that point in time, but because it's a function of > position > (which i'm solving for), I don't know up front what timespan to integrate > over. > > I would like the ODE solver to stop integrating once the the solution > reaches > this certain value, and I will use the states at that point to compute the > initial conditions to another ODE describing the motion from that time > onward. > Is there an ODE solver in Python/SciPy which will integrate from the > initial t > until the solution reaches a certain value, or until a specific condition > is > met? The ODE solvers in Matlab have "events" which will do this, but I'm > trying > > my best to stick with Python. > > Randy, None of the ODE solvers in SciPy have event detection (but it is an oft-requested and sorely missed feature). Recently a couple projects were announced that provide python wrappers for the Sundials suite, one of which is python-sundials: http://code.google.com/p/python-sundials/ The very first example that you see on their web page includes event finding. Warren > Thanks, > Randy > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjstickel at vcn.com Wed Mar 9 18:15:01 2011 From: jjstickel at vcn.com (Jonathan Stickel) Date: Wed, 09 Mar 2011 16:15:01 -0700 Subject: [SciPy-User] SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? In-Reply-To: References: Message-ID: <4D7809F5.7010006@vcn.com> On 3/9/11 15:28 , scipy-user-request at scipy.org wrote: > Date: Wed, 9 Mar 2011 14:20:12 -0800 (PST) > From: Randy Williams > Subject: [SciPy-User] Possible to integrate an ODE just until the > solution reaches a certain value? 
> To:scipy-user at scipy.org > Message-ID:<539506.47663.qm at web39409.mail.mud.yahoo.com> > Content-Type: text/plain; charset=us-ascii > > Greetings, > > > I'm trying to model the dynamics of a catapult-like mechanism used to launch a > projectile, and have a system of ODEs which I need to numerically integrate over > > time. I am trying to solve for the position of the projectile as well as the > other components in my mechanism. At some point in time, the projectile > separates from the mechanism, and becomes airborne. The equations governing the > > system change at that point in time, but because it's a function of position > (which i'm solving for), I don't know up front what timespan to integrate over. > > I would like the ODE solver to stop integrating once the the solution reaches > this certain value, and I will use the states at that point to compute the > initial conditions to another ODE describing the motion from that time onward. > Is there an ODE solver in Python/SciPy which will integrate from the initial t > until the solution reaches a certain value, or until a specific condition is > met? The ODE solvers in Matlab have "events" which will do this, but I'm trying > > my best to stick with Python. > > Thanks, > Randy If I understand what you are asking, you can do it with the ode class integrator (scipy.integrate.ode). Below is a short toy example. The key is how you setup your loop (while loop with solution criteria vs. for loop over time). HTH, Jonathan import numpy as np from scipy.integrate import ode from matplotlib.pyplot import * def dfdt(t, f, a,b,c,d): x = f[0] y = f[1] dxdt = np.sin(a*x + b*y) dydt = np.cos(c*x + d*y) return [dxdt, dydt] f0 = [1., 0.] a = 1.0 b = -2.0 c = -1.0 d = 1.0 t = [0.0] dt = 0.1 f = [f0] result = f[0][0] solver = ode(dfdt) solver.set_initial_value(f0,t[0]) solver.set_f_params(a,b,c,d) while solver.successful() and result < 4.0 and t[-1]<100.0: t.append(t[-1]+dt) solver.integrate(t[-1]) f.append(solver.y) result = f[-1][0] From rob.clewley at gmail.com Wed Mar 9 21:10:07 2011 From: rob.clewley at gmail.com (Rob Clewley) Date: Wed, 9 Mar 2011 21:10:07 -0500 Subject: [SciPy-User] SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? In-Reply-To: <4D7809F5.7010006@vcn.com> References: <4D7809F5.7010006@vcn.com> Message-ID: Hi, > I'm trying to model the dynamics of a catapult-like mechanism used to launch a > projectile, and have a system of ODEs which I need to numerically integrate over > time. Does your project require numerical precision or just "looks right" accuracy? >> >> I would like the ODE solver to stop integrating once the the solution reaches >> this certain value, and I will use the states at that point to compute the >> initial conditions to another ODE describing the motion from that time onward. This is often referred to as a hybrid system. >> Is there an ODE solver in Python/SciPy which will integrate from the initial t >> until the solution reaches a certain value, or until a specific condition is >> met? ?The ODE solvers in Matlab have "events" which will do this, but I'm trying >> >> my best to stick with Python. PyDSTool is a pure python implementation of event-based hybrid systems of ODEs (or discrete mappings), but in your case it may only be worthwhile to set up if you need accurate calculations and/or possibly more complex hybrid models. (There's some syntax overhead in setting up hybrid models.) 
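If you'd rather stay with plain scipy.integrate.ode, a rough way to locate the
switching point is to step until the threshold is crossed and then bisect that
last step with a scalar root finder. A minimal sketch (toy dynamics and an
arbitrary threshold, not your catapult equations):

from scipy.integrate import ode
from scipy.optimize import brentq

def rhs(t, y):
    # stand-in dynamics: y = [position, velocity], gravity only
    return [y[1], -9.81]

target = 4.0   # arbitrary event value for y[0]
dt = 0.1       # coarse step, only used to bracket the crossing

r = ode(rhs).set_integrator('dopri5')
r.set_initial_value([0.0, 10.0], 0.0)

t_prev, y_prev = r.t, r.y.copy()
while r.successful() and r.y[0] < target:
    t_prev, y_prev = r.t, r.y.copy()
    r.integrate(r.t + dt)

# refine the crossing time inside the bracketing step [t_prev, r.t]
def overshoot(tc):
    if tc <= t_prev:
        return y_prev[0] - target
    s = ode(rhs).set_integrator('dopri5')
    s.set_initial_value(y_prev, t_prev)
    s.integrate(tc)
    return s.y[0] - target

t_event = brentq(overshoot, t_prev, r.t)
print "event at t =", t_event

That pins the crossing time down to the root finder's tolerance rather than to
the step size, at the cost of a few extra short integrations.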
> > If I understand what you are asking, you can do it with the ode class > integrator (scipy.integrate.ode). ?Below is a short toy example. ?The > key is how you setup your loop (while loop with solution criteria vs. > for loop over time). > Just FYI, the example given using the scipy solver is only fine if you just want a "quick and dirty" demonstration. If you care about accuracy then this will not work: the "result < 4.0" condition does not guarantee that you will stop *at* the point, typically you will stop somewhere close but before the point you wish to switch ODEs. You would have to (inefficiently) set dt to be very small to resolve the event accurately. An efficient and accurate way to do this is in the PyDSTool integrators or in Sundials, but the latter is not pure python. An example of using PyDSTool events to switch between sub-systems is given in IF_squarespike_model.py at http://pydstool.bzr.sourceforge.net/bzr/pydstool/revision/1#tests/IF_squarespike_model.py which demonstrates an "integrate and fire" neuron model with a fixed rectangular pulse for a spike. There are several other demos of hybrid models provided in the package, or you can ask me. -Rob From nwagner at iam.uni-stuttgart.de Thu Mar 10 05:52:20 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Thu, 10 Mar 2011 11:52:20 +0100 Subject: [SciPy-User] Signal processing and filtering functions In-Reply-To: References: Message-ID: On Wed, 9 Mar 2011 12:25:09 -0600 Warren Weckesser wrote: > On Wed, Mar 9, 2011 at 12:17 PM, Nils Wagner > wrote: > >> On Wed, 9 Mar 2011 12:01:51 -0600 >> Warren Weckesser >>wrote: >> > On Wed, Mar 9, 2011 at 10:33 AM, Joe Kington >> > wrote: >> > >> >> >> >> The SAE filtering operation performs two-pass, zero >> >>phase >> >> >> >> shift, second-order Butterworth filtering. >> >> >> >> >> >> You may already be aware of it, but that sounds >> >>suspiciously like this >> >> cookbook example... >> >>http://www.scipy.org/Cookbook/FiltFilt >> >> >> > >> > >> > That cookbook code was added to scipy in r4391 and >> >r5195. scipy.signal has >> > the filtfilt and lfilter_zi function. The basic idea >>is >> >to apply an IIR >> > (e.g. Butterworth) filter to the signal twice, first >> >forward and then >> > backward. >> > >> > There is a lot of room for improvement in that code. >> > I've been working on >> > rewriting it, and implementing the true Gustafsson >> >algorithm; despite the >> > comment in the code, lfilter_zi does not implement >> >Gustafsson's method. >> > Once the change to github is made, I'll start a branch >> >containing this work. >> > >> > Warren >> > >> > >> Warren, >> >> Can you provide a reference wrt Gustafsson's algortihm ? >> > > > > The paper is available on his web page; see reference >1996 [A5] in the > section "Journal Papers" here: > http://www.control.isy.liu.se/~fredrik/pub.html > > In case that link ever dies, the full reference is: > >F. Gustafsson. Determining the initial states in >forward-backward filtering. > * Transactions on Signal Processing*, 46(4):988 - 992, >1996. > > > Warren > > > Hi Warren, IMHO, the docstrings of filtfilt and lfilter_zi are very short. it would be nice if you could add some meaning information. >>> from scipy.signal import lfilter_zi >>> help (lfilter_zi) >>> from scipy.signal import filtfilt >>> help (filtfilt) Help on function filtfilt in module scipy.signal.signaltools: filtfilt(b, a, x) (END) Help on function lfilter_zi in module scipy.signal.signaltools: lfilter_zi(b, a) Thanks in advance. 
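For example, it is not clear to me from the help output alone whether the
intended usage is something like this (just my guess):

import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi, filtfilt

b, a = butter(2, 0.1)            # some second-order low-pass filter
x = np.random.randn(500)         # some signal

# guess 1: lfilter_zi gives initial conditions so that lfilter starts
# in steady state instead of starting from zero
zi = lfilter_zi(b, a)
y, zf = lfilter(b, a, x, zi=zi * x[0])

# guess 2: filtfilt runs the filter forward and backward, giving zero
# phase shift
y2 = filtfilt(b, a, x)

If that is the idea, a sentence or two plus a small example along these lines
in the docstrings would already help a lot.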
Nils From yennifersantiago at gmail.com Thu Mar 10 09:21:47 2011 From: yennifersantiago at gmail.com (Yennifer Santiago) Date: Thu, 10 Mar 2011 14:21:47 +0000 (UTC) Subject: [SciPy-User] =?utf-8?q?Invitaci=C3=B3n_a_conectarnos_en_LinkedIn?= Message-ID: <1838633042.3232509.1299766907421.JavaMail.app@ela4-bed32.prod> LinkedIn ------------ Me gustar?a a?adirte a mi red profesional en LinkedIn. -Yennifer Yennifer Santiago Programadora en Superintendencia de Seguros SUDESEG Venezuela Confirma que conoces a Yennifer Santiago https://www.linkedin.com/e/-3wy1w2-gl3rnod7-5j/isd/2479701935/jjSqDFrl/ -- (c) 2011, LinkedIn Corporation -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjstickel at vcn.com Thu Mar 10 13:28:48 2011 From: jjstickel at vcn.com (Jonathan Stickel) Date: Thu, 10 Mar 2011 11:28:48 -0700 Subject: [SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? In-Reply-To: References: Message-ID: <4D791860.208@vcn.com> On 3/10/11 11:00 , scipy-user-request at scipy.org wrote: > From: Rob Clewley > Subject: Re: [SciPy-User] SciPy-User] Possible to integrate an ODE > just until the solution reaches a certain value? > To: SciPy Users List > > Hi, > >> > I'm trying to model the dynamics of a catapult-like mechanism used to launch a >> > projectile, and have a system of ODEs which I need to numerically integrate over >> > time. > Does your project require numerical precision or just "looks right" accuracy? > >>> >> >>> >> I would like the ODE solver to stop integrating once the the solution reaches >>> >> this certain value, and I will use the states at that point to compute the >>> >> initial conditions to another ODE describing the motion from that time onward. > This is often referred to as a hybrid system. > >>> >> Is there an ODE solver in Python/SciPy which will integrate from the initial t >>> >> until the solution reaches a certain value, or until a specific condition is >>> >> met? ?The ODE solvers in Matlab have "events" which will do this, but I'm trying >>> >> >>> >> my best to stick with Python. > PyDSTool is a pure python implementation of event-based hybrid systems > of ODEs (or discrete mappings), but in your case it may only be > worthwhile to set up if you need accurate calculations and/or possibly > more complex hybrid models. (There's some syntax overhead in setting > up hybrid models.) > >> > >> > If I understand what you are asking, you can do it with the ode class >> > integrator (scipy.integrate.ode). ?Below is a short toy example. ?The >> > key is how you setup your loop (while loop with solution criteria vs. >> > for loop over time). >> > > Just FYI, the example given using the scipy solver is only fine if you > just want a "quick and dirty" demonstration. If you care about > accuracy then this will not work: the "result< 4.0" condition does > not guarantee that you will stop*at* the point, typically you will > stop somewhere close but before the point you wish to switch ODEs. You > would have to (inefficiently) set dt to be very small to resolve the > event accurately. > > An efficient and accurate way to do this is in the PyDSTool > integrators or in Sundials, but the latter is not pure python. You could add some more to my scipy example for a rudimentary dynamic time step to get a somewhat more precise answer: ... 
dt0 = 0.1 dtlow = 1e-5 target = 4.0 while solver.successful() and result < target and t[-1]<100.0: t.append(t[-1]+dt) solver.integrate(t[-1]) f.append(solver.y) prevres = result result = f[-1][0] r = (result - prevres)/dt dt1 = (target - result)/r if dt1 < dt0 and dt1 > 0: if dt1 > dtlow: dt = dt1 else: dt = dtlow else: dt = dt0 ... But I am sure PyDSTool and Sundials are much better tools for this (haven't tried them yet). Jonathan From skapunxter at yahoo.com Thu Mar 10 16:58:27 2011 From: skapunxter at yahoo.com (Randy Williams) Date: Thu, 10 Mar 2011 13:58:27 -0800 (PST) Subject: [SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? In-Reply-To: <4D791860.208@vcn.com> References: <4D791860.208@vcn.com> Message-ID: <824645.88825.qm@web39404.mail.mud.yahoo.com> Thanks for all the responses, guys. I can see I have a few options I need to examine - I'll take a deeper look at PyDSTool and Sundials this weekend, although setting them up may be more complicated than I have time to learn. Thanks again, Randy ----- Original Message ---- From: Jonathan Stickel To: scipy-user at scipy.org Sent: Thu, March 10, 2011 12:28:48 PM Subject: Re: [SciPy-User] Possible to integrate an ODE just until the solution reaches a certain value? On 3/10/11 11:00 , scipy-user-request at scipy.org wrote: > From: Rob Clewley > Subject: Re: [SciPy-User] SciPy-User] Possible to integrate an ODE > just until the solution reaches a certain value? > To: SciPy Users List > > Hi, > >> > I'm trying to model the dynamics of a catapult-like mechanism used to launch >>a >> > projectile, and have a system of ODEs which I need to numerically integrate >>over >> > time. > Does your project require numerical precision or just "looks right" accuracy? > >>> >> >>> >> I would like the ODE solver to stop integrating once the the solution >>>reaches >>> >> this certain value, and I will use the states at that point to compute >the >>> >> initial conditions to another ODE describing the motion from that time >>>onward. > This is often referred to as a hybrid system. > >>> >> Is there an ODE solver in Python/SciPy which will integrate from the >>>initial t >>> >> until the solution reaches a certain value, or until a specific condition >>>is >>> >> met? ?The ODE solvers in Matlab have "events" which will do this, but I'm >>>trying >>> >> >>> >> my best to stick with Python. > PyDSTool is a pure python implementation of event-based hybrid systems > of ODEs (or discrete mappings), but in your case it may only be > worthwhile to set up if you need accurate calculations and/or possibly > more complex hybrid models. (There's some syntax overhead in setting > up hybrid models.) > >> > >> > If I understand what you are asking, you can do it with the ode class >> > integrator (scipy.integrate.ode). ?Below is a short toy example. ?The >> > key is how you setup your loop (while loop with solution criteria vs. >> > for loop over time). >> > > Just FYI, the example given using the scipy solver is only fine if you > just want a "quick and dirty" demonstration. If you care about > accuracy then this will not work: the "result< 4.0" condition does > not guarantee that you will stop*at* the point, typically you will > stop somewhere close but before the point you wish to switch ODEs. You > would have to (inefficiently) set dt to be very small to resolve the > event accurately. > > An efficient and accurate way to do this is in the PyDSTool > integrators or in Sundials, but the latter is not pure python. 
You could add some more to my scipy example for a rudimentary dynamic time step to get a somewhat more precise answer: ... dt0 = 0.1 dtlow = 1e-5 target = 4.0 while solver.successful() and result < target and t[-1]<100.0: t.append(t[-1]+dt) solver.integrate(t[-1]) f.append(solver.y) prevres = result result = f[-1][0] r = (result - prevres)/dt dt1 = (target - result)/r if dt1 < dt0 and dt1 > 0: if dt1 > dtlow: dt = dt1 else: dt = dtlow else: dt = dt0 ... But I am sure PyDSTool and Sundials are much better tools for this (haven't tried them yet). Jonathan _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From wkerzendorf at googlemail.com Fri Mar 11 07:11:26 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Fri, 11 Mar 2011 23:11:26 +1100 Subject: [SciPy-User] rbf interpolating Message-ID: <4D7A116E.3050608@gmail.com> Dear all I am currently trying to use interpolate.Rbf to interpolate my data. My data looks like that: (array([ 9000., 9000., 9000., ..., 12000., 12000., 12000.]), array([ 3.5, 3.5, 3.5, ..., 4.5, 4.5, 4.5]), array([-2.5, -2.5, -2.5, ..., 0.5, 0.5, 0.5]), array([ 150., 150., 150., ..., 250., 250., 250.]), array([ 0.5, 0.7, 0.9, ..., 0.9, 1.1, 1.3]), array([[ 0.12608476, 0.12826188, 0.13127538, ..., 0.29928203, 0.27434822, 0.25906699], [ 0.06169096, 0.06278402, 0.06430721, ..., 0.44264917, 0.40578164, 0.38318576], [ 0.03013017, 0.03067692, 0.03144345, ..., 0.65333767, 0.59893744, 0.56559473], ..., [ 0.07080239, 0.07284835, 0.07565985, ..., 0.44098662, 0.40915711, 0.39057291], [ 0.0345885 , 0.03560238, 0.037001 , ..., 0.65085347, 0.60389154, 0.57647133], [ 0.01686656, 0.0173676 , 0.01806123, ..., 0.95862692, 0.88947996, 0.84910533]])) So the first 5 items are the coordinates. the last is the data (which is not only a single number but is actually a 1D array. If I do this with griddata it works. I tried giving it to Rbf by doing Rbf(*data), but I get the following error: /Library/Python/2.6/site-packages/scipy/interpolate/rbf.pyc in __init__(self, *args, **kwargs) 196 197 self.A = self._init_function(r) - eye(self.N)*self.smooth --> 198 self.nodes = linalg.solve(self.A, self.di) 199 200 def _call_norm(self, x1, x2): /Library/Python/2.6/site-packages/scipy/linalg/basic.pyc in solve(a, b, sym_pos, lower, overwrite_a, overwrite_b, debug) 49 raise ValueError('expected square matrix') 50 if a1.shape[0] != b1.shape[0]: ---> 51 raise ValueError('incompatible dimensions') 52 overwrite_a = overwrite_a or (a1 is not a and not hasattr(a,'__array__')) 53 overwrite_b = overwrite_b or (b1 is not b and not hasattr(b,'__array__')) ValueError: incompatible dimensions ----- Your help is greatly appreciated, Wolfgang From millman at berkeley.edu Sun Mar 13 00:22:09 2011 From: millman at berkeley.edu (Jarrod Millman) Date: Sat, 12 Mar 2011 21:22:09 -0800 Subject: [SciPy-User] Call for GSoC 2011 SciPy mentors Message-ID: Hi, It is time to start preparing for the 2011 Google Summer of Code (SoC). As in the past, we will participate in SoC with the Python Software Foundation (PSF) as our mentoring organization. The PSF has requested that every project, which wishes to participate in the SoC, provide a list of at least *three* potential mentors. If you are interested and willing to potentially mentor someone this summer to work on SciPy, please send me the following information by Monday evening: Name, Email, Phone, Link_ID, and whether you want to mentor a NumPy or SciPy project. 
You can find additional information on the 2011 SoC homepage: http://socghop.appspot.com/ Here is the PSF SoC page: http://wiki.python.org/moin/SummerOfCode Please start thinking about potential projects and add them to the SoC ideas page: http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas Thanks, Jarrod From tmp50 at ukr.net Sun Mar 13 01:58:20 2011 From: tmp50 at ukr.net (Dmitrey) Date: Sun, 13 Mar 2011 08:58:20 +0200 Subject: [SciPy-User] Call for GSoC 2011 SciPy mentors In-Reply-To: References: Message-ID: Hi, It is time to start preparing for the 2011 Google Summer of Code (SoC). As in the past, we will participate in SoC with the Python Software Foundation (PSF) as our mentoring organization. The PSF has requested that every project, which wishes to participate in the SoC, provide a list of at least *three* potential mentors. If you are interested and willing to potentially mentor someone this summer to work on SciPy, please send me the following information by Monday evening: Name, Email, Phone, Link_ID, and whether you want to mentor a NumPy or SciPy project. You can find additional information on the 2011 SoC homepage: http://socghop.appspot.com/ Here is the PSF SoC page: http://wiki.python.org/moin/SummerOfCode Please start thinking about potential projects and add them to the SoC ideas page: http://projects.scipy.org/scipy/wiki/SummerofCodeIdeas That page leads to year 2010 and seems to be read-only. I would suggest linking numpy with ACML and make result available from easy_install and Linux ap-get / yum. Lots of info about linking ACML with CBLAS had been published in internet (e.g. http://forums.amd.com/forum/messageview.cfm?catid=217&threadid=89362&enterthread=y), but somehow the task (I would declare as highest-priority for numpy) is not done yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Sun Mar 13 15:52:49 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Sun, 13 Mar 2011 20:52:49 +0100 Subject: [SciPy-User] scipy.stats.norm strange doc string Message-ID: Hi, The doc string of scipy.stats.norm tells me that the location and scale parameters are array-like. However, when I try to pass arrays to the loc and scale keywords I get an error. Specifically: In [1]: from scipy.stats import norm In [2]: import numpy as np In [3]: mu = np.array([1,1]) In [4]: simga = np.array([1,1]) In [5]: x = [0,1,2,3] In [6]: norm.pdf(x, loc = mu, scale = simga) results in : ValueError: shape mismatch: objects cannot be broadcast to a single shape I do understand how to resolve this problem, but for my specific purpose I would have liked to pass mu and sigma as arrays, that is, I would have liked to achieve tau = np.zeros([g,m]) for i in range(g): tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) in one pass. BTW: I am using this code to fit a set of normal distributions to a given (quite) general distribution function by using the EM algorithm. Is this already coded somewhere in scipy? If not, is somebody interested in me making this available on the scipy cookbook? bye Nicky From dplepage at gmail.com Sun Mar 13 16:18:58 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Sun, 13 Mar 2011 16:18:58 -0400 Subject: [SciPy-User] scipy.stats.norm strange doc string In-Reply-To: References: Message-ID: norm.pdf takes x, loc, and scale, and returns y such that y[i] is the value of a normal pdf with mean loc[i] and scale scale[i] evaluated at x[i]. 
If x, loc, or scale is a scalar, it's treated as though you passed in an array containing all the same element. Your error is because norm.pdf requires that if any of x, loc, and scale aren't scalars, they should be arrays *of the same length*. For example, the following three are (roughly) equivalent: norm.pdf([0,1,2], loc = 1, scale = 2) norm.pdf([0,1,2], loc = [1,1,1], scale = [2,2,2]) [norm.pdf(x,l,s) for (x,l,s) in zip([0,1,2], [1,1,1], [2,2,2])] but norm.pdf([0,1,2], loc = [1,1], scale = 2) will produce a shape mismatch because a length-3 array can't be broadcast to the same shape as a length-3 array. -- Dan Lepage On Sun, Mar 13, 2011 at 3:52 PM, nicky van foreest wrote: > Hi, > > The doc string of scipy.stats.norm tells me that the location and > scale parameters are array-like. However, when I try to pass arrays to > the loc and scale keywords I get an error. Specifically: > > > In [1]: from scipy.stats import norm > > In [2]: import numpy as np > > In [3]: mu = np.array([1,1]) > > In [4]: simga = np.array([1,1]) > > In [5]: x = [0,1,2,3] > > In [6]: norm.pdf(x, loc = mu, scale = simga) > > > results in : > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > I do understand how to resolve this problem, but for my specific > purpose I would have liked to pass mu and sigma as arrays, that is, I > would have liked to achieve > > ? ? ? ?tau = np.zeros([g,m]) > ? ? ? ?for i in range(g): > ? ? ? ? ? ?tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) > > > in one pass. > > BTW: > > I am using this code to fit a set of normal distributions to a given > (quite) general distribution function by using the EM algorithm. Is > this already coded somewhere in scipy? If not, is somebody interested > in me making this available on the scipy cookbook? > > bye > > Nicky > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From dplepage at gmail.com Sun Mar 13 16:24:16 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Sun, 13 Mar 2011 16:24:16 -0400 Subject: [SciPy-User] scipy.stats.norm strange doc string In-Reply-To: References: Message-ID: Also, I believe EM for fitting Gaussian Mixture Models is implemented in scikits.learn; check out Hope that helps! -- Dan Lepage On Sun, Mar 13, 2011 at 3:52 PM, nicky van foreest wrote: > Hi, > > The doc string of scipy.stats.norm tells me that the location and > scale parameters are array-like. However, when I try to pass arrays to > the loc and scale keywords I get an error. Specifically: > > > In [1]: from scipy.stats import norm > > In [2]: import numpy as np > > In [3]: mu = np.array([1,1]) > > In [4]: simga = np.array([1,1]) > > In [5]: x = [0,1,2,3] > > In [6]: norm.pdf(x, loc = mu, scale = simga) > > > results in : > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > I do understand how to resolve this problem, but for my specific > purpose I would have liked to pass mu and sigma as arrays, that is, I > would have liked to achieve > > ? ? ? ?tau = np.zeros([g,m]) > ? ? ? ?for i in range(g): > ? ? ? ? ? ?tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) > > > in one pass. > > BTW: > > I am using this code to fit a set of normal distributions to a given > (quite) general distribution function by using the EM algorithm. Is > this already coded somewhere in scipy? If not, is somebody interested > in me making this available on the scipy cookbook? 
> > bye > > Nicky > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From warren.weckesser at enthought.com Sun Mar 13 16:25:01 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 13 Mar 2011 15:25:01 -0500 Subject: [SciPy-User] scipy.stats.norm strange doc string In-Reply-To: References: Message-ID: On Sun, Mar 13, 2011 at 2:52 PM, nicky van foreest wrote: > Hi, > > The doc string of scipy.stats.norm tells me that the location and > scale parameters are array-like. However, when I try to pass arrays to > the loc and scale keywords I get an error. Specifically: > > > In [1]: from scipy.stats import norm > > In [2]: import numpy as np > > In [3]: mu = np.array([1,1]) > > In [4]: simga = np.array([1,1]) > > In [5]: x = [0,1,2,3] > > In [6]: norm.pdf(x, loc = mu, scale = simga) > > > results in : > > ValueError: shape mismatch: objects cannot be broadcast to a single shape > > I do understand how to resolve this problem, but for my specific > purpose I would have liked to pass mu and sigma as arrays, that is, I > would have liked to achieve > > tau = np.zeros([g,m]) > for i in range(g): > tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) > > > in one pass. > All the arguments, including x, are broadcast, so you have ensure that their shapes are all compatible. This can be accomplished with some judicious use of np.newaxis. Here's a complete version of your snippet, with a "loop" version and a broadcasting version: ----- import numpy as np from scipy.stats import norm mu = np.array([1.0, 1.25]) sigma = np.array([4.0, 5.0]) p = np.array([0.25, 0.75]) x = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) g = p.shape[0] m = x.shape[0] # Compute tau in a loop. tau = np.empty([g,m]) for i in range(g): tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) # Compute tau with broadcasting. tau2 = p[:,np.newaxis] * norm.pdf(x, loc=mu[:,np.newaxis], scale=sigma[:,np.newaxis]) print "tau:" print tau print print "tau2:" print tau2 ----- Warren > BTW: > > I am using this code to fit a set of normal distributions to a given > (quite) general distribution function by using the EM algorithm. Is > this already coded somewhere in scipy? If not, is somebody interested > in me making this available on the scipy cookbook? > > bye > > Nicky > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwgoodman at gmail.com Sun Mar 13 17:32:39 2011 From: kwgoodman at gmail.com (Keith Goodman) Date: Sun, 13 Mar 2011 14:32:39 -0700 Subject: [SciPy-User] Bottleneck 0.4.1 In-Reply-To: References: Message-ID: On Tue, Mar 8, 2011 at 6:06 PM, Wes McKinney wrote: > One random question. Any idea on the long import time: > > $ time python -c "import bottleneck" > > real ? ?0m0.712s > user ? ?0m0.546s > sys ? ? 0m0.114s > $ time python -c "import numpy" > > real ? ?0m0.142s > user ? ?0m0.090s > sys ? ? 0m0.049s > $ time python -c "import scipy" > > real ? ?0m0.201s > user ? ?0m0.132s > sys ? ? 0m0.066s Bottleneck imports are now 3x faster. I switched to a lazy import of scipy (Bottleneck rarely uses scipy). 
Before: $ time python -c "import bottleneck" real 0m0.196s user 0m0.150s sys 0m0.040s After: $ time python -c "import bottleneck" real 0m0.061s user 0m0.010s sys 0m0.050s Does adding Bottleneck to your package increase the import time by 0.06 seconds? No, not if your package imports numpy: $ time python -c "import numpy; import bottleneck" real 0m0.060s user 0m0.020s sys 0m0.030s I used this pattern for lazy imports: email = None def parse_email(): global email if email is None: import email which I found here: http://wiki.python.org/moin/PythonSpeed/PerformanceTips#Import_Statement_Overhead Thanks, Wes, for the report. From vanforeest at gmail.com Mon Mar 14 04:04:14 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Mon, 14 Mar 2011 09:04:14 +0100 Subject: [SciPy-User] scipy.stats.norm strange doc string In-Reply-To: References: Message-ID: Hi, Thanks for your answers. Very helpful. Nicky On 13 March 2011 21:25, Warren Weckesser wrote: > > > On Sun, Mar 13, 2011 at 2:52 PM, nicky van foreest > wrote: >> >> Hi, >> >> The doc string of scipy.stats.norm tells me that the location and >> scale parameters are array-like. However, when I try to pass arrays to >> the loc and scale keywords I get an error. Specifically: >> >> >> In [1]: from scipy.stats import norm >> >> In [2]: import numpy as np >> >> In [3]: mu = np.array([1,1]) >> >> In [4]: simga = np.array([1,1]) >> >> In [5]: x = [0,1,2,3] >> >> In [6]: norm.pdf(x, loc = mu, scale = simga) >> >> >> results in : >> >> ValueError: shape mismatch: objects cannot be broadcast to a single shape >> >> I do understand how to resolve this problem, but for my specific >> purpose I would have liked to pass mu and sigma as arrays, that is, I >> would have liked to achieve >> >> ? ? ? ?tau = np.zeros([g,m]) >> ? ? ? ?for i in range(g): >> ? ? ? ? ? ?tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) >> >> >> in one pass. > > > All the arguments, including x, are broadcast, so you have ensure that their > shapes are all compatible. ? This can be accomplished with some judicious > use of np.newaxis.? Here's a complete version of your snippet, with a "loop" > version and a broadcasting version: > > ----- > import numpy as np > from scipy.stats import norm > > mu = np.array([1.0, 1.25]) > sigma = np.array([4.0, 5.0]) > p = np.array([0.25, 0.75]) > x = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) > > g = p.shape[0] > m = x.shape[0] > > # Compute tau in a loop. > tau = np.empty([g,m]) > for i in range(g): > ??? tau[i] = p[i]*norm.pdf(x, loc=mu[i], scale = sigma[i]) > > # Compute tau with broadcasting. > tau2 = p[:,np.newaxis] * norm.pdf(x, loc=mu[:,np.newaxis], > scale=sigma[:,np.newaxis]) > > print "tau:" > print tau > > print > print "tau2:" > print tau2 > ----- > > Warren > >> >> BTW: >> >> I am using this code to fit a set of normal distributions to a given >> (quite) general distribution function by using the EM algorithm. Is >> this already coded somewhere in scipy? If not, is somebody interested >> in me making this available on the scipy cookbook? 
>> >> bye >> >> Nicky >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From denis-bz-gg at t-online.de Mon Mar 14 06:16:53 2011 From: denis-bz-gg at t-online.de (denis) Date: Mon, 14 Mar 2011 03:16:53 -0700 (PDT) Subject: [SciPy-User] rbf interpolating In-Reply-To: <4D7A116E.3050608@gmail.com> References: <4D7A116E.3050608@gmail.com> Message-ID: Wolfgang, Rbf seems to do only scalar z, the source has self.di = asarray(args[-1]).flatten() (I wouldn't recommend RBF anyway, because - arbitrary choice of function = gaussian ... - one epsilon can't adapt to data fine here / coarse there - the matrix A can be near-singular - global, O(N) for each interpolation.) I like (advt) the combination of scipy.spatial.cKDTree and inverse distance weighting under http://stackoverflow.com/questions/3104781/inverse-distance-weighted-idw-interpolation-with-python cheers -- denis On Mar 11, 1:11?pm, Wolfgang Kerzendorf wrote: > Dear all > > I am currently trying to use interpolate.Rbf to interpolate my data. My > data looks like that: ... > So the first 5 items are the coordinates. the last is the data (which is > not only a single number but is actually a 1D array. If I do this with > griddata it works. > > I tried giving it to Rbf by doing Rbf(*data), but I get the following error: > > ValueError: incompatible dimensions From wkerzendorf at googlemail.com Mon Mar 14 10:03:20 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Tue, 15 Mar 2011 01:03:20 +1100 Subject: [SciPy-User] rbf interpolating In-Reply-To: References: <4D7A116E.3050608@gmail.com> Message-ID: <4D7E2028.9000308@gmail.com> Thanks. that explains it. I'm now using the idw. I think it should become part of the interpolate package. it seems very well written... cheers W On 14/03/11 9:16 PM, denis wrote: > Wolfgang, > Rbf seems to do only scalar z, the source has > self.di = asarray(args[-1]).flatten() > > (I wouldn't recommend RBF anyway, because > - arbitrary choice of function = gaussian ... > - one epsilon can't adapt to data fine here / coarse there > - the matrix A can be near-singular > - global, O(N) for each interpolation.) > > I like (advt) the combination of scipy.spatial.cKDTree and inverse > distance weighting under > http://stackoverflow.com/questions/3104781/inverse-distance-weighted-idw-interpolation-with-python > > cheers > -- denis > > On Mar 11, 1:11 pm, Wolfgang Kerzendorf > wrote: >> Dear all >> >> I am currently trying to use interpolate.Rbf to interpolate my data. My >> data looks like that: > ... > >> So the first 5 items are the coordinates. the last is the data (which is >> not only a single number but is actually a 1D array. If I do this with >> griddata it works. >> >> I tried giving it to Rbf by doing Rbf(*data), but I get the following error: >> ValueError: incompatible dimensions > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user From wesmckinn at gmail.com Mon Mar 14 11:13:27 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 14 Mar 2011 11:13:27 -0400 Subject: [SciPy-User] fast small matrix multiplication with cython? 
In-Reply-To: References: Message-ID: On Mon, Mar 14, 2011 at 11:12 AM, Wes McKinney wrote: > On Thu, Dec 9, 2010 at 5:01 PM, Skipper Seabold wrote: >> On Thu, Dec 9, 2010 at 4:33 PM, Skipper Seabold wrote: >>> On Wed, Dec 8, 2010 at 11:28 PM, ? wrote: >>>>> >>>>> It looks like I don't save too much time with just Python/scipy >>>>> optimizations. ?Apparently ~75% of the time is spent in l-bfgs-b, >>>>> judging by its user time output and the profiler's CPU time output(?). >>>>> ?Non-cython versions: >>>>> >>>>> Brief and rough profiling on my laptop for ARMA(2,2) with 1000 >>>>> observations. ?Optimization uses fmin_l_bfgs_b with m = 12 and iprint >>>>> = 0. >>>> >>>> Completely different idea: How costly are the numerical derivatives in l-bfgs-b? >>>> With l-bfgs-b, you should be able to replace the derivatives with the >>>> complex step derivatives that calculate the loglike function value and >>>> the derivatives in one iteration. >>>> >>> >>> I couldn't figure out how to use it without some hacks. ?The >>> fmin_l_bfgs_b will call both f and fprime as (x, *args), but >>> approx_fprime or approx_fprime_cs need actually approx_fprime(x, func, >>> args=args) and call func(x, *args). ?I changed fmin_l_bfgs_b to make >>> the call like this for the gradient, and I get (different computer) >>> >>> >>> Using approx_fprime_cs >>> ----------------------------------- >>> ? ? ? ? 861609 function calls (861525 primitive calls) in 3.337 CPU seconds >>> >>> ? Ordered by: internal time >>> >>> ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) >>> ? ? ? 70 ? ?1.942 ? ?0.028 ? ?3.213 ? ?0.046 kalmanf.py:504(loglike) >>> ? 840296 ? ?1.229 ? ?0.000 ? ?1.229 ? ?0.000 {numpy.core._dotblas.dot} >>> ? ? ? 56 ? ?0.038 ? ?0.001 ? ?0.038 ? ?0.001 {numpy.linalg.lapack_lite.zgesv} >>> ? ? ?270 ? ?0.025 ? ?0.000 ? ?0.025 ? ?0.000 {sum} >>> ? ? ? 90 ? ?0.019 ? ?0.000 ? ?0.019 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} >>> ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 >>> function_base.py:494(asarray_chkfinite) >>> ? ? ?162 ? ?0.012 ? ?0.000 ? ?0.014 ? ?0.000 arima.py:117(_transparams) >>> >>> >>> Using approx_grad = True >>> --------------------------------------- >>> ? ? ? ? 1097454 function calls (1097370 primitive calls) in 3.615 CPU seconds >>> >>> ? Ordered by: internal time >>> >>> ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) >>> ? ? ? 90 ? ?2.316 ? ?0.026 ? ?3.489 ? ?0.039 kalmanf.py:504(loglike) >>> ?1073757 ? ?1.164 ? ?0.000 ? ?1.164 ? ?0.000 {numpy.core._dotblas.dot} >>> ? ? ?270 ? ?0.025 ? ?0.000 ? ?0.025 ? ?0.000 {sum} >>> ? ? ? 90 ? ?0.020 ? ?0.000 ? ?0.020 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} >>> ? ? ?182 ? ?0.014 ? ?0.000 ? ?0.016 ? ?0.000 arima.py:117(_transparams) >>> ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 >>> function_base.py:494(asarray_chkfinite) >>> ? ? ? 46 ? ?0.008 ? ?0.000 ? ?0.023 ? ?0.000 decomp_svd.py:12(svd) >>> ? ? ? 23 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 {method 'var' of >>> 'numpy.ndarray' objects} >>> >>> >>> Definitely less function calls and a little faster, but I had to write >>> some hacks to get it to work. >>> >> >> This is more like it! ?With fast recursions in Cython: >> >> ? ? ? ? 15186 function calls (15102 primitive calls) in 0.750 CPU seconds >> >> ? Ordered by: internal time >> >> ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) >> ? ? ? 18 ? ?0.622 ? ?0.035 ? ?0.625 ? ?0.035 >> kalman_loglike.pyx:15(kalman_loglike) >> ? ? ?270 ? ?0.024 ? ?0.000 ? ?0.024 ? ?0.000 {sum} >> ? ? ? 90 ? ?0.019 ? 
?0.000 ? ?0.019 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} >> ? ? ?156 ? ?0.013 ? ?0.000 ? ?0.013 ? ?0.000 {numpy.core._dotblas.dot} >> ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 >> function_base.py:494(asarray_chkfinite) >> ? ? ?110 ? ?0.008 ? ?0.000 ? ?0.010 ? ?0.000 arima.py:118(_transparams) >> ? ? ? 46 ? ?0.008 ? ?0.000 ? ?0.023 ? ?0.000 decomp_svd.py:12(svd) >> ? ? ? 23 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 {method 'var' of >> 'numpy.ndarray' objects} >> ? ? ? 26 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 tsatools.py:109(lagmat) >> ? ? ? 90 ? ?0.004 ? ?0.000 ? ?0.042 ? ?0.000 arima.py:197(loglike_css) >> ? ? ? 81 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 >> {numpy.core.multiarray._fastCopyAndTranspose} >> >> I can live with this for now. >> >> Skipper >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > Revisiting this topic from a few months ago. I was able to get Tokyo > (see github.com/tokyo/tokyo and > http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/) > to build against (ATLAS? or MKL?) in EPD 7.0 with some modifications > to the setup.py, see my fork on github: > > https://github.com/wesm/tokyo/blob/master/setup.py > > Someone who knows a bit more about the linking against multiple > versions of BLAS/ATLAS/MKL might be able to make it work more > generally-- I basically just poked around numpy/core/setup.py > > The speedups for small matrices suggest it would probably be > worthwhile to have in our toolbelt: > > $ python double_speed.py > > > > SPEED TEST BLAS 2 > > Double precision: Vector size = 4 ?Matrix size = 4x4 > > numpy.dot +: ? ? ? ?312 kc/s > dgemv: ? ? ? ? ? ? 2857 kc/s ? 9.1x > dgemv3: ? ? ? ? ? ?9091 kc/s ?29.1x > dgemv5: ? ? ? ? ? ?9375 kc/s ?30.0x > dgemv6: ? ? ? ? ? ?9375 kc/s ?30.0x > dgemv_: ? ? ? ? ? 11321 kc/s ?36.2x > > numpy.outer: ? ? ? ?118 kc/s > dger: ? ? ? ? ? ? ?2344 kc/s ?19.8x > dger3: ? ? ? ? ? ? 7895 kc/s ?66.7x > dger4: ? ? ? ? ? ? 8108 kc/s ?68.5x > dger_: ? ? ? ? ? ? 9449 kc/s ?79.8x > > Double precision: Vector size = 15 ?Matrix size = 15x15 > > numpy.dot +: ? ? ? ?296 kc/s > dgemv: ? ? ? ? ? ? 2000 kc/s ? 6.8x > dgemv3: ? ? ? ? ? ?4444 kc/s ?15.0x > dgemv5: ? ? ? ? ? ?5000 kc/s ?16.9x > dgemv6: ? ? ? ? ? ?4615 kc/s ?15.6x > dgemv_: ? ? ? ? ? ?5217 kc/s ?17.6x > > numpy.outer: ? ? ? ? 89 kc/s > dger: ? ? ? ? ? ? ?1143 kc/s ?12.9x > dger3: ? ? ? ? ? ? 2330 kc/s ?26.2x > dger4: ? ? ? ? ? ? 2667 kc/s ?30.0x > dger_: ? ? ? ? ? ? 2824 kc/s ?31.8x > > Double precision: Vector size = 30 ?Matrix size = 30x30 > > numpy.dot +: ? ? ? ?261 kc/s > dgemv: ? ? ? ? ? ? 1271 kc/s ? 4.9x > dgemv3: ? ? ? ? ? ?2676 kc/s ?10.3x > dgemv5: ? ? ? ? ? ?2311 kc/s ? 8.9x > dgemv6: ? ? ? ? ? ?2676 kc/s ?10.3x > dgemv_: ? ? ? ? ? ?2421 kc/s ? 9.3x > > numpy.outer: ? ? ? ? 64 kc/s > dger: ? ? ? ? ? ? ? 782 kc/s ?12.2x > dger3: ? ? ? ? ? ? 1412 kc/s ?22.1x > dger4: ? ? ? ? ? ? 1182 kc/s ?18.5x > dger_: ? ? ? ? ? ? 1356 kc/s ?21.2x > > > SPEED TEST BLAS 3 > > Double precision: Vector size = 4 ?Matrix size = 4x4 > > numpy.dot: ? ? ? ?845 kc/s > dgemm: ? ? ? ? ? 2259 kc/s ? 2.7x > dgemm3: ? ? ? ? ?4808 kc/s ? 5.7x > dgemm5: ? ? ? ? ?4934 kc/s ? 5.8x > dgemm7: ? ? ? ? ?4808 kc/s ? 5.7x > dgemm_: ? ? ? ? ?5357 kc/s ? 6.3x > > Double precision: Vector size = 15 ?Matrix size = 15x15 > > numpy.dot: ? ? ? ?290 kc/s > dgemm: ? ? ? ? ? ?476 kc/s ? 1.6x > dgemm3: ? ? ? ? ? 580 kc/s ? 2.0x > dgemm5: ? ? ? ? ? 606 kc/s ? 2.1x > dgemm7: ? ? ? ? ? 580 kc/s ? 
2.0x > dgemm_: ? ? ? ? ? 606 kc/s ? 2.1x > > Double precision: Vector size = 30 ?Matrix size = 30x30 > > numpy.dot: ? ? ? ?108 kc/s > dgemm: ? ? ? ? ? ?128 kc/s ? 1.2x > dgemm3: ? ? ? ? ? 145 kc/s ? 1.3x > dgemm5: ? ? ? ? ? 139 kc/s ? 1.3x > dgemm7: ? ? ? ? ? 145 kc/s ? 1.3x > dgemm_: ? ? ? ? ? 145 kc/s ? 1.3x > I should add that it worked on both OS X and Ubuntu-- have not tested on Windows (yet) From wesmckinn at gmail.com Mon Mar 14 11:12:22 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Mon, 14 Mar 2011 11:12:22 -0400 Subject: [SciPy-User] fast small matrix multiplication with cython? In-Reply-To: References: Message-ID: On Thu, Dec 9, 2010 at 5:01 PM, Skipper Seabold wrote: > On Thu, Dec 9, 2010 at 4:33 PM, Skipper Seabold wrote: >> On Wed, Dec 8, 2010 at 11:28 PM, ? wrote: >>>> >>>> It looks like I don't save too much time with just Python/scipy >>>> optimizations. ?Apparently ~75% of the time is spent in l-bfgs-b, >>>> judging by its user time output and the profiler's CPU time output(?). >>>> ?Non-cython versions: >>>> >>>> Brief and rough profiling on my laptop for ARMA(2,2) with 1000 >>>> observations. ?Optimization uses fmin_l_bfgs_b with m = 12 and iprint >>>> = 0. >>> >>> Completely different idea: How costly are the numerical derivatives in l-bfgs-b? >>> With l-bfgs-b, you should be able to replace the derivatives with the >>> complex step derivatives that calculate the loglike function value and >>> the derivatives in one iteration. >>> >> >> I couldn't figure out how to use it without some hacks. ?The >> fmin_l_bfgs_b will call both f and fprime as (x, *args), but >> approx_fprime or approx_fprime_cs need actually approx_fprime(x, func, >> args=args) and call func(x, *args). ?I changed fmin_l_bfgs_b to make >> the call like this for the gradient, and I get (different computer) >> >> >> Using approx_fprime_cs >> ----------------------------------- >> ? ? ? ? 861609 function calls (861525 primitive calls) in 3.337 CPU seconds >> >> ? Ordered by: internal time >> >> ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) >> ? ? ? 70 ? ?1.942 ? ?0.028 ? ?3.213 ? ?0.046 kalmanf.py:504(loglike) >> ? 840296 ? ?1.229 ? ?0.000 ? ?1.229 ? ?0.000 {numpy.core._dotblas.dot} >> ? ? ? 56 ? ?0.038 ? ?0.001 ? ?0.038 ? ?0.001 {numpy.linalg.lapack_lite.zgesv} >> ? ? ?270 ? ?0.025 ? ?0.000 ? ?0.025 ? ?0.000 {sum} >> ? ? ? 90 ? ?0.019 ? ?0.000 ? ?0.019 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} >> ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 >> function_base.py:494(asarray_chkfinite) >> ? ? ?162 ? ?0.012 ? ?0.000 ? ?0.014 ? ?0.000 arima.py:117(_transparams) >> >> >> Using approx_grad = True >> --------------------------------------- >> ? ? ? ? 1097454 function calls (1097370 primitive calls) in 3.615 CPU seconds >> >> ? Ordered by: internal time >> >> ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) >> ? ? ? 90 ? ?2.316 ? ?0.026 ? ?3.489 ? ?0.039 kalmanf.py:504(loglike) >> ?1073757 ? ?1.164 ? ?0.000 ? ?1.164 ? ?0.000 {numpy.core._dotblas.dot} >> ? ? ?270 ? ?0.025 ? ?0.000 ? ?0.025 ? ?0.000 {sum} >> ? ? ? 90 ? ?0.020 ? ?0.000 ? ?0.020 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} >> ? ? ?182 ? ?0.014 ? ?0.000 ? ?0.016 ? ?0.000 arima.py:117(_transparams) >> ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 >> function_base.py:494(asarray_chkfinite) >> ? ? ? 46 ? ?0.008 ? ?0.000 ? ?0.023 ? ?0.000 decomp_svd.py:12(svd) >> ? ? ? 23 ? ?0.004 ? ?0.000 ? ?0.004 ? 
?0.000 {method 'var' of >> 'numpy.ndarray' objects} >> >> >> Definitely less function calls and a little faster, but I had to write >> some hacks to get it to work. >> > > This is more like it! ?With fast recursions in Cython: > > ? ? ? ? 15186 function calls (15102 primitive calls) in 0.750 CPU seconds > > ? Ordered by: internal time > > ? ncalls ?tottime ?percall ?cumtime ?percall filename:lineno(function) > ? ? ? 18 ? ?0.622 ? ?0.035 ? ?0.625 ? ?0.035 > kalman_loglike.pyx:15(kalman_loglike) > ? ? ?270 ? ?0.024 ? ?0.000 ? ?0.024 ? ?0.000 {sum} > ? ? ? 90 ? ?0.019 ? ?0.000 ? ?0.019 ? ?0.000 {numpy.linalg.lapack_lite.dgesdd} > ? ? ?156 ? ?0.013 ? ?0.000 ? ?0.013 ? ?0.000 {numpy.core._dotblas.dot} > ? ? ? 46 ? ?0.013 ? ?0.000 ? ?0.014 ? ?0.000 > function_base.py:494(asarray_chkfinite) > ? ? ?110 ? ?0.008 ? ?0.000 ? ?0.010 ? ?0.000 arima.py:118(_transparams) > ? ? ? 46 ? ?0.008 ? ?0.000 ? ?0.023 ? ?0.000 decomp_svd.py:12(svd) > ? ? ? 23 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 {method 'var' of > 'numpy.ndarray' objects} > ? ? ? 26 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 tsatools.py:109(lagmat) > ? ? ? 90 ? ?0.004 ? ?0.000 ? ?0.042 ? ?0.000 arima.py:197(loglike_css) > ? ? ? 81 ? ?0.004 ? ?0.000 ? ?0.004 ? ?0.000 > {numpy.core.multiarray._fastCopyAndTranspose} > > I can live with this for now. > > Skipper > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Revisiting this topic from a few months ago. I was able to get Tokyo (see github.com/tokyo/tokyo and http://www.vetta.org/2009/09/tokyo-a-cython-blas-wrapper-for-fast-matrix-math/) to build against (ATLAS? or MKL?) in EPD 7.0 with some modifications to the setup.py, see my fork on github: https://github.com/wesm/tokyo/blob/master/setup.py Someone who knows a bit more about the linking against multiple versions of BLAS/ATLAS/MKL might be able to make it work more generally-- I basically just poked around numpy/core/setup.py The speedups for small matrices suggest it would probably be worthwhile to have in our toolbelt: $ python double_speed.py SPEED TEST BLAS 2 Double precision: Vector size = 4 Matrix size = 4x4 numpy.dot +: 312 kc/s dgemv: 2857 kc/s 9.1x dgemv3: 9091 kc/s 29.1x dgemv5: 9375 kc/s 30.0x dgemv6: 9375 kc/s 30.0x dgemv_: 11321 kc/s 36.2x numpy.outer: 118 kc/s dger: 2344 kc/s 19.8x dger3: 7895 kc/s 66.7x dger4: 8108 kc/s 68.5x dger_: 9449 kc/s 79.8x Double precision: Vector size = 15 Matrix size = 15x15 numpy.dot +: 296 kc/s dgemv: 2000 kc/s 6.8x dgemv3: 4444 kc/s 15.0x dgemv5: 5000 kc/s 16.9x dgemv6: 4615 kc/s 15.6x dgemv_: 5217 kc/s 17.6x numpy.outer: 89 kc/s dger: 1143 kc/s 12.9x dger3: 2330 kc/s 26.2x dger4: 2667 kc/s 30.0x dger_: 2824 kc/s 31.8x Double precision: Vector size = 30 Matrix size = 30x30 numpy.dot +: 261 kc/s dgemv: 1271 kc/s 4.9x dgemv3: 2676 kc/s 10.3x dgemv5: 2311 kc/s 8.9x dgemv6: 2676 kc/s 10.3x dgemv_: 2421 kc/s 9.3x numpy.outer: 64 kc/s dger: 782 kc/s 12.2x dger3: 1412 kc/s 22.1x dger4: 1182 kc/s 18.5x dger_: 1356 kc/s 21.2x SPEED TEST BLAS 3 Double precision: Vector size = 4 Matrix size = 4x4 numpy.dot: 845 kc/s dgemm: 2259 kc/s 2.7x dgemm3: 4808 kc/s 5.7x dgemm5: 4934 kc/s 5.8x dgemm7: 4808 kc/s 5.7x dgemm_: 5357 kc/s 6.3x Double precision: Vector size = 15 Matrix size = 15x15 numpy.dot: 290 kc/s dgemm: 476 kc/s 1.6x dgemm3: 580 kc/s 2.0x dgemm5: 606 kc/s 2.1x dgemm7: 580 kc/s 2.0x dgemm_: 606 kc/s 2.1x Double precision: Vector size = 30 Matrix size = 30x30 numpy.dot: 108 kc/s dgemm: 128 kc/s 1.2x dgemm3: 145 
kc/s 1.3x dgemm5: 139 kc/s 1.3x dgemm7: 145 kc/s 1.3x dgemm_: 145 kc/s 1.3x From mikehulluk at googlemail.com Tue Mar 15 14:33:39 2011 From: mikehulluk at googlemail.com (Michael Hull) Date: Tue, 15 Mar 2011 18:33:39 +0000 Subject: [SciPy-User] 1 dimensional interpolation of vectors Message-ID: Hi, Sorry for the confusing title! Firstly, thanks for all the great work on numpy and scipy, its very appreciated. What I have is an array of time recordings of various properties. If have recordings of prop1,prop2,prop3,prop4.... propN, and for each recording, I have the values at millisecond time intervals, stored in a 2 dimensional array. The properties are not linked, what I am trying to do is to find the value at say t=2.4ms, i.e. a non integer millisecond, by linearly interpolating between the two time points 2ms and 3ms. I can do this in one dimension using scipy.interpolate.interp1d for each property, but what I would like to do is get an entire row in one go,, because the number of properties is pretty large. I can write this myself, but I was wondering if there was already something built in? Many thanks Mike From pfeldman at verizon.net Tue Mar 15 14:45:18 2011 From: pfeldman at verizon.net (Dr. Phillip M. Feldman) Date: Tue, 15 Mar 2011 11:45:18 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] support for truncated normal distribution Message-ID: <31156263.post@talk.nabble.com> I've noticed that there is no truncated normal distribution in NumPy, at least according to the following source: http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html, I've written code to generate random deviates from a truncated normal distribution via acceptance-rejection, but this is inefficient when the acceptance probability is low. I assume that NumPy is generating standard normal deviates via the Ziggurat algorithm. That algorithm can be modified to produce random deviates from a truncated normal without the use of acceptance-rejection. I'd be very grateful if someone can implement this. -- View this message in context: http://old.nabble.com/support-for-truncated-normal-distribution-tp31156263p31156263.html Sent from the Scipy-User mailing list archive at Nabble.com. From hnry2k at hotmail.com Tue Mar 15 14:48:51 2011 From: hnry2k at hotmail.com (=?iso-8859-1?B?Sm9yZ2UgRS4gtFNhbmNoZXogU2FuY2hleg==?=) Date: Tue, 15 Mar 2011 12:48:51 -0600 Subject: [SciPy-User] gradient of meshed surfaces In-Reply-To: References: , , , , , , , , , , , , , , , , Message-ID: Dear friends, I have some "n" discrete curvilinear parallel isosurfaces S_n(x_i, y_i, z_i_n) whose gradient at their inner nodes I am interested to calculate, from the information of numpy.gradient I understand that it only works when dx, dy and dz are constants, but due to the curvilinear nature of these isosurfaces it seems to me that I cannot use it. So, I have been trying to design a method to calculate them as finite differences, but I am not convinced of the goodnesses of it and I would like to know if somebody knows of a python implementation with the best way to do this in order to not invent the octhagonal wheel (not "circularly" polished), or otherwise refer me to a proper reference to learn how to do it the best way possible with a reasonable precision. Thanks in advance, Jorge -------------- next part -------------- An HTML attachment was scrubbed... 
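One possible building block for the gradient question above, as a sketch only (the function name is mine, and it assumes the samples along each curvilinear direction can be treated as an arbitrarily spaced 1-D grid of floats): a second-order central difference that does not need constant spacing.

    import numpy as np

    def gradient_nonuniform(f, x):
        # First derivative of f sampled at arbitrarily spaced points x.
        # Central differences in the interior, one-sided at the two ends.
        f = np.asarray(f, dtype=float)
        x = np.asarray(x, dtype=float)
        df = np.empty_like(f)
        h1 = x[1:-1] - x[:-2]    # spacing to the left of each interior point
        h2 = x[2:] - x[1:-1]     # spacing to the right
        df[1:-1] = (h1**2 * f[2:] - h2**2 * f[:-2]
                    + (h2**2 - h1**2) * f[1:-1]) / (h1 * h2 * (h1 + h2))
        df[0] = (f[1] - f[0]) / (x[1] - x[0])
        df[-1] = (f[-1] - f[-2]) / (x[-1] - x[-2])
        return df

Applying this along each coordinate direction of the isosurface mesh gives one workable approximation; a local least-squares fit over each node's neighbours is another common alternative.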
URL: From robert.kern at gmail.com Tue Mar 15 14:58:04 2011 From: robert.kern at gmail.com (Robert Kern) Date: Tue, 15 Mar 2011 13:58:04 -0500 Subject: [SciPy-User] [SciPy-user] support for truncated normal distribution In-Reply-To: <31156263.post@talk.nabble.com> References: <31156263.post@talk.nabble.com> Message-ID: On Tue, Mar 15, 2011 at 13:45, Dr. Phillip M. Feldman wrote: > > I've noticed that there is no truncated normal distribution in NumPy, at > least according to the following source: > > http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html, > > I've written code to generate random deviates from a truncated normal > distribution via acceptance-rejection, but this is inefficient when the > acceptance probability is low. I assume that NumPy is generating standard > normal deviates via the Ziggurat algorithm. That algorithm can be modified > to produce random deviates from a truncated normal without the use of > acceptance-rejection. ?I'd be very grateful if someone can implement this. No, we use the Box-Mueller transform, which is not easily truncated. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From wesmckinn at gmail.com Tue Mar 15 15:03:23 2011 From: wesmckinn at gmail.com (Wes McKinney) Date: Tue, 15 Mar 2011 15:03:23 -0400 Subject: [SciPy-User] [SciPy-user] support for truncated normal distribution In-Reply-To: References: <31156263.post@talk.nabble.com> Message-ID: On Tue, Mar 15, 2011 at 2:58 PM, Robert Kern wrote: > On Tue, Mar 15, 2011 at 13:45, Dr. Phillip M. Feldman > wrote: >> >> I've noticed that there is no truncated normal distribution in NumPy, at >> least according to the following source: >> >> http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html, >> >> I've written code to generate random deviates from a truncated normal >> distribution via acceptance-rejection, but this is inefficient when the >> acceptance probability is low. I assume that NumPy is generating standard >> normal deviates via the Ziggurat algorithm. That algorithm can be modified >> to produce random deviates from a truncated normal without the use of >> acceptance-rejection. ?I'd be very grateful if someone can implement this. > > No, we use the Box-Mueller transform, which is not easily truncated. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless > enigma that is made terrible by our own mad attempt to interpret it as > though it had an underlying truth." > ? -- Umberto Eco > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > I have an implementation here (using the inverse CDF method): https://github.com/wesm/statlib/blob/master/statlib/distributions.py#L12 There is also scipy.stats.truncnorm (which I have not tested but assume works): Notes ----- Truncated Normal distribution. The standard form of this distribution is a standard normal truncated to the range [a,b] --- notice that a and b are defined over the domain of the standard normal. 
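A minimal sketch of drawing truncated-normal deviates with scipy.stats.truncnorm, using the clip-value conversion spelled out in the next sentence of the Notes (the mean, standard deviation and clip limits below are made-up illustration values):

    import scipy.stats as stats

    my_mean, my_std = 5.0, 2.0
    myclip_a, myclip_b = 0.0, 10.0      # truncation limits on the original scale
    a = (myclip_a - my_mean) / my_std   # limits expressed for the standard normal
    b = (myclip_b - my_mean) / my_std
    deviates = stats.truncnorm.rvs(a, b, loc=my_mean, scale=my_std, size=10000)

As noted further down in the thread, this goes through the generic inverse-cdf machinery, so it favours simplicity over raw speed.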
To convert clip values for a specific mean and standard deviation use a,b = (myclip_a-my_mean)/my_std, (myclip_b-my_mean)/my_std From warren.weckesser at enthought.com Tue Mar 15 15:23:47 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Tue, 15 Mar 2011 14:23:47 -0500 Subject: [SciPy-User] 1 dimensional interpolation of vectors In-Reply-To: References: Message-ID: On Tue, Mar 15, 2011 at 1:33 PM, Michael Hull wrote: > Hi, > Sorry for the confusing title! > Firstly, thanks for all the great work on numpy and scipy, its very > appreciated. > > What I have is an array of time recordings of various properties. If > have recordings of prop1,prop2,prop3,prop4.... propN, and for each > recording, I have the values at millisecond time intervals, stored in > a 2 dimensional array. > The properties are not linked, what I am trying to do is to find the > value at say t=2.4ms, i.e. a non integer millisecond, by linearly > interpolating between the two time points 2ms and 3ms. > > I can do this in one dimension using scipy.interpolate.interp1d for > each property, but what I would like to do is get an entire row in one > go,, because the number of properties is > pretty large. > > I can write this myself, but I was wondering if there was already > something built in? > > interp1d can take a 2D array for the y value. For example, In [16]: x = array([0.0, 1.0, 2.0, 3.0]) In [17]: y = array([[1.0, 0.0, 0.5, 10.5],[-4, 2, 2, 0]]) In [18]: func = interp1d(x, y) In [19]: func(0.5) Out[19]: array([ 0.5, -1. ]) In [20]: func(1.1) Out[20]: array([ 0.05, 2. ]) In [21]: func(2.75) Out[21]: array([ 8. , 0.5]) So if you have a 2D array of measurements--one row for each "prop"--you can use interp1d without a loop. If each property is a column, you can use the 'axis=0' keyword argument in interp1d, or transpose the array. Warren > Many thanks > > > Mike > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Solomon.Negusse at twdb.state.tx.us Tue Mar 15 15:46:45 2011 From: Solomon.Negusse at twdb.state.tx.us (Solomon Negusse) Date: Tue, 15 Mar 2011 14:46:45 -0500 Subject: [SciPy-User] timeseries plotting with two y axes Message-ID: <4D7F7BD5.5886.0024.1@twdb.state.tx.us> Hi, I'm trying to use timeseries.lib.plotlib to plot two timeseries with two y axes on a single plot. I tried to use the twinx function the same way that I had done so with matplotlib's plot_date. This is part of my code: fig = tpl.tsfigure() fsp = fig.add_tsplot(111) raw_salinity = fsp.tsplot(datasonde_series['Salinity'],'r.') fsp.set_ylim(0,100) fsp.set_dlim(sdate,edate) fsptemp = fsp.twinx() water_level = fsptemp.tsplot(datasonde_series['WaterDepth'],'b.') Doing this I get an error : 'Axes' object has no attribute tsplot. I'm kind of lost in how to fix this problem and I'll appreciate any tips on how I can implement twinx correctly or if there is other way to do it within timeseries.lib.plotlib. thanks, -solomon -------------- next part -------------- An HTML attachment was scrubbed... 
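A plain-matplotlib fallback for the two-axis plot above, as a sketch (it assumes `dates`, `salinity` and `water_depth` are the already-extracted arrays, and it sidesteps tsplot entirely rather than fixing it):

    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax1 = fig.add_subplot(111)
    ax1.plot_date(dates, salinity, 'r.')
    ax1.set_ylim(0, 100)
    ax1.set_ylabel('salinity')

    ax2 = ax1.twinx()                # second y axis sharing the same date axis
    ax2.plot_date(dates, water_depth, 'b.')
    ax2.set_ylabel('water depth')

    fig.autofmt_xdate()              # rotated, readable date labels
    plt.show()

The trade-off is losing the automatic date-axis scaling that the timeseries plotting library provides.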
URL: From josef.pktd at gmail.com Tue Mar 15 16:30:33 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Tue, 15 Mar 2011 16:30:33 -0400 Subject: [SciPy-User] [SciPy-user] support for truncated normal distribution In-Reply-To: References: <31156263.post@talk.nabble.com> Message-ID: On Tue, Mar 15, 2011 at 3:03 PM, Wes McKinney wrote: > On Tue, Mar 15, 2011 at 2:58 PM, Robert Kern wrote: >> On Tue, Mar 15, 2011 at 13:45, Dr. Phillip M. Feldman >> wrote: >>> >>> I've noticed that there is no truncated normal distribution in NumPy, at >>> least according to the following source: >>> >>> http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.mtrand.RandomState.html, >>> >>> I've written code to generate random deviates from a truncated normal >>> distribution via acceptance-rejection, but this is inefficient when the >>> acceptance probability is low. I assume that NumPy is generating standard >>> normal deviates via the Ziggurat algorithm. That algorithm can be modified >>> to produce random deviates from a truncated normal without the use of >>> acceptance-rejection. ?I'd be very grateful if someone can implement this. >> >> No, we use the Box-Mueller transform, which is not easily truncated. >> >> -- >> Robert Kern >> >> "I have come to believe that the whole world is an enigma, a harmless >> enigma that is made terrible by our own mad attempt to interpret it as >> though it had an underlying truth." >> ? -- Umberto Eco >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > I have an implementation here (using the inverse CDF method): > > https://github.com/wesm/statlib/blob/master/statlib/distributions.py#L12 > > There is also scipy.stats.truncnorm (which I have not tested but assume works): It`s using the generic rvs which is also inverse cdf method. Josef > > ? ?Notes > ? ?----- > ? ?Truncated Normal distribution. > > ? ? ?The standard form of this distribution is a standard normal > truncated to the > ? ? ?range [a,b] --- notice that a and b are defined over the domain > ? ? ?of the standard normal. ?To convert clip values for a specific mean and > ? ? ?standard deviation use a,b = (myclip_a-my_mean)/my_std, > (myclip_b-my_mean)/my_std > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From story645 at yahoo.com Tue Mar 15 23:00:12 2011 From: story645 at yahoo.com (hannah aiznman) Date: Wed, 16 Mar 2011 03:00:12 +0000 (UTC) Subject: [SciPy-User] timeseries plotting with two y axes References: <4D7F7BD5.5886.0024.1@twdb.state.tx.us> Message-ID: >Solomon Negusse twdb.state.tx.us> writes: > fig = tpl.tsfigure()??? > fsp = fig.add_tsplot(111)??? > raw_salinity = > fsp.tsplot(datasonde_series['Salinity'],'r.')??? fsp.set_ylim(0,100)??? > fsp.set_dlim(sdate,edate)??? > fsptemp = fsp.twinx() > water_level = fsptemp.tsplot(datasonde_series['WaterDepth'],'b.')?? > ? > Doing this I get an error :? 'Axes' object has no attribute tsplot.? What version of timeseries and matplotlib are you using? There's an incompatibility between two versions that requires patching timeseries, and I remember getting the bug specifically when I tried to add subplots. 
See here for more instructions and a possible fix: http://projects.scipy.org/scikits/ticket/113 From tmp50 at ukr.net Wed Mar 16 03:29:14 2011 From: tmp50 at ukr.net (Dmitrey) Date: Wed, 16 Mar 2011 09:29:14 +0200 Subject: [SciPy-User] OpenOpt Suite release 0.33 Message-ID: Hi all, I'm glad to inform you about new release 0.33 of our completely free (license: BSD) cross-platform software: OpenOpt: > * cplex has been connected > * New global solver interalg with guarantied precision, competitor to LGO, BARON, MATLAB's intsolver and Direct (also can work in inexact mode) > * New solver amsg2p for unconstrained medium-scaled NLP and NSP > FuncDesigner: > * Essential speedup for automatic differentiation when vector-variables are involved, for both dense and sparse cases > * Solving MINLP became available > * Add uncertainty analysis > * Add interval analysis > * Now you can solve systems of equations with automatic determination is the system linear or nonlinear (subjected to given set of free or fixed variables) > * FD Funcs min and max can work on lists of oofuns > * Bugfix for sparse SLE (system of linear equations), that slowed down computation time and demanded more memory > * New oofuns angle, cross > * Using OpenOpt result(oovars) is available, also, start points with oovars() now can be assigned easier > SpaceFuncs (2D, 3D, N-dimensional geometric package with abilities for parametrized calculations, solving systems of geometric equations and numerical optimization with automatic differentiation): > * Some bugfixes > DerApproximator: > * Adjusted with some changes in FuncDesigner > For more details visit our site http://openopt.org. > Regards, Dmitrey. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikehulluk at googlemail.com Wed Mar 16 05:26:27 2011 From: mikehulluk at googlemail.com (Michael Hull) Date: Wed, 16 Mar 2011 09:26:27 +0000 Subject: [SciPy-User] SciPy-User Digest, Vol 91, Issue 21 In-Reply-To: References: Message-ID: > Message: 6 > Date: Tue, 15 Mar 2011 14:23:47 -0500 > From: Warren Weckesser > Subject: Re: [SciPy-User] 1 dimensional interpolation of vectors > To: SciPy Users List > Message-ID: > ? ? ? ? > Content-Type: text/plain; charset="utf-8" > > On Tue, Mar 15, 2011 at 1:33 PM, Michael Hull wrote: > >> Hi, >> Sorry for the confusing title! >> Firstly, thanks for all the great work on numpy and scipy, its very >> appreciated. >> >> What I have is an array of time recordings of various properties. If >> have recordings of prop1,prop2,prop3,prop4.... propN, and for each >> recording, I have the values at millisecond time intervals, stored in >> a 2 dimensional array. >> The properties are not linked, what I am trying to do is to find the >> value at say t=2.4ms, i.e. ?a non integer millisecond, by linearly >> interpolating between the two time points 2ms and 3ms. >> >> I can do this in one dimension using scipy.interpolate.interp1d for >> each property, but what I would like to do is get an entire row in one >> go,, because the number of properties is >> pretty large. >> >> I can write this myself, but I was wondering if there was already >> something built in? >> >> > > interp1d can take a 2D array for the y value. ?For example, > > > In [16]: x = array([0.0, 1.0, 2.0, 3.0]) > > In [17]: y = array([[1.0, 0.0, 0.5, 10.5],[-4, 2, 2, 0]]) > > In [18]: func = interp1d(x, y) > > In [19]: func(0.5) > Out[19]: array([ 0.5, -1. ]) > > In [20]: func(1.1) > Out[20]: array([ 0.05, ?2. ?]) > > In [21]: func(2.75) > Out[21]: array([ 8. 
, ?0.5]) > > > So if you have a 2D array of measurements--one row for each "prop"--you can > use interp1d without a loop. ?If each property is a column, you can use the > 'axis=0' keyword argument in interp1d, or transpose the array. Ah, thats great, I'll give this a try. Thanks Mike From Solomon.Negusse at twdb.state.tx.us Wed Mar 16 10:09:07 2011 From: Solomon.Negusse at twdb.state.tx.us (Solomon Negusse) Date: Wed, 16 Mar 2011 09:09:07 -0500 Subject: [SciPy-User] timeseries plotting with two y axes In-Reply-To: References: <4D7F7BD5.5886.0024.1@twdb.state.tx.us> Message-ID: <4D807E33.5886.0024.1@twdb.state.tx.us> Hi Hannah, I'm using matplotlib1.0.1 and scikits.timeseries0.91.3 so I'm not sure if mine is the same incompatibility problem.But I worked around the problem yesterday by using matplotlib's plot_dates instead which works for now but doesn't give you the neat axis autoscaling that timeseries' plotlib does. In any case I'll test your patched version and see if it works. Thanks a lot, -solomon >>> hannah aiznman 3/15/2011 10:00 PM >>> >Solomon Negusse twdb.state.tx.us> writes: > fig = tpl.tsfigure() > fsp = fig.add_tsplot(111) > raw_salinity = > fsp.tsplot(datasonde_series['Salinity'],'r.') fsp.set_ylim(0,100) > fsp.set_dlim(sdate,edate) > fsptemp = fsp.twinx() > water_level = fsptemp.tsplot(datasonde_series['WaterDepth'],'b.') > > Doing this I get an error : 'Axes' object has no attribute tsplot. What version of timeseries and matplotlib are you using? There's an incompatibility between two versions that requires patching timeseries, and I remember getting the bug specifically when I tried to add subplots. See here for more instructions and a possible fix: http://projects.scipy.org/scikits/ticket/113 _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From D.Richards at mmu.ac.uk Wed Mar 16 10:57:28 2011 From: D.Richards at mmu.ac.uk (Dan Richards) Date: Wed, 16 Mar 2011 14:57:28 +0000 Subject: [SciPy-User] Scipy.spatial.Delaunay Message-ID: <000601cbe3ea$7885d400$69917c00$@Richards@mmu.ac.uk> Hi, I am working on a project that requires Delaunay triangulation of a set of points in 3D space. I am trying to use 'Scipy.spatial.Delaunay' to convert a set of [x,y,z] points into vertices that define the triangulated simplexes/faces, however I am struggling to work out which attributes give me this data? #So far I simply have a set of points: Points = [[0,0,0],[10,91,23],[53,4,66],[49,392,49],[39,20,0]] #I am then using: x = scipy.spatial.Delaunay(Points) I am sure for here it is very simple, however I have been struggling to understand which attributes I need access to get the vertices of connecting lines? * x.vertices? * x.neighbors? * x.vertex_to_simplex? * x.convex_hull...? If anyone can help or point me in the right direction that would be very much appreciated. Thanks, Dan "Before acting on this email or opening any attachments you should read the Manchester Metropolitan University email disclaimer available on its website http://www.mmu.ac.uk/emaildisclaimer " -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From emanuele at relativita.com  Wed Mar 16 11:41:07 2011
From: emanuele at relativita.com (Emanuele Olivetti)
Date: Wed, 16 Mar 2011 16:41:07 +0100
Subject: [SciPy-User] efficient way to store and use a 4D redundant matrix
Message-ID: <4D80DA13.2060109@relativita.com>

Hi Everybody,

I have a 4D matrix "A" where entries differing only by a permutation of
indices are equal, i.e. A[1,2,3,4] = A[3,4,2,1]. Since each set of 4 indices
then has 24 permutations, my goal is to avoid storing 24 times the
necessary space. I am looking at scipy.sparse but I have no clear
understanding whether it could handle my case.

Any idea?

For the sake of clarity, here is a simple code computing a matrix A like
the one above. I usually have A of shape (100,100,100,100), which requires
~800Mb when the dtype is double. Of course the non-redundant part is
just ~4Mb so you may understand my interest in this issue.

----
import numpy as np
from itertools import permutations
try:
    from itertools import combinations_with_replacement
except ImportError:  # if Python < v2.7 that function is not available...
    from itertools import product
    def combinations_with_replacement(iterable, r):
        pool = tuple(iterable)
        n = len(pool)
        for indices in product(range(n), repeat=r):
            if sorted(indices) == list(indices):
                yield tuple(pool[i] for i in indices)

rows = 10
columns = 20
x = np.random.rand(rows, columns) + 1.0
A = np.zeros((rows, rows, rows, rows))

indices_rows = range(rows)
for i, j, k, l in combinations_with_replacement(indices_rows, 4):
    tmp = (x[i,:] * x[j,:] * x[k,:] * x[l,:]).sum()
    for a, b, c, d in permutations([i, j, k, l]):
        A[a, b, c, d] = tmp
----

In case you wonder which kind of operations I need to do on A,
they are usual manipulations (slicing, reshaping, etc.), and common
linear algebra.

Best,

Emanuele

From pav at iki.fi  Wed Mar 16 13:31:20 2011
From: pav at iki.fi (Pauli Virtanen)
Date: Wed, 16 Mar 2011 17:31:20 +0000 (UTC)
Subject: [SciPy-User] Scipy.spatial.Delaunay
References: <39079.3753009804$1300287481@news.gmane.org>
Message-ID: 

Wed, 16 Mar 2011 14:57:28 +0000, Dan Richards wrote:
[clip]
> I am sure for here it is very simple, however I have been struggling to
> understand which attributes I need access to get the vertices of
> connecting lines?
>
> * x.vertices?
> * x.neighbors?
> * x.vertex_to_simplex?
> * x.convex_hull...?
>
> If anyone can help or point me in the right direction that would be very
> much appreciated.

The edges are recorded in the `vertices` array, which contains indices
of the points making up each triangle. The overall structure is recorded
in `neighbors`.

This is maybe easiest to explain in code. The set of edges is:

        edges = []
        for i in xrange(x.nsimplex):
            edges.append((x.vertices[i,0], x.vertices[i,1]))
            edges.append((x.vertices[i,1], x.vertices[i,2]))
            edges.append((x.vertices[i,2], x.vertices[i,0]))

This however counts each edge multiple times. To get around that:

        edges = []
        for i in xrange(x.nsimplex):
            if i > x.neighbors[i,2]:
                edges.append((x.vertices[i,0], x.vertices[i,1]))
            if i > x.neighbors[i,0]:
                edges.append((x.vertices[i,1], x.vertices[i,2]))
            if i > x.neighbors[i,1]:
                edges.append((x.vertices[i,2], x.vertices[i,0]))

This counts each edge only once. Note how the `neighbors` array relates
to `vertices`: its j-th entry gives the neighboring triangle on the
other side of the edge formed that remains after the j-th vertex is
removed from the triangle.
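For three-dimensional input the same picture applies, except that each simplex is a tetrahedron, so `vertices` and `neighbors` have four columns rather than three (the follow-ups below spell this out). A quick sketch reusing the points from the original question:

    import numpy as np
    from scipy.spatial import Delaunay

    Points = np.array([[0,0,0], [10,91,23], [53,4,66], [49,392,49], [39,20,0]],
                      dtype=float)
    x = Delaunay(Points)
    print x.vertices.shape    # (nsimplex, 4): four point indices per tetrahedron
    print x.neighbors.shape   # (nsimplex, 4): neighbor opposite each of the four vertices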
-- Pauli Virtanen From D.Richards at mmu.ac.uk Wed Mar 16 15:22:24 2011 From: D.Richards at mmu.ac.uk (Dan Richards) Date: Wed, 16 Mar 2011 19:22:24 +0000 Subject: [SciPy-User] Scipy.spatial.Delaunay In-Reply-To: References: <39079.3753009804$1300287481@news.gmane.org> Message-ID: <000e01cbe40f$7b12e7b0$7138b710$@Richards@mmu.ac.uk> Hi Pauli, Thanks for your quick reply, I really appreciate the help. I am still a little confused as to how the points, vertices and neighbors relate to one another. Perhaps I can explain how I understand them and you can correct me? When I type x.vertices I get an array that has values for each index: >>>x.vertices array([[6, 4, 5, 9], [8, 6, 4, 5], [8, 1, 4, 7], [8, 1, 6, 4], [3, 6, 4, 9]...]) Do these numbers [w,x,y,z] represent a triangulation whereby the connections are as follows?: w-x x-y y-z w-y w-z Your code did seem to work well, although I added an extra line which I assume should have been there? edges = [] for i in xrange(x.nsimplex): edges.append((x.vertices[i,0], x.vertices[i,1])) edges.append((x.vertices[i,1], x.vertices[i,2])) edges.append((x.vertices[i,2], x.vertices[i,3])) # New line here edges.append((x.vertices[i,3], x.vertices[i,0])) The confusion on my part is that I expected the vertices to hold three indices relating to the points of a triangle so I am confused as to how to interpret the four values? Equally with the neighbors, could you tell me what the four indices define? >>>x.neighbors array([[-1, -1, 1, 6], [ 0, 2, 54, 9], [-1, 1, 19, 4], [12, 27, 31, 10], [ 2, 5, 13, 44]...]) Many Thanks, Dan The edges are recorded in the `vertices` array, which contains indices of the points making up each triangle. The overall structure is recorded `neighbors`. This is maybe easiest to explain in code. The set of edges is: edges = [] for i in xrange(x.nsimplex): edges.append((x.vertices[i,0], x.vertices[i,1])) edges.append((x.vertices[i,1], x.vertices[i,2])) edges.append((x.vertices[i,2], x.vertices[i,0])) This however counts each edge multiple times. To get around, that: edges = [] for i in xrange(x.nsimplex): if i > x.neighbors[i,2]: edges.append((x.vertices[i,0], x.vertices[i,1])) if i > x.neighbors[i,0]: edges.append((x.vertices[i,1], x.vertices[i,2])) if i > x.neighbors[i,1]: edges.append((x.vertices[i,2], x.vertices[i,0])) This counts each edge only once. Note how the `neighbors` array relates to `vertices`: its j-th entry gives the neighboring triangle on the other side of the edge formed that remains after the j-th vertex is removed from the triangle. -- Pauli Virtanen _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user "Before acting on this email or opening any attachments you should read the Manchester Metropolitan University email disclaimer available on its website http://www.mmu.ac.uk/emaildisclaimer " From dplepage at gmail.com Wed Mar 16 15:31:44 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Wed, 16 Mar 2011 15:31:44 -0400 Subject: [SciPy-User] efficient way to store and use a 4D redundant matrix In-Reply-To: <4D80DA13.2060109@relativita.com> References: <4D80DA13.2060109@relativita.com> Message-ID: I don't know of any scipy class that will do this for you; when last I checked, scipy didn't even include a good class for upper/lower-triangular matrix storage (which would cut your space in half). 
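For the fully symmetric four-index case the same packed-storage idea can be pushed further by keeping one entry per sorted index tuple in a flat vector. A sketch, assuming Python 2.7's itertools.combinations_with_replacement (the names, the integer-key encoding and the searchsorted lookup are illustrative assumptions, not an existing scipy facility):

    import numpy as np
    from itertools import combinations_with_replacement

    n = 100   # length of each axis of A
    combos = np.array(list(combinations_with_replacement(range(n), 4)))
    # Encode each sorted tuple (i <= j <= k <= l) as one integer; the keys come
    # out already sorted because the combinations are generated lexicographically.
    keys = ((combos[:, 0] * n + combos[:, 1]) * n + combos[:, 2]) * n + combos[:, 3]
    packed = np.zeros(len(keys))   # ~4.4e6 doubles instead of the full 1e8

    def lookup(i, j, k, l):
        # Map any permutation of the indices to its canonical, sorted form.
        i, j, k, l = sorted((i, j, k, l))
        return packed[np.searchsorted(keys, ((i * n + j) * n + k) * n + l)]

Slicing and linear algebra then mean expanding the relevant part back into a dense block, so this only pays off when most operations touch a modest portion of A at a time.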
You could save some space with an index matrix - a 100x100x100x100 array of 32-bit ints indexing another array of 64-bit floats; that cuts your storage by half, but you'd have to implement linear algebra operations yourself. The sparse matrix formats will only help you if you can rewrite A in terms of matrices that are mostly 0. Do you need the results of slicing, reshaping, etc. to also be similarly compressed? If so, I can't see any way to implement this without an index array, because once you reshape or slice A you won't know which cells correspond to which indices in the original A. If you're only taking small slices of this and then applying linear algebra operations to those, you might be better off writing a class that looks up the relevant values on the fly; you could overload __getitem__ so that e.g. A[:,1,:,3] would generate the correct float64 array on the fly and return it. However, if the nonredundant part takes only ~4MB, then maybe I don't understand your layout - for a 100x100x100x100 and 64-bit floats, I think the nonredundant part should take ((100 choose 4) + ((100 choose 3) * 3) + ((100 choose 2) * 3) + (100 choose 1)) * 8 bytes = about 34MB. Was that a math error, or am I misunderstanding the question? Thanks, Dan Lepage On Wed, Mar 16, 2011 at 11:41 AM, Emanuele Olivetti wrote: > Hi Everybody, > > I have a 4D matrix "A" where entries differing only by a permutation of > indices are equal, i.e. A[1,2,3,4] = A[3,4,2,1]. Since each set of > of 4 indices I have then 24 permutations, my goal would be to avoid > storing 24 times the necessary space. I am looking at scipy.sparse but > I have no clear understanding whether it could handle my case. > > Any idea? > > For sake of clarity here is a simple code computing a matrix A like > the one above. I usually have A of shape (100,100,100,100) which requires > ~800Mb when the dtype is double. Of course the non-redundant part is > just ~4Mb so you may understand my interest in this issue. > > ---- > import numpy as np > from itertools import permutations > try: > ? ? form itertools import combinations_with_replacement > except ImportError: # if Python < v2.7 that function is not available... > ? ? form itertools import product > ? ? def combinations_with_replacement(iterable, r): > ? ? ? ? pool = tuple(iterable) > ? ? ? ? n = len(pool) > ? ? ? ? for indices in product(range(n), repeat=r): > ? ? ? ? ? ? if sorted(indices) == list(indices): > ? ? ? ? ? ? ? ? yield tuple(pool[i] for i in indices) > > rows = 10 > columns = 20 > x = np.random.rand(rows,columns)+1.0 > A = np.zeros((rows, rows, rows, rows)) > > indices_rows = range(rows) > for i,j,k,l in combinations_with_replacement(indices_rows, 4): > ? ? tmp = (x[i,:]*x[j,:]*x[k,:]*x[l,:]).sum() > ? ? for a,b,c,d in permutations([i,j,k,l]): > ? ? ? ? A[a,b,c,d] = tmp > ---- > > In case you wonder which kind of operations do I need to do on A, > they are usual manypulations (slicing, reshaping, etc.), and common > linear algebra. 
> > Best, > > Emanuele > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From robert.kern at gmail.com Wed Mar 16 15:40:22 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 16 Mar 2011 14:40:22 -0500 Subject: [SciPy-User] Scipy.spatial.Delaunay In-Reply-To: <1907631167043638734@unknownmsgid> References: <39079.3753009804$1300287481@news.gmane.org> <1907631167043638734@unknownmsgid> Message-ID: On Wed, Mar 16, 2011 at 14:22, Dan Richards wrote: > Hi Pauli, > > Thanks for your quick reply, I really appreciate the help. > > I am still a little confused as to how the points, vertices and neighbors > relate to one another. Perhaps I can explain how I understand them and you > can correct me? > > When I type x.vertices I get an array that has values for each index: >>>>x.vertices > array([[6, 4, 5, 9], > ? ? ? [8, 6, 4, 5], > ? ? ? [8, 1, 4, 7], > ? ? ? [8, 1, 6, 4], > ? ? ? [3, 6, 4, 9]...]) > > Do these numbers [w,x,y,z] represent a triangulation whereby the connections > are as follows?: > > w-x > x-y > y-z > w-y > w-z And x-z. It's a tetrahedralization, technically. But you probably don't want to deal with edges. Rather, you usually want to deal with faces. w-x-y x-z-y w-z-x w-y-z > Your code did seem to work well, although I added an extra line which I > assume should have been there? > > edges = [] > for i in xrange(x.nsimplex): > ? ? ? ?edges.append((x.vertices[i,0], x.vertices[i,1])) > ? ? ? ?edges.append((x.vertices[i,1], x.vertices[i,2])) > ? ? ? ?edges.append((x.vertices[i,2], x.vertices[i,3])) # New line here > ? ? ? ?edges.append((x.vertices[i,3], x.vertices[i,0])) > > The confusion on my part is that I expected the vertices to hold three > indices relating to the points of a triangle so I am confused as to how to > interpret the four values? He was showing you the 2D case for triangles. For the 3D case of tetrahedra, you probably want faces rather than edges. faces = [] v = x.vertices for i in xrange(x.nsimplex): faces.extend([ (v[i,0], v[i,1], v[i,2]), (v[i,1], v[i,3], v[i,2]), (v[i,0], v[i,3], v[i,1]), (v[i,0], v[i,2], v[i,3]), ]) > Equally with the neighbors, could you tell me what the four indices define? >>>>x.neighbors > array([[-1, -1, ?1, ?6], > ? ? ? [ 0, ?2, 54, ?9], > ? ? ? [-1, ?1, 19, ?4], > ? ? ? [12, 27, 31, 10], > ? ? ? [ 2, ?5, 13, 44]...]) Each is the index into x.vertices of the simplex that is adjacent to the face opposite the given point. This index is -1 when that face is on the outer boundary. The first simplex has these neighbors: [-1, -1, 1, 6] This means that the face opposite point w (x-y-z) is on the outer boundary. The face opposite point y (w-x-z) adjoins the simplex given by x.vertices[1], and so on. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From dplepage at gmail.com Wed Mar 16 15:40:57 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Wed, 16 Mar 2011 15:40:57 -0400 Subject: [SciPy-User] Scipy.spatial.Delaunay In-Reply-To: <1907631167043638734@unknownmsgid> References: <39079.3753009804$1300287481@news.gmane.org> <1907631167043638734@unknownmsgid> Message-ID: Your points are 3 dimensional, and so Delaunay provides a 3D triangulation where each element is a tetrahedron. Each tetrahedron has four vertices and up to four neighboring tetrahedra. 
Pauli's example has fewer connections than you expect because it assumes a 2D triangulation. In the 3D case, you should have six lines of code connect each of the four vertices to each other: edges = [] for i in xrange(x.nsimplex): edges.append((x.vertices[i,0], x.vertices[i,1])) edges.append((x.vertices[i,0], x.vertices[i,2])) edges.append((x.vertices[i,0], x.vertices[i,3])) edges.append((x.vertices[i,1], x.vertices[i,2])) edges.append((x.vertices[i,1], x.vertices[i,3])) edges.append((x.vertices[i,2], x.vertices[i,3])) -- Dan Lepage On Wed, Mar 16, 2011 at 3:22 PM, Dan Richards wrote: > Hi Pauli, > > Thanks for your quick reply, I really appreciate the help. > > I am still a little confused as to how the points, vertices and neighbors > relate to one another. Perhaps I can explain how I understand them and you > can correct me? > > When I type x.vertices I get an array that has values for each index: >>>>x.vertices > array([[6, 4, 5, 9], > ? ? ? [8, 6, 4, 5], > ? ? ? [8, 1, 4, 7], > ? ? ? [8, 1, 6, 4], > ? ? ? [3, 6, 4, 9]...]) > > Do these numbers [w,x,y,z] represent a triangulation whereby the connections > are as follows?: > > w-x > x-y > y-z > w-y > w-z > > Your code did seem to work well, although I added an extra line which I > assume should have been there? > > edges = [] > for i in xrange(x.nsimplex): > ? ? ? ?edges.append((x.vertices[i,0], x.vertices[i,1])) > ? ? ? ?edges.append((x.vertices[i,1], x.vertices[i,2])) > ? ? ? ?edges.append((x.vertices[i,2], x.vertices[i,3])) # New line here > ? ? ? ?edges.append((x.vertices[i,3], x.vertices[i,0])) > > The confusion on my part is that I expected the vertices to hold three > indices relating to the points of a triangle so I am confused as to how to > interpret the four values? > > Equally with the neighbors, could you tell me what the four indices define? >>>>x.neighbors > array([[-1, -1, ?1, ?6], > ? ? ? [ 0, ?2, 54, ?9], > ? ? ? [-1, ?1, 19, ?4], > ? ? ? [12, 27, 31, 10], > ? ? ? [ 2, ?5, 13, 44]...]) > > Many Thanks, > Dan > > > > > > The edges are recorded in the `vertices` array, which contains indices of > the points making up each triangle. The overall structure is recorded > `neighbors`. > > This is maybe easiest to explain in code. The set of edges is: > > ? ? ? ?edges = [] > ? ? ? ?for i in xrange(x.nsimplex): > ? ? ? ? ? ?edges.append((x.vertices[i,0], x.vertices[i,1])) > ? ? ? ? ? ?edges.append((x.vertices[i,1], x.vertices[i,2])) > ? ? ? ? ? ?edges.append((x.vertices[i,2], x.vertices[i,0])) > > This however counts each edge multiple times. To get around, that: > > ? ? ? ?edges = [] > ? ? ? ?for i in xrange(x.nsimplex): > ? ? ? ? ? ?if i > x.neighbors[i,2]: > ? ? ? ? ? ? ? ?edges.append((x.vertices[i,0], x.vertices[i,1])) > ? ? ? ? ? ?if i > x.neighbors[i,0]: > ? ? ? ? ? ? ? ?edges.append((x.vertices[i,1], x.vertices[i,2])) > ? ? ? ? ? ?if i > x.neighbors[i,1]: > ? ? ? ? ? ? ? ?edges.append((x.vertices[i,2], x.vertices[i,0])) > > This counts each edge only once. Note how the `neighbors` array relates > to `vertices`: its j-th entry gives the neighboring triangle on the > other side of the edge formed that remains after the j-th vertex is > removed from the triangle. 
> > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > "Before acting on this email or opening any attachments you should read the Manchester Metropolitan University email disclaimer available on its website http://www.mmu.ac.uk/emaildisclaimer " > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From johnl at cs.wisc.edu Wed Mar 16 15:53:00 2011 From: johnl at cs.wisc.edu (J. David Lee) Date: Wed, 16 Mar 2011 14:53:00 -0500 Subject: [SciPy-User] Large banded matrix least squares solution Message-ID: <4D81151C.6080005@cs.wisc.edu> Hello. I'm trying to find a least squares solution to a system Ax=b, where A is a lower diagonal, banded matrix. The entries of A on a given diagonal are all identical, with about 300 unique values, and A can be quite large, on the order of 1e6 rows and columns. scipy.sparse.linalg.lsqr works on smaller examples, up to a few thousand rows and columns, but not much larger. It is also very time consuming to construct A, though I'm sure there must be a fast way to do that. Given the amount of symmetry in the problem, I suspect there is a fast way to calculate the result, or perhaps another way to solve the problem entirely. Thank you for your help, David Lee From charlesr.harris at gmail.com Wed Mar 16 16:02:57 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 16 Mar 2011 14:02:57 -0600 Subject: [SciPy-User] Large banded matrix least squares solution In-Reply-To: <4D81151C.6080005@cs.wisc.edu> References: <4D81151C.6080005@cs.wisc.edu> Message-ID: On Wed, Mar 16, 2011 at 1:53 PM, J. David Lee wrote: > Hello. > > I'm trying to find a least squares solution to a system Ax=b, where A is > a lower diagonal, banded matrix. The entries of A on a given diagonal > are all identical, with about 300 unique values, and A can be quite > large, on the order of 1e6 rows and columns. > > So this is sort of a convolution? Do you need exact, or will somewhat approximate do? I think you can probably do something useful with an fft. scipy.sparse.linalg.lsqr works on smaller examples, up to a few thousand > rows and columns, but not much larger. It is also very time consuming to > construct A, though I'm sure there must be a fast way to do that. > > Given the amount of symmetry in the problem, I suspect there is a fast > way to calculate the result, or perhaps another way to solve the problem > entirely. > > Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnl at cs.wisc.edu Wed Mar 16 17:06:11 2011 From: johnl at cs.wisc.edu (J. David Lee) Date: Wed, 16 Mar 2011 16:06:11 -0500 Subject: [SciPy-User] Large banded matrix least squares solution In-Reply-To: References: <4D81151C.6080005@cs.wisc.edu> Message-ID: <4D812643.5090905@cs.wisc.edu> On 03/16/2011 03:02 PM, Charles R Harris wrote: > > > On Wed, Mar 16, 2011 at 1:53 PM, J. David Lee > wrote: > > Hello. > > I'm trying to find a least squares solution to a system Ax=b, > where A is > a lower diagonal, banded matrix. The entries of A on a given diagonal > are all identical, with about 300 unique values, and A can be quite > large, on the order of 1e6 rows and columns. > > > So this is sort of a convolution? Do you need exact, or will somewhat > approximate do? I think you can probably do something useful with an fft. 
What I have is data from a detector that is passed through a shaping amplifier that turns voltage steps into pulses. I've measured the characteristic pulse shape, but now I'm interested to see if I can move backwards from the shaped data to the detector data. The idea is that we assume that there is a pulse at every time point and find the amplitude at each point in time to match our raw data. Here is an image of the detector's data (green), and the shaped data (blue): http://mywebspace.wisc.edu/jdlee1/web/detector_and_shaped_data.png David > scipy.sparse.linalg.lsqr works on smaller examples, up to a few > thousand > rows and columns, but not much larger. It is also very time > consuming to > construct A, though I'm sure there must be a fast way to do that. > > Given the amount of symmetry in the problem, I suspect there is a fast > way to calculate the result, or perhaps another way to solve the > problem > entirely. > > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Wed Mar 16 17:46:08 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 16 Mar 2011 14:46:08 -0700 (PDT) Subject: [SciPy-User] Large banded matrix least squares solution In-Reply-To: <4D812643.5090905@cs.wisc.edu> References: <4D81151C.6080005@cs.wisc.edu> <4D812643.5090905@cs.wisc.edu> Message-ID: <326211.61952.qm@web113412.mail.gq1.yahoo.com> definitely sounds like a deconvolution problem, you could start by trying something like Weiner filtering (http://en.wikipedia.org/wiki/Wiener_deconvolution) and go from there. If you need to go further, the inverse problems notes at http://home.comcast.net/~szemengtan/ are excellent (you probably want to look at Chapter 3, Regularization Methods for Linear Inverse Problems, in particular 3.7 which talks about solving large systems). I've got a python implementation of the Matlab code for Tikhonov regularised deconvolution given there. cheers, David ________________________________ From: J. David Lee To: SciPy Users List Sent: Thu, 17 March, 2011 10:06:11 AM Subject: Re: [SciPy-User] Large banded matrix least squares solution On 03/16/2011 03:02 PM, Charles R Harris wrote: > > >On Wed, Mar 16, 2011 at 1:53 PM, J. David Lee wrote: > >Hello. >> >>I'm trying to find a least squares solution to a system Ax=b, where A >>is >>a lower diagonal, banded matrix. The entries of A on a given diagonal >>are all identical, with about 300 unique values, and A can be quite >>large, on the order of 1e6 rows and columns. >> >> So this is sort of a convolution? Do you need exact, or will somewhat approximate do? I think you can probably do something useful with an fft. What I have is data from a detector that is passed through a shaping amplifier that turns voltage steps into pulses. I've measured the characteristic pulse shape, but now I'm interested to see if I can move backwards from the shaped data to the detector data. The idea is that we assume that there is a pulse at every time point and find the amplitude at each point in time to match our raw data. Here is an image of the detector's data (green), and the shaped data (blue): http://mywebspace.wisc.edu/jdlee1/web/detector_and_shaped_data.png David scipy.sparse.linalg.lsqr works on smaller examples, up to a few thousand >>rows and columns, but not much larger. 
It is also very time consuming >>to >>construct A, though I'm sure there must be a fast way to do that. >> >>Given the amount of symmetry in the problem, I suspect there is a >fast >>way to calculate the result, or perhaps another way to solve the >>problem >>entirely. >> >> Chuck > _______________________________________________ SciPy-User mailing list >SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanforeest at gmail.com Wed Mar 16 17:47:33 2011 From: vanforeest at gmail.com (nicky van foreest) Date: Wed, 16 Mar 2011 22:47:33 +0100 Subject: [SciPy-User] Large banded matrix least squares solution In-Reply-To: <4D81151C.6080005@cs.wisc.edu> References: <4D81151C.6080005@cs.wisc.edu> Message-ID: Hi, On 16 March 2011 20:53, J. David Lee wrote: > Hello. > > I'm trying to find a least squares solution to a system Ax=b, where A is > a lower diagonal, banded matrix. The entries of A on a given diagonal > are all identical, with about 300 unique values, and A can be quite > large, on the order of 1e6 rows and columns. Perhaps I get you wrong, but it appears to me that a_11 x_1 =b_1 (the system is lower diagonal) fixes x_1 uniquely. The second line of A then fixes x_2 etc. Hence, this system is not overspecified, and a least squares approach does not seems to make sense. Least squares becomes interesting when A has more columns than rows, i.e., is overspecified. NIcky > > scipy.sparse.linalg.lsqr works on smaller examples, up to a few thousand > rows and columns, but not much larger. It is also very time consuming to > construct A, though I'm sure there must be a fast way to do that. > > Given the amount of symmetry in the problem, I suspect there is a fast > way to calculate the result, or perhaps another way to solve the problem > entirely. > > Thank you for your help, > > David Lee > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From pav at iki.fi Wed Mar 16 18:02:31 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 16 Mar 2011 22:02:31 +0000 (UTC) Subject: [SciPy-User] Scipy.spatial.Delaunay References: <39079.3753009804$1300287481@news.gmane.org> <1907631167043638734@unknownmsgid> Message-ID: Wed, 16 Mar 2011 15:40:57 -0400, Daniel Lepage wrote: [clip] > Pauli's example has fewer connections than you expect because it assumes > a 2D triangulation. In the 3D case, you should have six lines of code > connect each of the four vertices to each other: Yes, sorry, I didn't read the original mail carefully enough. For 3D, each tetrahedron has 6 edges, and they can be found as Daniel shows: > edges = [] > for i in xrange(x.nsimplex): > edges.append((x.vertices[i,0], x.vertices[i,1])) > edges.append((x.vertices[i,0], x.vertices[i,2])) > edges.append((x.vertices[i,0], x.vertices[i,3])) > edges.append((x.vertices[i,1], x.vertices[i,2])) > edges.append((x.vertices[i,1], x.vertices[i,3])) > edges.append((x.vertices[i,2], x.vertices[i,3])) If you want each edge to appear only once, the trick of using x.neighbours will not work in 3D, since ndim - 1 != 1. So a brute force algorithm needs to be used, for example detecting duplicates while processing, or by removing them afterwards. (This information is not available from Qhull.) 
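One way to do the "removing them afterwards" step without writing Cython is to normalise and deduplicate the pairs with numpy. A sketch, where `edges` is the list built as above and `x` is the Delaunay object:

    import numpy as np

    edge_arr = np.array(edges)
    edge_arr.sort(axis=1)                  # make (a, b) and (b, a) identical
    npoints = x.points.shape[0]
    keys = edge_arr[:, 0] * npoints + edge_arr[:, 1]   # one integer per undirected edge
    unique_keys, first = np.unique(keys, return_index=True)
    unique_edges = edge_arr[first]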
If you want to write something yourself using Cython, this may give some ideas on how to proceed: https://github.com/pv/scipy-work/blob/enh%2Finterpnd-smooth/scipy/spatial/qhull.pyx#L969 Note that it'll need some adaptation. -- Pauli Virtanen From johnl at cs.wisc.edu Wed Mar 16 18:11:37 2011 From: johnl at cs.wisc.edu (J. David Lee) Date: Wed, 16 Mar 2011 17:11:37 -0500 Subject: [SciPy-User] Large banded matrix least squares solution In-Reply-To: <326211.61952.qm@web113412.mail.gq1.yahoo.com> References: <4D81151C.6080005@cs.wisc.edu> <4D812643.5090905@cs.wisc.edu> <326211.61952.qm@web113412.mail.gq1.yahoo.com> Message-ID: <4D813599.2000001@cs.wisc.edu> Thank you very much for your help. Wiener decomposition sounds like exactly what I want. I'm not very knowledgeable on working in the frequency domain, so it'll take me a while to get where I'm going, but I'll try to post back with my results. Thanks again, David On 03/16/2011 04:46 PM, David Baddeley wrote: > definitely sounds like a deconvolution problem, you could start by > trying something like Weiner filtering > (http://en.wikipedia.org/wiki/Wiener_deconvolution) and go from there. > > If you need to go further, the inverse problems notes at > http://home.comcast.net/~szemengtan/ > are excellent (you probably > want to look at Chapter 3, Regularization Methods for Linear Inverse > Problems, in particular 3.7 which talks about solving large systems). > I've got a python implementation of the Matlab code for Tikhonov > regularised deconvolution given there. > > cheers, > David > > ------------------------------------------------------------------------ > *From:* J. David Lee > *To:* SciPy Users List > *Sent:* Thu, 17 March, 2011 10:06:11 AM > *Subject:* Re: [SciPy-User] Large banded matrix least squares solution > > On 03/16/2011 03:02 PM, Charles R Harris wrote: >> >> >> On Wed, Mar 16, 2011 at 1:53 PM, J. David Lee > > wrote: >> >> Hello. >> >> I'm trying to find a least squares solution to a system Ax=b, >> where A is >> a lower diagonal, banded matrix. The entries of A on a given diagonal >> are all identical, with about 300 unique values, and A can be quite >> large, on the order of 1e6 rows and columns. >> >> >> So this is sort of a convolution? Do you need exact, or will somewhat >> approximate do? I think you can probably do something useful with an fft. > What I have is data from a detector that is passed through a shaping > amplifier that turns voltage steps into pulses. I've measured the > characteristic pulse shape, but now I'm interested to see if I can > move backwards from the shaped data to the detector data. The idea is > that we assume that there is a pulse at every time point and find the > amplitude at each point in time to match our raw data. > > Here is an image of the detector's data (green), and the shaped data > (blue): > > http://mywebspace.wisc.edu/jdlee1/web/detector_and_shaped_data.png > > David > >> scipy.sparse.linalg.lsqr works on smaller examples, up to a few >> thousand >> rows and columns, but not much larger. It is also very time >> consuming to >> construct A, though I'm sure there must be a fast way to do that. >> >> Given the amount of symmetry in the problem, I suspect there is a >> fast >> way to calculate the result, or perhaps another way to solve the >> problem >> entirely. 
>> >> >> Chuck
>> >>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-User at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
>
>
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From D.Richards at mmu.ac.uk  Thu Mar 17 05:20:22 2011
From: D.Richards at mmu.ac.uk (Dan Richards)
Date: Thu, 17 Mar 2011 09:20:22 +0000
Subject: [SciPy-User] Scipy.spatial.Delaunay
In-Reply-To: 
References: <39079.3753009804$1300287481@news.gmane.org> <1907631167043638734@unknownmsgid>
Message-ID: <001801cbe484$8acb0cd0$a0612670$@Richards@mmu.ac.uk>

Thanks Pauli, Dan and Rob, this is excellent!

I will look into writing my own brute force algorithm to remove duplicate
edges/faces as your example.

>https://github.com/pv/scipy-work/blob/enh%2Finterpnd-smooth/scipy/spatial/qhull.pyx#L969

Thanks again, much appreciated.
Dan

"Before acting on this email or opening any attachments you should read the
Manchester Metropolitan University email disclaimer available on its website
http://www.mmu.ac.uk/emaildisclaimer "

From dplepage at gmail.com  Thu Mar 17 08:57:57 2011
From: dplepage at gmail.com (Daniel Lepage)
Date: Thu, 17 Mar 2011 08:57:57 -0400
Subject: [SciPy-User] Scipy.spatial.Delaunay
In-Reply-To: <-383440162960267085@unknownmsgid>
References: <39079.3753009804$1300287481@news.gmane.org> <1907631167043638734@unknownmsgid> <-383440162960267085@unknownmsgid>
Message-ID: 

Python's built-in `set` class removes duplicates automatically (if you
make sure to sort the indices so that each edge always has the same
representation):

edges = set()

def add_edge(v1, v2):
    edges.add((min(v1, v2), max(v1, v2)))

for i in xrange(x.nsimplex):
    add_edge(x.vertices[i,0], x.vertices[i,1])
    add_edge(x.vertices[i,0], x.vertices[i,2])
    add_edge(x.vertices[i,0], x.vertices[i,3])
    add_edge(x.vertices[i,1], x.vertices[i,2])
    add_edge(x.vertices[i,1], x.vertices[i,3])
    add_edge(x.vertices[i,2], x.vertices[i,3])

You can then iterate over edges directly, or call `list(edges)` if you
need them in an ordered list.

-- 
Dan Lepage

On Thu, Mar 17, 2011 at 5:20 AM, Dan Richards wrote:
> Thanks Pauli, Dan and Rob, this is excellent!
>
> I will look into writing my own brute force algorithm to remove duplicate
> edges/faces as your example.
>
>>https://github.com/pv/scipy-work/blob/enh%2Finterpnd-smooth/scipy/spatial/qhull.pyx#L969
>
> Thanks again, much appreciated.
> Dan
>
> "Before acting on this email or opening any attachments you should read the Manchester Metropolitan University email disclaimer available on its website http://www.mmu.ac.uk/emaildisclaimer "
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From kwgoodman at gmail.com  Thu Mar 17 13:36:03 2011
From: kwgoodman at gmail.com (Keith Goodman)
Date: Thu, 17 Mar 2011 10:36:03 -0700
Subject: [SciPy-User] [ANN] Bottleneck 0.4.3 released
Message-ID: 

Bottleneck is a collection of fast NumPy array functions written in
Cython. It contains functions like median, nanmedian, nanargmax,
move_mean, rankdata.

This is a bug fix release.

Bug fixes:

- #11 median and nanmedian modified (partial sort) input array
- #12 nanmedian wrong when odd number of elements with all but last a NaN

Enhancement:

- Lazy import of SciPy (rarely used) speeds Bottleneck import 3x

download ?
http://pypi.python.org/pypi/Bottleneck docs ? http://berkeleyanalytics.com/bottleneck code ? http://github.com/kwgoodman/bottleneck mailing list ? http://groups.google.com/group/bottle-neck mailing list 2 ? http://mail.scipy.org/mailman/listinfo/scipy-user From emanuele at relativita.com Fri Mar 18 06:01:30 2011 From: emanuele at relativita.com (Emanuele Olivetti) Date: Fri, 18 Mar 2011 11:01:30 +0100 Subject: [SciPy-User] efficient way to store and use a 4D redundant matrix In-Reply-To: References: <4D80DA13.2060109@relativita.com> Message-ID: <4D832D7A.3080308@relativita.com> On 03/16/2011 08:31 PM, Daniel Lepage wrote: > [...] > > The sparse matrix formats will only help you if you can rewrite A in > terms of matrices that are mostly 0. > Correct. This is not my case, you are right. > Do you need the results of slicing, reshaping, etc. to also be > similarly compressed? If so, I can't see any way to implement this > without an index array, because once you reshape or slice A you won't > know which cells correspond to which indices in the original A. > I will have a deeper look to a solution with index array. Thanks for pointing it out. > If you're only taking small slices of this and then applying linear > algebra operations to those, you might be better off writing a class > that looks up the relevant values on the fly; you could overload > __getitem__ so that e.g. A[:,1,:,3] would generate the correct float64 > array on the fly and return it. > Unfortunately I am not playing with small slices. So I guess that overloading __getitem__ would be impractical. > However, if the nonredundant part takes only ~4MB, then maybe I don't > understand your layout - for a 100x100x100x100 and 64-bit floats, I > think the nonredundant part should take ((100 choose 4) + ((100 choose > 3) * 3) + ((100 choose 2) * 3) + (100 choose 1)) * 8 bytes = about > 34MB. Was that a math error, or am I misunderstanding the question? > My fault. It is indeed ~34Mb. I missed one order of magnitude when computing (100**4 * 8byte) / 24 . Thanks again, Emanuele From wkerzendorf at googlemail.com Fri Mar 18 09:55:06 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 19 Mar 2011 00:55:06 +1100 Subject: [SciPy-User] Fix for griddata unsorted data Message-ID: <4D83643A.8030506@gmail.com> Dear all, I realized that for the LinearNDInterpolator to work the data needs to be sorted. This is explained in griddata, but unfortunatley is not included in the LinearNDInterpolator description. It would be great if that would be updated. I also think that (as most things in scipy) it should just work. Here's my proposed fix. Before giving the points and values to qhull we do this: sortIDx = np.lexsort(points.transpose()[::-1]) LinearNDInterpolator(points[sortIDx], newValues[sortIDx]) I think that this should work. I really think that Qhull is an amazing addition to scipy. It basically sped up my computations by orders of magnitude. While playing around with Qhull, I stumbled across another gem for numerics: PyMinuit. It is an extremely good optimizer, very easy to use and I was wondering if that would be interesting to include in scipy. 
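To make the two lines of the proposed fix above concrete, a self-contained version with synthetic data might look like this (a sketch only, not a patch against scipy itself):

import numpy as np
from scipy.interpolate import LinearNDInterpolator

points = np.random.rand(200, 2)                    # synthetic scattered points
values = np.hypot(points[:, 0], points[:, 1])

order = np.lexsort(points.T[::-1])                 # primary key x, secondary y
interp = LinearNDInterpolator(points[order], values[order])
print(interp(np.array([[0.5, 0.5]])))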
Cheers Wolfgang From robert.kern at gmail.com Fri Mar 18 10:55:20 2011 From: robert.kern at gmail.com (Robert Kern) Date: Fri, 18 Mar 2011 09:55:20 -0500 Subject: [SciPy-User] Fix for griddata unsorted data In-Reply-To: <4D83643A.8030506@gmail.com> References: <4D83643A.8030506@gmail.com> Message-ID: On Fri, Mar 18, 2011 at 08:55, Wolfgang Kerzendorf wrote: > While playing around with Qhull, I stumbled across another gem for > numerics: PyMinuit. It is an extremely good optimizer, very easy to use > and I was wondering if that would be interesting to include in scipy. Interesting, yes. Possible, no. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From alan.isaac at gmail.com Fri Mar 18 11:44:25 2011 From: alan.isaac at gmail.com (Alan G Isaac) Date: Fri, 18 Mar 2011 11:44:25 -0400 Subject: [SciPy-User] Fix for griddata unsorted data In-Reply-To: References: <4D83643A.8030506@gmail.com> Message-ID: <4D837DD9.4060503@gmail.com> > On Fri, Mar 18, 2011 at 08:55, Wolfgang Kerzendorf > wrote: > >> While playing around with Qhull, I stumbled across another gem for >> numerics: PyMinuit. It is an extremely good optimizer, very easy to use >> and I was wondering if that would be interesting to include in scipy. > On 3/18/2011 10:55 AM, Robert Kern wrote: > Interesting, yes. Possible, no. Well ... probably impossible. The MINUIT development team is quite small and still includes the original author. It is perhaps (??) possible that they might respond to a request to release MINUIT as BSD instead of GPL. Sometimes code licenses are picked reflexively rather than thoughtfully ... If you really want this, Wolfgang, in my opinion it would not hurt to politely inquire. http://lcgapp.cern.ch/project/cls/work-packages/mathlibs/minuit/contact/contact.html fwiw, Alan Isaac From mttate at usgs.gov Fri Mar 18 13:28:51 2011 From: mttate at usgs.gov (Michael T Tate) Date: Fri, 18 Mar 2011 12:28:51 -0500 Subject: [SciPy-User] Calculating avgerage values on 20 minute time frequency Message-ID: I have a time series date set that contains a date/time field and data fields (sonic temperature, WS-U, WS-V, WS-W). The data are recorded at 10hz. I would like to calculate an average for each of the data fields on a 20min frequency so it can be matched up with data collected by another instrument. It is pretty straightforward to calculate hourly averages using convert from scikits.timeseries. Does anyone have any ideas on how I would calculate 20min averages for this data? Thanks in advance Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From lesserwhirls at gmail.com Fri Mar 18 14:23:06 2011 From: lesserwhirls at gmail.com (Sean Arms) Date: Fri, 18 Mar 2011 13:23:06 -0500 Subject: [SciPy-User] Calculating avgerage values on 20 minute time frequency In-Reply-To: References: Message-ID: On Mar 18, 2011, at 12:28 PM, Michael T Tate wrote: > > I have a time series date set that contains a date/time field and data fields (sonic temperature, WS-U, WS-V, WS-W). The data are recorded at 10hz. I would like to calculate an average for each of the data fields on a 20min frequency so it can be matched up with data collected by another instrument. > > It is pretty straightforward to calculate hourly averages using convert from scikits.timeseries. > > Does anyone have any ideas on how I would calculate 20min averages for this data? 
> What I do with my sonic data is reshape the data array such that the new time dimension is of desired length (in your case, 10*60*20 =12,000) and then take the average over the appropriate axis. Sean > > Thanks in advance > > Mike > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From vanleeuwen.martin at gmail.com Fri Mar 18 14:25:20 2011 From: vanleeuwen.martin at gmail.com (Martin van Leeuwen) Date: Fri, 18 Mar 2011 11:25:20 -0700 Subject: [SciPy-User] Calculating avgerage values on 20 minute time frequency In-Reply-To: References: Message-ID: Hi Mike, date2num and datestr2num from pylab let you treat datetimes as floats which may make things more flexible. If you have a lot of data (e.g. from a weather network) using a database and accessing these data using psycopg2 maybe also be an option as SQL also allows averaging timeseries. def smooth(dates, signal, halfWindowWidth): # signal is a one dimensional array # dates is a one dimensional array of datatype float with times in units days (e.g. from datestr2num) # elements in dates correspond to those in signal # halfWindowWidth (int or float) indicates half the window width in seconds # datestr2num returns a float with units days, thus halfWindowWidth should be divided by the number of seconds in a day to get the same units as datestr2num uses d = float(halfWindowWidth)/float(3600*24) smoothed = scipy.zeros((scipy.size(signal) )) i=0 for date in dates: t_num_start = date - d t_num_end = date + d s = scipy.argmin(abs(dates-t_num_start)) #get closest time to start of time window e = scipy.argmin(abs(dates-t_num_end)) #get closest time to end of time window av = scipy.average(signal[s:e]) smoothed[i] = av i+=1 return smoothed Hope that helps a bit. Notice the window attracts the closest datetime in your series, not necessarily what's within the window. Martin 2011/3/18 Michael T Tate : > > I have a time series date set that contains a date/time field and data > fields (sonic temperature, WS-U, WS-V, WS-W). The data are recorded at 10hz. > I would like to calculate an average for each of the data fields on a 20min > frequency so it can be matched up with data collected by another instrument. > > It is pretty straightforward to calculate hourly averages using convert from > scikits.timeseries. > > Does anyone have any ideas on how I would calculate 20min averages for this > data? > > > Thanks in advance > > Mike > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From dplepage at gmail.com Fri Mar 18 14:43:05 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Fri, 18 Mar 2011 14:43:05 -0400 Subject: [SciPy-User] Calculating avgerage values on 20 minute time frequency In-Reply-To: References: Message-ID: If your measurements are spaced (nearly) evenly, won't the window always be the same size (e.g. 12000 entries)? If so, the averages are the discrete convolution of your data with a length-12000 signal of all 1/12000s; you can do this efficiently with a fast fourier transform. -- Dan On Fri, Mar 18, 2011 at 2:25 PM, Martin van Leeuwen wrote: > Hi Mike, > > date2num and datestr2num from pylab let you treat datetimes as floats > which may make things more flexible. If you have a lot of data (e.g. 
> from a weather network) using a database and accessing these data > using psycopg2 maybe also be an option as SQL also allows averaging > timeseries. > > > def smooth(dates, signal, halfWindowWidth): > > ? ? ? ?# signal is a one dimensional array > ? ? ? ?# dates is a one dimensional array of datatype float with times in > units days (e.g. from datestr2num) > ? ? ? ?# elements in dates correspond to those in signal > ? ? ? ?# halfWindowWidth (int or float) indicates half the window width in seconds > > ? ? ? ?# datestr2num returns a float with units days, thus halfWindowWidth > should be divided by the number of seconds in a day to get the same > units as datestr2num uses > ? ? ? ?d = float(halfWindowWidth)/float(3600*24) > > ? ? ? ?smoothed = scipy.zeros((scipy.size(signal) )) > ? ? ? ?i=0 > > ? ? ? ?for date in dates: > ? ? ? ? ? ? ? ?t_num_start = date - d > ? ? ? ? ? ? ? ?t_num_end = date + d > > ? ? ? ? ? ? ? ?s = scipy.argmin(abs(dates-t_num_start)) ? ? ? ?#get closest time to start > of time window > ? ? ? ? ? ? ? ?e = scipy.argmin(abs(dates-t_num_end)) ?#get closest time to end of time window > > ? ? ? ? ? ? ? ?av = scipy.average(signal[s:e]) > > ? ? ? ? ? ? ? ?smoothed[i] = av > ? ? ? ? ? ? ? ?i+=1 > > ? ? ? ?return smoothed > > > > Hope that helps a bit. Notice the window attracts the closest datetime > in your series, not necessarily what's within the window. > > Martin > > > > 2011/3/18 Michael T Tate : >> >> I have a time series date set that contains a date/time field and data >> fields (sonic temperature, WS-U, WS-V, WS-W). The data are recorded at 10hz. >> I would like to calculate an average for each of the data fields on a 20min >> frequency so it can be matched up with data collected by another instrument. >> >> It is pretty straightforward to calculate hourly averages using convert from >> scikits.timeseries. >> >> Does anyone have any ideas on how I would calculate 20min averages for this >> data? >> >> >> Thanks in advance >> >> Mike >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> >> > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From nwagner at iam.uni-stuttgart.de Sat Mar 19 07:59:13 2011 From: nwagner at iam.uni-stuttgart.de (Nils Wagner) Date: Sat, 19 Mar 2011 12:59:13 +0100 Subject: [SciPy-User] make html in scipy/doc failed Message-ID: Hi all, I cannot build the documentation of scipy. It works well for numpy. 
make html mkdir -p build touch build/generate-stamp mkdir -p build/html build/doctrees LANG=C sphinx-build -b html -d build/doctrees source build/html /home/nwagner/local/bin/sphinx-build:5: UserWarning: Module pkg_resources was already imported from /home/nwagner/local/lib64/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/pkg_resources.py, but /home/nwagner/local/lib64/python2.6/site-packages/distribute-0.6.4-py2.6.egg is being added to sys.path from pkg_resources import load_entry_point /home/nwagner/local/bin/sphinx-build:5: UserWarning: Module site was already imported from /usr/lib64/python2.6/site.pyc, but /home/nwagner/local/lib64/python2.6/site-packages/distribute-0.6.4-py2.6.egg is being added to sys.path from pkg_resources import load_entry_point Running Sphinx v1.0.1 Scipy (VERSION 0.10.dev7179) (RELEASE 0.10.0.dev7179) Extension error: Could not import extension numpydoc (exception: No module named numpydoc) make: *** [html] Fehler 1 Nils From ralf.gommers at googlemail.com Sat Mar 19 09:43:46 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Sat, 19 Mar 2011 21:43:46 +0800 Subject: [SciPy-User] make html in scipy/doc failed In-Reply-To: References: Message-ID: On Sat, Mar 19, 2011 at 7:59 PM, Nils Wagner wrote: > Hi all, > > I cannot build the documentation of scipy. It works well > for numpy. > As the error message says says, install numpydoc (it's on pypi). Or copy everything under numpy/doc/sphinxext to scipy/doc/sphinxext/ Use sphinx 1.0.4 by the way, 1.0.7 is broken. Cheers, Ralf -------------- next part -------------- An HTML attachment was scrubbed... URL: From deshpande.jaidev at gmail.com Sat Mar 19 14:57:19 2011 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Sun, 20 Mar 2011 00:27:19 +0530 Subject: [SciPy-User] cubic spline interpolation with scipy.interpolate.interp1d Message-ID: Dear All, The scipy.interpolate.interp1d class generates an interpolation object which I find to be somewhat sluggish and also a little unnecessary. Are there alternatives that will simply produce an array? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Mar 19 16:16:15 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 19 Mar 2011 20:16:15 +0000 (UTC) Subject: [SciPy-User] cubic spline interpolation with scipy.interpolate.interp1d References: Message-ID: On Sun, 20 Mar 2011 00:27:19 +0530, Jaidev Deshpande wrote: > The scipy.interpolate.interp1d class generates an interpolation object > which I find to be somewhat sluggish and also a little unnecessary. > > Are there alternatives that will simply produce an array? Computing a spline interpolant requires solving for the spline coefficients. This step takes some time, and storing the coefficients somewhere cannot be avoided, because spline interpolation is global. Whether the spline coefficients are stored for later use or thrown away immediately will not likely affect the performance much. You can try using the low-level functions `splrep` and `splev`, but this will probably not be much faster. -- Pauli Virtanen From deshpande.jaidev at gmail.com Sat Mar 19 16:21:00 2011 From: deshpande.jaidev at gmail.com (Jaidev Deshpande) Date: Sun, 20 Mar 2011 01:51:00 +0530 Subject: [SciPy-User] cubic spline interpolation with scipy.interpolate.interp1d In-Reply-To: References: Message-ID: Hi, What do you mean by the interpolation being global? We only calculate cubic coefficients for splines between the required nodes, right? 
Also, is there a way to check if 'interp1d' does more than just store the cubic coefficients? Does it produce some other unnecessary data? On Sun, Mar 20, 2011 at 1:46 AM, Pauli Virtanen wrote: > On Sun, 20 Mar 2011 00:27:19 +0530, Jaidev Deshpande wrote: > > The scipy.interpolate.interp1d class generates an interpolation object > > which I find to be somewhat sluggish and also a little unnecessary. > > > > Are there alternatives that will simply produce an array? > > Computing a spline interpolant requires solving for the spline > coefficients. This step takes some time, and storing the coefficients > somewhere cannot be avoided, because spline interpolation is global. > > Whether the spline coefficients are stored for later use or thrown away > immediately will not likely affect the performance much. You can try > using the low-level functions `splrep` and `splev`, but this will > probably not be much faster. > > -- > Pauli Virtanen > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Sat Mar 19 21:37:10 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 20 Mar 2011 01:37:10 +0000 (UTC) Subject: [SciPy-User] cubic spline interpolation with scipy.interpolate.interp1d References: Message-ID: On Sun, 20 Mar 2011 01:51:00 +0530, Jaidev Deshpande wrote: > What do you mean by the interpolation being global? We only calculate > cubic coefficients for splines between the required nodes, right? No. Typical spline interpolation looks at all the data points. Check Wikipedia or a book on the subject. > Also, > is there a way to check if 'interp1d' does more than just store the > cubic coefficients? The source code is available. But it doesn't do anything unnecessary. > Does it produce some other unnecessary data? No. From coolhead.pranay at gmail.com Sun Mar 20 03:06:53 2011 From: coolhead.pranay at gmail.com (coolhead.pranay at gmail.com) Date: Sun, 20 Mar 2011 03:06:53 -0400 Subject: [SciPy-User] Normalizing a sparse matrix Message-ID: Hi, I have a sparse matrix with nearly (300*10000) entries constructed out of 14000*14000 matrix...In each iteration after performing some operations on the sparse matrix(like multiply and dot) I have to divide each row of the corresponding dense matrix with the sum of its elements... Since sparse matrix format doesn't allow all the required matrix operation(divide) I tried to convert it to a dense format and then divide by the sum. But this raises MemoryError exception because 14000*14000 matrix doesn't fit memory.. Can someone tell me how to normalize a sparse matrix ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From warren.weckesser at enthought.com Sun Mar 20 06:37:21 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Sun, 20 Mar 2011 05:37:21 -0500 Subject: [SciPy-User] Normalizing a sparse matrix In-Reply-To: References: Message-ID: On Sun, Mar 20, 2011 at 2:06 AM, coolhead.pranay at gmail.com < coolhead.pranay at gmail.com> wrote: > Hi, > > I have a sparse matrix with nearly (300*10000) entries constructed out of > 14000*14000 matrix...In each iteration after performing some operations on > the sparse matrix(like multiply and dot) I have to divide each row of the > corresponding dense matrix with the sum of its elements... 
> > Since sparse matrix format doesn't allow all the required matrix > operation(divide) I tried to convert it to a dense format and then divide by > the sum. But this raises MemoryError exception because 14000*14000 matrix > doesn't fit memory.. > > Can someone tell me how to normalize a sparse matrix ? > > This will normalize the rows of R, a sparse matrix in CSR format: ----- # Normalize the rows of R. row_sums = np.array(R.sum(axis=1))[:,0] # OR: row_sums = R.dot(np.ones(R.shape[1])) row_indices, col_indices = R.nonzero() R.data /= row_sums[row_indices] ----- The attached code provides an example of that snippet in use. Warren -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sparse_normalize_rows_example.py Type: application/octet-stream Size: 711 bytes Desc: not available URL: From hector1618 at gmail.com Mon Mar 21 05:41:54 2011 From: hector1618 at gmail.com (Hector) Date: Mon, 21 Mar 2011 15:11:54 +0530 Subject: [SciPy-User] [scipy] GSoC 2011 Message-ID: Hello everyone, List of organizations participating in GSoC 2011 has been out for 3 days and as everyone expected, PSF is one of it. I was waiting for SciPy to pop up under PSF umbrella, it nothing has been updated there since then. Is SciPy planning to participate in GSoC this year? -- -Regards Hector Whenever you think you can or you can't, in either way you are right. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brockp at umich.edu Mon Mar 21 13:21:48 2011 From: brockp at umich.edu (Brock Palen) Date: Mon, 21 Mar 2011 13:21:48 -0400 Subject: [SciPy-User] SciPy on HPC focus podcast Message-ID: We host an HPC focused podcast (www.rce-cast.com) We recently had NumPy on the show: http://www.rce-cast.com/Podcast/rce-48-numpy.html Now we would like to dig into the details of SciPy. It only takes about an hour over the phone and is friendly and informative. We would like a SciPy dev or two to be on the show, feel free to contact me off list. Thank you Brock Palen www.umich.edu/~brockp Center for Advanced Computing brockp at umich.edu (734)936-1985 From david at silveregg.co.jp Mon Mar 21 22:04:12 2011 From: david at silveregg.co.jp (David) Date: Tue, 22 Mar 2011 11:04:12 +0900 Subject: [SciPy-User] Normalizing a sparse matrix In-Reply-To: References: Message-ID: <4D88039C.3080004@silveregg.co.jp> On 03/20/2011 04:06 PM, coolhead.pranay at gmail.com wrote: > Hi, > > I have a sparse matrix with nearly (300*10000) entries constructed out > of 14000*14000 matrix...In each iteration after performing some > operations on the sparse matrix(like multiply and dot) I have to divide > each row of the corresponding dense matrix with the sum of its elements... It is not well documented, not really part of the public API and too low-level, but you can use scipy.sparse.sparsetools. As it is implemented in C++, it should be both cpu and memory efficient: I am using the following function to normalize each row of a CSR matrix: def normalize_pairs(pairs): """Normalized rows of the pairs matrix so that sum(row) == 1 (or 0 for empty rows). 
Note ---- Does the modificiation in-place.""" factor = pairs.sum(axis=1) nnzeros = np.where(factor > 0) factor[nnzeros] = 1 / factor[nnzeros] factor = np.array(factor)[0] if not pairs.format == "csr": raise ValueError("csr only") csr_scale_rows(pairs.shape[0], pairs.shape[1], pairs.indptr, pairs.indices, pairs.data, factor) return pairs I don't advise using this function if reliability is a concern, but it works well for matrices bigger than the ones you are mentioning, cheers, David From mwtoews at gmail.com Tue Mar 22 18:56:43 2011 From: mwtoews at gmail.com (Mike Toews) Date: Wed, 23 Mar 2011 11:56:43 +1300 Subject: [SciPy-User] 2D interpolate issues Message-ID: I have a few questions regarding interpolate.interp2d, as I would like to do some bilinear interpolation on 2D rasters. I'll illustrate my issues with an example: import numpy from scipy import interpolate x = [100, 110, 120, 130, 140] y = [200, 210, 229, 230] z = [[ 1, 2, 3, 4, 5], [12,13,14,15,16], [23,24,25,26,27], [34,35,36,37,38]] First, why do I get an error with the following? >>> f1 = interpolate.interp2d(x, y, z, kind='linear', bounds_error=True) Warning: No more knots can be added because the additional knot would coincide with an old one. Probably cause: s too small or too large a weight to an inaccurate data point. (fp>s) kx,ky=1,1 nx,ny=8,4 m=20 fp=263.568959 s=0.000000 I do not get an error if I swap x, y: >>> f2 = interpolate.interp2d(y, x, z, kind='linear', bounds_error=True) but this is incorrect, as my z list of lists has 5 columns or x-values and 4 rows or y-values. Do I need to transpose my z? za = numpy.array(z).T f3 = interpolate.interp2d(x, y, za, kind='linear', bounds_error=True) The example in http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html does not use a transposed array. From the documented example, we can see that intuitively len(x) = columns and len(y) = rows in z Secondly, why does bounds_error do nothing? >>> f3(-100,-100) array([ 1.]) >>> f3(1000,1000) array([ 38.]) I've supplied x and y values far outside the range, and I do not get an error. Similarly, setting bounds_error=True, fill_value is not returned when x and y are out of bounds, as documented. Are these user errors or bugs? Thanks, -Mike From mwtoews at gmail.com Tue Mar 22 19:00:50 2011 From: mwtoews at gmail.com (Mike Toews) Date: Wed, 23 Mar 2011 12:00:50 +1300 Subject: [SciPy-User] 2D interpolate issues In-Reply-To: References: Message-ID: Opps, I found a typo: y = [200, 210, 229, 230] it should be y = [200, 210, 220, 230] which fixes my first question, however I still would like to know about bounds_error. Thanks, -Mike On 23 March 2011 11:56, Mike Toews wrote: > I have a few questions regarding interpolate.interp2d, as I would like > to do some bilinear interpolation on 2D rasters. I'll illustrate my > issues with an example: > > import numpy > from scipy import interpolate > x = [100, 110, 120, 130, 140] > y = [200, 210, 229, 230] > z = [[ 1, 2, 3, 4, 5], > ? ? [12,13,14,15,16], > ? ? [23,24,25,26,27], > ? ? [34,35,36,37,38]] > > First, why do I get an error with the following? >>>> f1 = interpolate.interp2d(x, y, z, kind='linear', bounds_error=True) > Warning: ? ? No more knots can be added because the additional knot > would coincide > ? ?with an old one. Probably cause: s too small or too large a weight > ? ?to an inaccurate data point. (fp>s) > ? ? ? 
?kx,ky=1,1 nx,ny=8,4 m=20 fp=263.568959 s=0.000000 > > I do not get an error if I swap x, y: >>>> f2 = interpolate.interp2d(y, x, z, kind='linear', bounds_error=True) > but this is incorrect, as my z list of lists has 5 columns or x-values > and 4 rows or y-values. > > Do I need to transpose my z? > za = numpy.array(z).T > f3 = interpolate.interp2d(x, y, za, kind='linear', bounds_error=True) > > The example in http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html > does not use a transposed array. From the documented example, we can > see that intuitively len(x) = columns and len(y) = rows in z > > Secondly, why does bounds_error do nothing? >>>> f3(-100,-100) > array([ 1.]) >>>> f3(1000,1000) > array([ 38.]) > > I've supplied x and y values far outside the range, and I do not get > an error. Similarly, setting bounds_error=True, fill_value is not > returned when x and y are out of bounds, as documented. > > Are these user errors or bugs? > > Thanks, > -Mike > From mr.peter.baek at gmail.com Wed Mar 23 07:10:02 2011 From: mr.peter.baek at gmail.com (Peter Baek) Date: Wed, 23 Mar 2011 12:10:02 +0100 Subject: [SciPy-User] Problem with scipy.interpolate.RectBivariateSpline In-Reply-To: References: Message-ID: Hi, I find it strange that scipy.interpolate.RectBivariateSpline cannot evaluate a random vector. When i evaluate an ordered vector using e.g. linspace it works fine, but when i try a random vector it crashes. Please help me find a way to evaluate an unordered vector. Thanks, Peter. Here is a test bench code that demonstrates the problem: --------------BEGIN CODE------------------------------------------ from pylab import * from scipy import interpolate """ first create a 2D space to do interpolation within """ Nx=11 Ny=6 f=zeros((Ny,Nx)) x=linspace(0,Nx-1,Nx) y=linspace(0,Ny-1,Ny) for i in arange(Nx): ? ?for j in arange(Ny): ? ? ? ?f[j,i]=x[i]+y[j]**2 matx=kron(ones((Ny,1)),x) maty=kron(ones((1,Nx)),y.reshape(-1,1)) print f print matx print maty figure(1) c=contour(matx,maty,f) clabel(c) title('surface which i want to interpolate in') """ Now create an interpolation... """ finter_x=interpolate.RectBivariateSpline(y,x,f,kx=3,ky=3) """ and evaluate using a linspace vector... """ xi1=[5] yi1=linspace(0,5,100) print yi1 figure(2) plot(yi1,finter_x(yi1,xi1)) title('slice at x=5 ordered vector') """ and evaluate using a random vector... """ xi2=[5] yi2=rand(100)*5 #yi2=linspace(0,5,100) print yi2 figure(3) plot(yi1,finter_x(yi2,xi2)) title('slice at x=5 random vector') show() --------------------------- END CODE ----------------------------- From pav at iki.fi Wed Mar 23 08:24:50 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 23 Mar 2011 12:24:50 +0000 (UTC) Subject: [SciPy-User] Problem with scipy.interpolate.RectBivariateSpline References: Message-ID: Wed, 23 Mar 2011 12:10:02 +0100, Peter Baek wrote: > I find it strange that scipy.interpolate.RectBivariateSpline cannot > evaluate a random vector. When i evaluate an ordered vector using e.g. > linspace it works fine, but when i try a random vector it crashes. The algorithm is made for regridding, and handles only sorted grids. > Please help me find a way to evaluate an unordered vector. Sort the entries first. Or, if you want to evaluate the function only at the given points, rather than on a grid, use finter_x.ev(yi2, xi2) The two coordinate arrays need either to be of the same shape, e.g. 
via yi2, xi2 = np.broadcast_arrays(yi2,xi2) -- Pauli Virtanen From amenity at enthought.com Wed Mar 23 08:56:17 2011 From: amenity at enthought.com (Amenity Applewhite) Date: Wed, 23 Mar 2011 07:56:17 -0500 Subject: [SciPy-User] SciPy 2011 Call for Papers Message-ID: Hello, SciPy 2011 , the 10th Python in Science conference, will be held July 11 - 16, 2011, in Austin, TX. At this conference, novel applications and breakthroughs made in the pursuit of science using Python are presented. Attended by leading figures from both academia and industry, it is an excellent opportunity to experience the cutting edge of scientific software development. The conference is preceded by two days of tutorials, during which community experts provide training on several scientific Python packages. *We'd like to invite you to consider presenting at SciPy 2011.* The list of topics that are appropriate for the conference includes (but is not limited to): * new Python libraries for science and engineering; * applications of Python to the solution of scientific or computational problems; * high performance, parallel and GPU computing with Python; * use of Python in science education. *Specialized Tracks* This year we also have two specialized tracks. They will be run concurrent to the main conference. *Python in Data Science Chair: Peter Wang, Streamitive, Inc.* This track focuses on the advantages and challenges of applying Python in the emerging field of "data science". This includes a breadth of technologies, from wrangling realtime data streams from the social web, to machine learning and semantic analysis, to workflow and repository management for large datasets. *Python and Core Technologies Chair: Anthony Scopatz, Enthought, Inc.* In an effort to broaden the scope of SciPy and to engage the larger community of software developers, we are pleased to introduce the _Python & Core Technologies_ track. Talks will cover subjects that are not directly related to science and engineering, yet nonetheless affect scientific computing. Proposals on the Python language, visualization toolkits, web frameworks, education, and other topics are appropriate for this session. *Talk/Paper Submission* We invite you to take part by submitting a talk abstract on the conference website at: http://conference.scipy.org/scipy2011/papers.php Papers are included in the peer-reviewed conference proceedings, to be published online. *Important dates for authors:* Friday, April 15: Tutorial proposals due (remember: stipends will be provided for Tutorial instructors) http://conference.scipy.org/scipy2011/tutorials.php Sunday, April 24: Paper abstracts due Sunday, May 8: Student sponsorship request due http://conference.scipy.org/scipy2011/student.php Tuesday, May 10: Accepted talks announced Monday, May 16: Student sponsorships announced Monday, May 23: Early Registration ends Sunday, June 20: Papers due Monday-Tuesday, July 11 - 12: Tutorials Wednesday-Thursday, July 13 - July 14: Conference Friday-Saturday, July 15 - July 16: Sprints The SciPy 2011 Team @SciPy2011 http://twitter.com/SciPy2011 _________________________ Amenity Applewhite Enthought, Inc. Scientific Computing Solutions -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralf.gommers at googlemail.com Wed Mar 23 09:29:44 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 23 Mar 2011 14:29:44 +0100 Subject: [SciPy-User] [scipy] GSoC 2011 In-Reply-To: References: Message-ID: On Mon, Mar 21, 2011 at 10:41 AM, Hector wrote: > Hello everyone, > > List of organizations participating in GSoC 2011 has been out for 3 days and > as everyone expected, PSF is one of it. > I was waiting for SciPy to pop up under PSF umbrella, it nothing has been > updated there since then. > Is SciPy planning to participate in GSoC this year? I certainly hope so. Jarrod knows what is (or isn't) going on, hope he lets us know soon. Did you have something in mind to work on? If so, it can't hurt to share your idea already. Cheers, Ralf From ghuth at hotmail.fr Wed Mar 23 10:59:07 2011 From: ghuth at hotmail.fr (=?iso-8859-1?B?R+lyYWxkaW5lIGh1dGg=?=) Date: Wed, 23 Mar 2011 15:59:07 +0100 Subject: [SciPy-User] problem with scipy.optimize Message-ID: Hi all, I'm new to Python and I have some difficulties in using scipy optimization functions. I would like to optimize a function f, and the calcul of this function requires the use of the hypergeometric and beta function. Scipy has his own hypergeometric and beta function (names hyp2f1 and beta) but, with my arguments, the calcul of for example hyp2f1(-400,9.0972,-788.90,1.0) returns "nan", while the mpmath hypergeometric function (also names hyp2f1) returns a number. (It's the same with the beta function) So I would like to use mpmath hyp2f1 and beta functions in the calcul of the function f but the problem is that the mpf type returned by mpmath function seems not to be compatible with scipy optimization function. example with the use of fmin_cobyla function: >>> g=array([0.5]) >>> fmin_cobyla(f, g, [constr1, constr2], rhoend=1e-7) capi_return is NULL Call-back cb_calcfc_in__cobyla__user__routines failed. Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.6/dist-packages/scipy/optimize/cobyla.py", line 96, in fmin_cobyla iprint=iprint, maxfun=maxfun) File "/usr/lib/python2.6/dist-packages/scipy/optimize/cobyla.py", line 88, in calcfc f = func(x, *args) File "", line 13, in pnn File "/usr/lib/pymodules/python2.6/mpmath/functions/hypergeometric.py", line 249, in hyp2f1 return ctx.hyper([a,b],[c],z,**kwargs) File "/usr/lib/pymodules/python2.6/mpmath/functions/hypergeometric.py", line 198, in hyper z = ctx.convert(z) File "/usr/lib/pymodules/python2.6/mpmath/ctx_mp_python.py", line 654, in convert return ctx._convert_fallback(x, strings) File "/usr/lib/pymodules/python2.6/mpmath/ctx_mp.py", line 544, in _convert_fallback raise TypeError("cannot create mpf from " + repr(x)) TypeError: cannot create mpf from array([ 1.]) What is the solution? Thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From pav at iki.fi Wed Mar 23 11:19:55 2011 From: pav at iki.fi (Pauli Virtanen) Date: Wed, 23 Mar 2011 15:19:55 +0000 (UTC) Subject: [SciPy-User] problem with scipy.optimize References: Message-ID: Wed, 23 Mar 2011 15:59:07 +0100, G?raldine huth wrote: > I would like to optimize a function f, and the calcul of this function > requires the use of the hypergeometric and beta function. > > Scipy has his own hypergeometric and beta function (names hyp2f1 and > beta) but, with my arguments, the calcul of for example > hyp2f1(-400,9.0972,-788.90,1.0) returns "nan", while the mpmath > hypergeometric function (also names hyp2f1) returns a number. 
(It's the > same with the beta function) Two options: (i) upgrade Scipy to >= 0.8.0 in which several issues in hyp2f1 were fixed, or (ii) use float(mpmath.hyp2f1(float(a), float(b), float(c), float(z))) to force ordinary floating point numbers in and out from the mpmath function. From hector1618 at gmail.com Wed Mar 23 11:29:32 2011 From: hector1618 at gmail.com (Hector) Date: Wed, 23 Mar 2011 20:59:32 +0530 Subject: [SciPy-User] [scipy] GSoC 2011 In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 6:59 PM, Ralf Gommers wrote: > On Mon, Mar 21, 2011 at 10:41 AM, Hector wrote: > > Hello everyone, > > > > List of organizations participating in GSoC 2011 has been out for 3 days > and > > as everyone expected, PSF is one of it. > > I was waiting for SciPy to pop up under PSF umbrella, it nothing has been > > updated there since then. > > Is SciPy planning to participate in GSoC this year? > > I certainly hope so. Jarrod knows what is (or isn't) going on, hope he > lets us know soon. > > I was really waiting for the reply of this mail or some update on python site for GSoC 2011. > Did you have something in mind to work on? If so, it can't hurt to > share your idea already. > > Being a mathematics students, I did courses on some of the advanced topics and wrote code for the well known algorithms in that field. The topics ( in the descending order of number of codes written ) are - 1) Numerical Analysis 2) Operational Research 3) Abstract Algebra 4) Graph theory ( Cliques) Unfortunately I was not aware of Python and FOSS at that time and wrote all of the programs in MatLab (except Cliques). I want to contribute these code and enhance the tools SciPy contains. And GSoC 2011 will give me a structured platform to do that. I would be very happy to work on these if someone is will to mentor me. My works can be seen at - https://github.com/hector1618 > Cheers, > Ralf > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > Thank you for your response and I would love to hear from community. -- -Regards Hector Whenever you think you can or you can't, in either way you are right. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vineethrakesh at gmail.com Wed Mar 23 11:32:39 2011 From: vineethrakesh at gmail.com (Vineeth Mohan) Date: Wed, 23 Mar 2011 11:32:39 -0400 Subject: [SciPy-User] help with weibull distribution Message-ID: <4D8A1297.6050301@gmail.com> Hello, I am trying to generate weibull distributions for the shape and scale parameters that I have with me. I am looking at the numpy random module. The weibull distribution defined here takes just the shape parameter and not the scale parameter. Can any one suggest me what module to use if any other module is available. Thank You Vin From kmichael.aye at gmail.com Wed Mar 23 11:33:46 2011 From: kmichael.aye at gmail.com (K.-Michael Aye) Date: Wed, 23 Mar 2011 16:33:46 +0100 Subject: [SciPy-User] ndimage docs Message-ID: Just wanted to let peeps know that there is a point in the ndimage documentation that seems to use an old arange implementation, at least it doesn't work currenty like stated. Page: http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html Several examples there use something like this: array = arange(12, shape=(4,3), type = Float64) and arange does not support the shape keyword. Maybe should be updated to avoid confusion/frustration. 
Best regards, Michael From ralf.gommers at googlemail.com Wed Mar 23 11:39:58 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 23 Mar 2011 16:39:58 +0100 Subject: [SciPy-User] help with weibull distribution In-Reply-To: <4D8A1297.6050301@gmail.com> References: <4D8A1297.6050301@gmail.com> Message-ID: On Wed, Mar 23, 2011 at 4:32 PM, Vineeth Mohan wrote: > > Hello, > > I am trying to generate weibull distributions for the shape and scale > parameters that I have with me. I am looking at the numpy random module. > The weibull distribution defined here takes just the shape parameter and > not the scale parameter. Can any one suggest me what module to use if > any other module is available. >From the weibull docstring: The more common 2-parameter Weibull, including a scale parameter :math:`\lambda` is just :math:`X = \lambda(-ln(U))^{1/a}`. So just multiply the returned values from numpy.random.weibull by your scale parameter and you're good to go. Cheers, Ralf From josef.pktd at gmail.com Wed Mar 23 11:40:25 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Wed, 23 Mar 2011 11:40:25 -0400 Subject: [SciPy-User] help with weibull distribution In-Reply-To: <4D8A1297.6050301@gmail.com> References: <4D8A1297.6050301@gmail.com> Message-ID: On Wed, Mar 23, 2011 at 11:32 AM, Vineeth Mohan wrote: > > Hello, > > I am trying to generate weibull distributions for the shape and scale > parameters that I have with me. I am looking at the numpy random module. > The weibull distribution defined here takes just the shape parameter and > not the scale parameter. Can any one suggest me what module to use if > any other module is available. In general you can just rescale the random variables, for example >>> scale = 2 >>> scale*numpy.random.weibull(5, size=10) Josef > > Thank You > Vin > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From ralf.gommers at googlemail.com Wed Mar 23 12:07:24 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Wed, 23 Mar 2011 17:07:24 +0100 Subject: [SciPy-User] ANN: Numpy 1.6.0 beta 1 Message-ID: Hi, I am pleased to announce the availability of the first beta of NumPy 1.6.0. Due to the extensive changes in the Numpy core for this release, the beta testing phase will last at least one month. Please test this beta and report any problems on the Numpy mailing list. Sources and binaries can be found at: http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/ For (preliminary) release notes see below. Enjoy, Ralf ========================= NumPy 1.6.0 Release Notes ========================= This release includes several new features as well as numerous bug fixes and improved documentation. It is backward compatible with the 1.5.0 release, and supports Python 2.4 - 2.7 and 3.1 - 3.2. Highlights ========== * Re-introduction of datetime dtype support to deal with dates in arrays. * A new 16-bit floating point type. * A new iterator, which improves performance of many functions. New features ============ New 16-bit floating point type ------------------------------ This release adds support for the IEEE 754-2008 binary16 format, available as the data type ``numpy.half``. Within Python, the type behaves similarly to `float` or `double`, and C extensions can add support for it with the exposed half-float API. 
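For example, from Python (a quick check one could run against a 1.6.0 build; ``numpy.half`` is exposed as ``numpy.float16``)::

    import numpy as np

    a = np.array([0.5, 1.5, 65504.0], dtype=np.half)
    print(a.dtype.itemsize)        # 2 bytes per element
    print(np.finfo(np.half).max)   # 65504.0, the largest finite binary16 value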
New iterator ------------ A new iterator has been added, replacing the functionality of the existing iterator and multi-iterator with a single object and API. This iterator works well with general memory layouts different from C or Fortran contiguous, and handles both standard NumPy and customized broadcasting. The buffering, automatic data type conversion, and optional output parameters, offered by ufuncs but difficult to replicate elsewhere, are now exposed by this iterator. Legendre, Laguerre, Hermite, HermiteE polynomials in ``numpy.polynomial`` ------------------------------------------------------------------------- Extend the number of polynomials available in the polynomial package. In addition, a new ``window`` attribute has been added to the classes in order to specify the range the ``domain`` maps to. This is mostly useful for the Laguerre, Hermite, and HermiteE polynomials whose natural domains are infinite and provides a more intuitive way to get the correct mapping of values without playing unnatural tricks with the domain. Fortran assumed shape array and size function support in ``numpy.f2py`` ----------------------------------------------------------------------- F2py now supports wrapping Fortran 90 routines that use assumed shape arrays. Before such routines could be called from Python but the corresponding Fortran routines received assumed shape arrays as zero length arrays which caused unpredicted results. Thanks to Lorenz H?depohl for pointing out the correct way to interface routines with assumed shape arrays. In addition, f2py interprets Fortran expression ``size(array, dim)`` as ``shape(array, dim-1)`` which makes it possible to automatically wrap Fortran routines that use two argument ``size`` function in dimension specifications. Before users were forced to apply this mapping manually. Other new functions ------------------- ``numpy.ravel_multi_index`` : Converts a multi-index tuple into an array of flat indices, applying boundary modes to the indices. ``numpy.einsum`` : Evaluate the Einstein summation convention. Using the Einstein summation convention, many common multi-dimensional array operations can be represented in a simple fashion. This function provides a way compute such summations. ``numpy.count_nonzero`` : Counts the number of non-zero elements in an array. ``numpy.result_type`` and ``numpy.min_scalar_type`` : These functions expose the underlying type promotion used by the ufuncs and other operations to determine the types of outputs. These improve upon the ``numpy.common_type`` and ``numpy.mintypecode`` which provide similar functionality but do not match the ufunc implementation. Changes ======= Changes and improvements in the numpy core ------------------------------------------ ``numpy.distutils`` ------------------- Several new compilers are supported for building Numpy: the Portland Group Fortran compiler on OS X, the PathScale compiler suite and the 64-bit Intel C compiler on Linux. ``numpy.testing`` ----------------- The testing framework gained ``numpy.testing.assert_allclose``, which provides a more convenient way to compare floating point arrays than `assert_almost_equal`, `assert_approx_equal` and `assert_array_almost_equal`. ``C API`` --------- In addition to the APIs for the new iterator and half data type, a number of other additions have been made to the C API. The type promotion mechanism used by ufuncs is exposed via ``PyArray_PromoteTypes``, ``PyArray_ResultType``, and ``PyArray_MinScalarType``. 
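From Python, the same promotion rules are reachable through the new ``numpy.result_type`` and ``numpy.min_scalar_type`` functions mentioned above, for example (expected results shown as comments)::

    import numpy as np

    print(np.min_scalar_type(255))                # uint8
    print(np.min_scalar_type(-1))                 # int8
    print(np.result_type(np.uint8, np.int8))      # int16
    print(np.result_type(np.int32, np.float32))   # float64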
A new enumeration ``NPY_CASTING`` has been added which controls what types of casts are permitted. This is used by the new functions ``PyArray_CanCastArrayTo`` and ``PyArray_CanCastTypeTo``. A more flexible way to handle conversion of arbitrary python objects into arrays is exposed by ``PyArray_GetArrayParamsFromObject``. Removed features ================ ``numpy.fft`` ------------- The functions `refft`, `refft2`, `refftn`, `irefft`, `irefft2`, `irefftn`, which were aliases for the same functions without the 'e' in the name, were removed. ``numpy.memmap`` ---------------- The `sync()` and `close()` methods of memmap were removed. Use `flush()` and "del memmap" instead. ``numpy.lib`` ------------- The deprecated functions ``numpy.unique1d``, ``numpy.setmember1d``, ``numpy.intersect1d_nu`` and ``numpy.lib.ufunclike.log2`` were removed. ``numpy.ma`` ------------ Several deprecated items were removed from the ``numpy.ma`` module:: * ``numpy.ma.MaskedArray`` "raw_data" method * ``numpy.ma.MaskedArray`` constructor "flag" keyword * ``numpy.ma.make_mask`` "flag" keyword * ``numpy.ma.allclose`` "fill_value" keyword ``numpy.distutils`` ------------------- The ``numpy.get_numpy_include`` function was removed, use ``numpy.get_include`` instead. Checksums ========= 89f52ae0f0ea84cfcb457298190bee14 release/installers/numpy-1.6.0b1-py2.7-python.org.dmg 8dee06b362540b2c8c033399951e5386 release/installers/numpy-1.6.0b1-win32-superpack-python2.5.exe c1b11bf48037ac8fe025bfd297969840 release/installers/numpy-1.6.0b1-win32-superpack-python2.6.exe 2b005cb359d8123bd5ee3063d9eae3ac release/installers/numpy-1.6.0b1-win32-superpack-python2.7.exe 45627e8f63fe34011817df66722c39a5 release/installers/numpy-1.6.0b1-win32-superpack-python3.1.exe 1d8b214752b19b51ee747a6436ca1d38 release/installers/numpy-1.6.0b1-win32-superpack-python3.2.exe aeab5881974aac595b87a848c0c6344a release/installers/numpy-1.6.0b1.tar.gz 3ffc6e308f9e0614531fa3babcb75544 release/installers/numpy-1.6.0b1.zip From charlesr.harris at gmail.com Wed Mar 23 12:39:27 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Wed, 23 Mar 2011 10:39:27 -0600 Subject: [SciPy-User] [scipy] GSoC 2011 In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 9:29 AM, Hector wrote: > > > On Wed, Mar 23, 2011 at 6:59 PM, Ralf Gommers > wrote: > >> On Mon, Mar 21, 2011 at 10:41 AM, Hector wrote: >> > Hello everyone, >> > >> > List of organizations participating in GSoC 2011 has been out for 3 days >> and >> > as everyone expected, PSF is one of it. >> > I was waiting for SciPy to pop up under PSF umbrella, it nothing has >> been >> > updated there since then. >> > Is SciPy planning to participate in GSoC this year? >> >> I certainly hope so. Jarrod knows what is (or isn't) going on, hope he >> lets us know soon. >> >> > I was really waiting for the reply of this mail or some update on python > site for GSoC 2011. > > >> Did you have something in mind to work on? If so, it can't hurt to >> share your idea already. >> >> > Being a mathematics students, I did courses on some of the advanced topics > and wrote code for the well known algorithms in that field. The topics ( in > the descending order of number of codes written ) are - > 1) Numerical Analysis > 2) Operational Research > 3) Abstract Algebra > 4) Graph theory ( Cliques) > > Unfortunately I was not aware of Python and FOSS at that time and wrote all > of the programs in MatLab (except Cliques). I want to contribute these code > and enhance the tools SciPy contains. 
And GSoC 2011 will give me a > structured platform to do that. I would be very happy to work on these if > someone is will to mentor me. > > My works can be seen at - > https://github.com/hector1618 > > Numbers 3) and 4) might fit better with SAGE. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From hector1618 at gmail.com Wed Mar 23 13:24:21 2011 From: hector1618 at gmail.com (Hector) Date: Wed, 23 Mar 2011 22:54:21 +0530 Subject: [SciPy-User] [scipy] GSoC 2011 In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 10:09 PM, Charles R Harris < charlesr.harris at gmail.com> wrote: > > > On Wed, Mar 23, 2011 at 9:29 AM, Hector wrote: > >> >> >> On Wed, Mar 23, 2011 at 6:59 PM, Ralf Gommers < >> ralf.gommers at googlemail.com> wrote: >> >>> On Mon, Mar 21, 2011 at 10:41 AM, Hector wrote: >>> > Hello everyone, >>> > >>> > List of organizations participating in GSoC 2011 has been out for 3 >>> days and >>> > as everyone expected, PSF is one of it. >>> > I was waiting for SciPy to pop up under PSF umbrella, it nothing has >>> been >>> > updated there since then. >>> > Is SciPy planning to participate in GSoC this year? >>> >>> I certainly hope so. Jarrod knows what is (or isn't) going on, hope he >>> lets us know soon. >>> >>> >> I was really waiting for the reply of this mail or some update on python >> site for GSoC 2011. >> >> >>> Did you have something in mind to work on? If so, it can't hurt to >>> share your idea already. >>> >>> >> Being a mathematics students, I did courses on some of the advanced topics >> and wrote code for the well known algorithms in that field. The topics ( in >> the descending order of number of codes written ) are - >> 1) Numerical Analysis >> 2) Operational Research >> 3) Abstract Algebra >> 4) Graph theory ( Cliques) >> >> Unfortunately I was not aware of Python and FOSS at that time and wrote >> all of the programs in MatLab (except Cliques). I want to contribute these >> code and enhance the tools SciPy contains. And GSoC 2011 will give me a >> structured platform to do that. I would be very happy to work on these if >> someone is will to mentor me. >> >> My works can be seen at - >> https://github.com/hector1618 >> >> > > Numbers 3) and 4) might fit better with SAGE. > > Chuck > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > Unfortunately, sage was not selected as one of the participating organization in GSoC. I will try to add those as soon as I get some free time. Actually I tried for Cliques but it is already extracted from some other software. -- -Regards Hector Whenever you think you can or you can't, in either way you are right. -------------- next part -------------- An HTML attachment was scrubbed... URL: From krystian.rosinski at gmail.com Fri Mar 4 03:23:14 2011 From: krystian.rosinski at gmail.com (=?ISO-8859-2?Q?Krystian_Rosi=F1ski?=) Date: Fri, 04 Mar 2011 08:23:14 -0000 Subject: [SciPy-User] OT warning! Re: ANN: Spyder v2.0.8 In-Reply-To: <4D703809.403@gmail.com> References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> <4D6FFF66.4010706@gmail.com> <4D701899.5090409@noaa.gov> <4D703809.403@gmail.com> Message-ID: Guido van Rossum wrote in 2006: "(...) Fortunately it's easy to separate the two. If it uses two-space indents, it's corporate code; if it uses four-space indents, it's open source. (If it uses tabs, I didn't write it! 
:-)" http://www.artima.com/weblogs/viewpost.jsp?thread=143947 On 4 Mar, 01:53, Stef Mientki wrote: > On 03-03-2011 23:39, Christopher Barker wrote:> On 3/3/11 12:51 PM, Alan G Isaac wrote: > >> On 3/3/2011 2:31 PM, Christopher Barker wrote: > >>> four spaces is a well established standard > >> ... for the standard library. ?Individual projects > >> set their own standards. > > OK -- PEP 8 is only _official_ for the standard library, but if you > > define "standard" as "the way most people do it", then four spaces is it. > > >> ?(Unfortunately, PEP 8 came > >> down on the wrong side of tabs vs. spaces.) > > clearly debatable, but my point is that it is a good idea for all > > projects to use the same conventions, and the ONLY one that makes any > > sense at this point in that context is four spaces. > > Using a standard might be a good idea, > but the standard is depending on the environment. > Is Python the environment or the set of actually used tools. > For me and the people around me, > the programs we make, are the environment. > We use PHP, Delphi, C, JAL, JS, Matlab, Labview, .... > and in all these languages me and my environment uses 2 spaces. > So the standard for Python is also 2 spaces. > > Secondly, the libraries and programs that we put in the open source community, > by who will they be (probably) changed and maintained? > So it seems to me perfectly legal to ?use 2 spaces as th? standard. > > cheers, > Stef Mientki > > > Pythons "there should be only one obvious way to do it" philosophy > > applies here. > > > -Chris > > _______________________________________________ > SciPy-User mailing list > SciPy-U... at scipy.orghttp://mail.scipy.org/mailman/listinfo/scipy-user From Chris.Barker at noaa.gov Mon Mar 7 17:18:11 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Mon, 07 Mar 2011 22:18:11 -0000 Subject: [SciPy-User] Scientific Software developer wanted in Seattle. Message-ID: <4D7559A1.5070704@noaa.gov> Scientific Software Developer NOAA Emergency Response Division Help us develop our next-generation oil spill transport model. Background: The Emergency Response Division (ERD) of NOAA's Office of Response and Restoration (OR&R) provides scientific expertise to support the response to oil and chemical spills in the coastal environment. We played a major role in the recent Deepwater Horizon oil spill in the Gulf of Mexico. In order to fulfill our mission, we develop many of the software tools and models required to support a response to hazardous material spills. In the wake of the Deepwater horizon incident, we are embarking on a program to develop our next-generation oil spill transport model, taking into account lessons learned from years of response and this major incident. General Characteristics: The incumbent of this position will provide software development services to support the mission of the Emergency Response Division of NOAA's Office of Response and Restoration. As part of his/her efforts, independent evaluation and application of development techniques, algorithms, software architecture, and programming patterns will be required. The incumbent will work with the staff of ERD to provide analysis on user needs and software, GUI, and library design. He/she will be expect to work primarily on site at NOAA's facility in Seattle. Knowledge: The incumbent must be able to apply modern concepts of software engineering and design to the development of computational code, desktop applications, web applications, and libraries. 
The incumbent will need to be able to design, write, refactor, and implement code for a complex desktop and/or web application and computational library. The incumbent will work with a multi-disciplinary team including scientists, users, and other developers, utilizing software development practices such as usability design, version control, bug and issue tracking, and unit testing. Good communication skills and the knowledge of working as part of a team are required.

Direction received: The incumbent will participate on various research and development teams. While endpoints will be identified through Division management and some direct supervision will be provided, the incumbent will be responsible for progressively being able to take input from team meetings and design objectives and propose strategies for reaching endpoints.

Typical duties and responsibilities: The incumbent will work with the oil and chemical spill modeling team to improve and develop new tools and models used in fate and transport forecasting. Different components of the project will be written in C++, Python, and Javascript.

Education requirement, minimum: Bachelor's degree in a technical discipline.

Experience requirement, minimum: One to five years experience in development of complex software systems in one or more full-featured programming languages (C, C++, Java, Python, Ruby, Fortran, etc.)

The team requires experience in the following languages/disciplines. Each incumbent will need experience in some subset:
* Computational/Scientific programming
* Numerical Analysis/Methods
* Parallel processing
* Desktop GUI
* Web services
* Web clients: HTML/CSS/Javascript
* Python
* wxPython
* OpenGL
* C/C++
* Python--C/C++ integration
* Software development team leadership

While the incumbent will work on-site at NOAA, directly with the NOAA team, this is a contract position with General Dynamics Information Technology: http://www.gdit.com/default.aspx

For more information and to apply, use the GDIT web site:
https://secure.resumeware.net/gdns_rw/gdns_web/job_detail.cfm?key=59436&show_cart=0&referredId=20
if that long url doesn't work, try:
http://www.resumeware.net/gdns_rw/gdns_web/job_search.cfm
and search for job ID: 179178

NOTE: This is a position being hired by GDIT to work with NOAA, so any questions about salary, benefits, etc, etc should go to GDIT. However, feel free to send me questions about our organization, working conditions, more detail about the nature of the projects etc.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From skapunxter at yahoo.com  Mon Mar  7 21:45:48 2011
From: skapunxter at yahoo.com (Randy Williams)
Date: Tue, 08 Mar 2011 02:45:48 -0000
Subject: [SciPy-User] Possible to integrate an ODE until the solution reaches a certain value?
Message-ID: <47115.53514.qm@web39420.mail.mud.yahoo.com>

Greetings,

I'm trying to model the dynamics of a catapult-like mechanism used to launch a projectile, and have a system of ODEs which I need to numerically integrate over time. I am trying to solve for the position of the projectile as well as the other components in my mechanism. At some point in time, the projectile separates from the mechanism, and becomes airborne.
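scipy.integrate does not (as far as I know) offer a direct equivalent of Matlab's "events", but one workable pattern for exactly this kind of separation condition is to step the integrator manually and stop as soon as a user-supplied event function changes sign. Below is a minimal, untested sketch using scipy.integrate.ode; the toy dynamics, the separation threshold and the step size are made-up placeholders for illustration, not the actual catapult model:

from scipy.integrate import ode

def rhs(t, y):
    # y = [position, velocity]; stand-in dynamics, not the real mechanism
    return [y[1], -9.81]

def event(t, y):
    # changes sign when the position passes an (arbitrary) separation point
    return y[0] - 2.0

solver = ode(rhs).set_integrator('vode', method='adams')
solver.set_initial_value([0.0, 10.0], 0.0)

dt = 1e-3
prev = event(solver.t, solver.y)
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + dt)
    curr = event(solver.t, solver.y)
    if prev * curr <= 0.0:
        # the event happened within this step; these states become the
        # initial conditions for the post-separation ODE
        print("separation near t = %g, state = %s" % (solver.t, solver.y))
        break
    prev = curr

The step size sets the resolution of the detected event time; if 1 ms is too coarse, the crossing can be refined afterwards by bisection within the last step.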
The equations governing the system change at that point in time, but because it's a function of position (which I'm solving for), I don't know up front what timespan I need to integrate over. I would like the ODE solver to stop integrating once the solution reaches this certain value, and I will use the states at that point to compute the initial conditions to another ODE describing the motion from that time onward.

Is there an ODE solver in Python/SciPy which will integrate from the initial t until the solution reaches a certain value, or until a specific condition is met? The ODE solvers in Matlab have "events" which will do this, but I'm trying my best to stick with Python.

Thanks,
Randy

From krystian.rosinski at gmail.com  Tue Mar  8 12:06:01 2011
From: krystian.rosinski at gmail.com (=?ISO-8859-2?Q?Krystian_Rosi=F1ski?=)
Date: Tue, 08 Mar 2011 17:06:01 -0000
Subject: [SciPy-User] Problem with IndexError
Message-ID: 

Hi,

I've been using SciPy for some time, but I still have problems with indexing and the Python convention of numbering from zero in more complex cases. I'm trying to translate the excellent example "Matrix Structural Analysis of Plane Frames using Scilab" to Python. I've spent a few hours on this code and I still can't find the cause of the IndexError. Here is my code. I would be very grateful if someone could look at it and give me any advice.

Krystian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MSA.py
Type: application/octet-stream
Size: 11207 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MSA.sce
Type: application/octet-stream
Size: 369 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MSA.sci
Type: application/octet-stream
Size: 5107 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: MSA_using_Scilab.pdf
Type: application/pdf
Size: 303718 bytes
Desc: not available
URL: 

From gputzar at beckarndt.com.au  Wed Mar  9 23:55:47 2011
From: gputzar at beckarndt.com.au (Gero Putzar)
Date: Thu, 10 Mar 2011 12:55:47 +0800
Subject: [SciPy-User] Mistake in the scipy-wiki/cookbook/KDTree
Message-ID: <002201cbdedf$6d9a8eb0$48cfac10$@com.au>

Preferably to Mike Toews or whoever supplied this wiki page.

Hi,

first: Thank you very much for putting this explanatory code into the wiki.
I think there is a minor mistake in the code displayed on http://www.scipy.org/Cookbook/KDTree in the subsection "Searching a kd-tree", function radius_search(): Within this function the radius is supplied as second argument to the function intersect(). But intersect() expects the square of the radius as argument. I suggest the following modified version. Note that this modified version now returns the square distances very much like the corresponding knn_search function above. --- snip --- def radius_search(tree, datapoint, radius): """ find all points within radius of datapoint """ stack = [tree[0]] inside = [] radsq = radius**2 while stack: leaf_idx, leaf_data, left_hrect, \ right_hrect, left, right = stack.pop() # leaf if leaf_idx is not None: param=leaf_data.shape[0] distsq = ((leaf_data - datapoint.reshape((param,1)))**2).sum(axis=0) near = numpy.where(distsq<=radsq) if len(near[0]): idx = leaf_idx[near] distsq = distsq[near] inside += (zip(distsq, idx)) else: if intersect(left_hrect, radsq, datapoint): stack.append(tree[left]) if intersect(right_hrect, radsq, datapoint): stack.append(tree[right]) return inside --- snap --- I did not test this modification nor the original code. And I might be wrong all together, of course. Please cc directly to me if you want an answer because I'm not subscribed to the list. Thanks, kind regards, Gero. From wkerzend at mso.anu.edu.au Fri Mar 18 01:00:18 2011 From: wkerzend at mso.anu.edu.au (Wolfgang Kerzendorf) Date: Fri, 18 Mar 2011 16:00:18 +1100 Subject: [SciPy-User] segmentation fault when linear interpolating Message-ID: <4D82E6E2.2080104@mso.anu.edu.au> Dear all, I really like quickhull and use it to interpolate on grids that are extremely big (2-5 gig). Normally it works really well, but I encountered now a problem with it when interpolating in one of my grids. It gave me a segmentation fault and this traceback: Process: Python [2780] Path: /System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python Identifier: org.python.python.app Version: 2.6 (2.6) Build Info: python-440200~2 Code Type: X86-64 (Native) Parent Process: bash [2319] PlugIn Path: /Library/Python/2.6/site-packages/scipy/interpolate/interpnd.so PlugIn Identifier: interpnd.so PlugIn Version: ??? (???) 
Date/Time: 2011-03-18 15:56:04.405 +1100 OS Version: Mac OS X 10.6.6 (10J567) Report Version: 6 Interval Since Last Report: 4246070 sec Crashes Since Last Report: 75 Per-App Interval Since Last Report: 4572421 sec Per-App Crashes Since Last Report: 4 Anonymous UUID: 9BB099CA-10F8-4163-99F6-8109A901DB1A Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x000000023bf42000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 interpnd.so 0x0000000104bd3ebf __pyx_pf_8interpnd_20LinearNDInterpolator__evaluate_double + 8874 (interpnd.c:2935) 1 org.python.python 0x000000010000aff3 PyObject_Call + 112 2 org.python.python 0x000000010001a9df PyClass_New + 1575 3 org.python.python 0x000000010000aff3 PyObject_Call + 112 4 interpnd.so 0x0000000104bd004b __pyx_pf_8interpnd_18NDInterpolatorBase___call__ + 3448 (interpnd.c:2033) 5 org.python.python 0x000000010000aff3 PyObject_Call + 112 6 org.python.python 0x000000010001a9df PyClass_New + 1575 7 org.python.python 0x000000010000aff3 PyObject_Call + 112 8 org.python.python 0x0000000100051f0e _PyType_Lookup + 6095 9 org.python.python 0x000000010000aff3 PyObject_Call + 112 10 org.python.python 0x000000010008a51a PyEval_EvalFrameEx + 20328 11 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 12 org.python.python 0x000000010002c8e1 PyClassMethod_New + 1748 13 org.python.python 0x000000010000aff3 PyObject_Call + 112 14 org.python.python 0x00000001000849db PyEval_CallObjectWithKeywords + 175 15 minuit.so 0x000000010519cb75 MyFCN::operator()(std::vector > const&) const + 205 (minuit.cpp:1426) 16 minuit.so 0x000000010519d2d2 minuit_Minuit_scan(minuit_Minuit*, _object*, _object*) + 1264 (minuit.cpp:1229) 17 minuit.so 0x000000010519d43c minuit_Minuit_scan(minuit_Minuit*, _object*, _object*) + 1626 (minuit.cpp:1272) 18 minuit.so 0x000000010519d43c minuit_Minuit_scan(minuit_Minuit*, _object*, _object*) + 1626 (minuit.cpp:1272) 19 minuit.so 0x000000010519d43c minuit_Minuit_scan(minuit_Minuit*, _object*, _object*) + 1626 (minuit.cpp:1272) 20 org.python.python 0x0000000100089187 PyEval_EvalFrameEx + 15317 21 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 22 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 23 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 24 org.python.python 0x000000010008735d PyEval_EvalFrameEx + 7595 25 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 26 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 27 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 28 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 29 org.python.python 0x00000001000892e1 PyEval_EvalFrameEx + 15663 30 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 31 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 32 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 33 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 34 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 35 org.python.python 0x000000010008935e PyEval_EvalFrameEx + 15788 36 org.python.python 0x000000010008acce PyEval_EvalCodeEx + 1803 37 org.python.python 0x000000010008ad61 PyEval_EvalCode + 54 38 org.python.python 0x00000001000a265a Py_CompileString + 78 39 org.python.python 0x00000001000a2723 PyRun_FileExFlags + 150 40 org.python.python 0x00000001000a423d PyRun_SimpleFileExFlags + 704 41 org.python.python 0x00000001000b0286 Py_Main + 2718 42 org.python.python.app 
0x0000000100000e6c start + 52 Thread 1: com.apple.CFSocket.private 0 libSystem.B.dylib 0x00007fff873d3e92 select$DARWIN_EXTSN + 10 1 com.apple.CoreFoundation 0x00007fff87675498 __CFSocketManager + 824 2 libSystem.B.dylib 0x00007fff873c9536 _pthread_start + 331 3 libSystem.B.dylib 0x00007fff873c93e9 thread_start + 13 Thread 2: Dispatch queue: com.apple.libdispatch-manager 0 libSystem.B.dylib 0x00007fff873a916a kevent + 10 1 libSystem.B.dylib 0x00007fff873ab03d _dispatch_mgr_invoke + 154 2 libSystem.B.dylib 0x00007fff873aad14 _dispatch_queue_invoke + 185 3 libSystem.B.dylib 0x00007fff873aa83e _dispatch_worker_thread2 + 252 4 libSystem.B.dylib 0x00007fff873aa168 _pthread_wqthread + 353 5 libSystem.B.dylib 0x00007fff873aa005 start_wqthread + 13 Thread 0 crashed with X86 Thread State (64-bit): rax: 0x0000000149adde00 rbx: 0x0000000000000000 rcx: 0x0000000000000000 rdx: 0x00000002037a1e00 rdi: 0x00000000000001c0 rsi: 0x00000000387a0200 rbp: 0x00007fff5fbfc750 rsp: 0x00007fff5fbfc270 r8: 0x00000000000001c0 r9: 0x0000000000000004 r10: 0x0000000000040c45 r11: 0x0000000000000004 r12: 0x0000000000000009 r13: 0x0000000000001f40 r14: 0x0000000000084a08 r15: 0x0000000000001f40 rip: 0x0000000104bd3ebf rfl: 0x0000000000010206 cr2: 0x000000023bf42000 Binary Images: 0x100000000 - 0x100000ff7 org.python.python.app 2.6 (2.6) <3F4A329D-3EF9-A736-6F3D-67A4B39AE8DE> /System/Library/Frameworks/Python.framework/Versions/2.6/Resources/Python.app/Contents/MacOS/Python 0x100004000 - 0x100114ff7 org.python.python 2.6.1 (2.6.1) <126DA8FF-5BC2-8788-51E3-D7A29A3F9F0F> /System/Library/Frameworks/Python.framework/Versions/2.6/Python 0x1001d8000 - 0x1001d9fff cStringIO.so ??? (???) <0C7D1D15-4241-99A9-7671-A3CF105C2EBB> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/cStringIO.so 0x1001de000 - 0x1001e1ff7 strop.so ??? (???) <4A91CDB0-6E91-DA0E-8E6B-38BE29105EA0> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/strop.so 0x1001e6000 - 0x1001e9fff operator.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/operator.so 0x1001ef000 - 0x1001f2fff _collections.so ??? (???) <57523A72-6EAC-69D0-DC9B-6B698973156C> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_collections.so 0x1001f8000 - 0x1001f9ff7 time.so ??? (???) <80513398-F49E-79D1-5014-514361869D40> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/time.so 0x100770000 - 0x100775fff itertools.so ??? (???) <9287854F-7F2B-D4AF-FCA3-EB69DA821DA9> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/itertools.so 0x10077d000 - 0x10077dfff _bisect.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_bisect.so 0x100781000 - 0x100782ff7 _heapq.so ??? (???) <9D1346ED-4A36-F835-2C9C-7303DEF1CFAA> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_heapq.so 0x100787000 - 0x100788ff7 _functools.so ??? (???) <97A294C7-290C-9746-B294-275516490F40> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_functools.so 0x10078c000 - 0x10078eff7 math.so ??? (???) <54D066FD-A9F9-EA69-8C3C-CE83DF68C58A> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/math.so 0x100794000 - 0x100796fe7 binascii.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/binascii.so 0x10079a000 - 0x10079bfff _random.so ??? (???) 
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_random.so 0x10079f000 - 0x1007a0fff fcntl.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/fcntl.so 0x1007e4000 - 0x1007e6ff7 select.so ??? (???) <959BC45E-FCC7-107D-0D20-DF9A0BA0E86A> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/select.so 0x1007ec000 - 0x1007f0ff7 _struct.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_struct.so 0x1007f6000 - 0x1007f7fff termios.so ??? (???) <39AEBC3D-7BAC-6C0D-1697-8F6A0E32E07E> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/termios.so 0x101100000 - 0x10110bff7 _curses.so ??? (???) <7C8CF61B-E434-38EE-ADCF-27ECC0D79A66> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_curses.so 0x101112000 - 0x101113ff7 _hashlib.so ??? (???) <7F6E55A6-9FFF-0C47-2D65-76900C63C294> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_hashlib.so 0x101117000 - 0x101149ff7 +readline.so ??? (???) /Library/Python/2.6/site-packages/readline-2.6.4-py2.6-macosx-10.6-universal.egg/readline.so 0x101163000 - 0x101164ff7 resource.so ??? (???) <2797FA82-B572-1D65-059C-A2907E2F331A> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/resource.so 0x101168000 - 0x10117aff7 _ctypes.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_ctypes.so 0x1011c5000 - 0x1011c6fff _locale.so ??? (???) <43F84ED0-3A85-1997-8098-2D3870B4012D> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_locale.so 0x1011ca000 - 0x1011d6fff cPickle.so ??? (???) <7C50CFD6-883C-45B8-FAB8-D02C33E85437> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/cPickle.so 0x1011dc000 - 0x1011ddff7 _lsprof.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_lsprof.so 0x1011e1000 - 0x1011e8fff _socket.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_socket.so 0x1011f0000 - 0x1011f3fff _ssl.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_ssl.so 0x1011f9000 - 0x1011f9fff _weakref.so ??? (???) <2A4A85EC-30E5-11FB-40B5-544EEF7CD1C0> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_weakref.so 0x101700000 - 0x101766fff +multiarray.so ??? (???) <1B840A94-2D2A-8E86-D1E2-3C21C69CAE5A> /Library/Python/2.6/site-packages/numpy/core/multiarray.so 0x1017ab000 - 0x1017e8fff +umath.so ??? (???) /Library/Python/2.6/site-packages/numpy/core/umath.so 0x10180e000 - 0x10181dfff +_sort.so ??? (???) <09F6F29A-9407-DBB6-EC49-55FC8FFE57DB> /Library/Python/2.6/site-packages/numpy/core/_sort.so 0x101824000 - 0x101827fff +_dotblas.so ??? (???) <0877AEF8-A317-2BE8-FBE9-4B11CD7FF5D9> /Library/Python/2.6/site-packages/numpy/core/_dotblas.so 0x10182b000 - 0x101849ff7 +scalarmath.so ??? (???) <93E7F819-C711-858F-9F75-FF9040CCDF5C> /Library/Python/2.6/site-packages/numpy/core/scalarmath.so 0x10185a000 - 0x10185cff7 +_compiled_base.so ??? (???) <8B6DCB06-44AB-D0F9-3D2E-26B4200EEED7> /Library/Python/2.6/site-packages/numpy/lib/_compiled_base.so 0x101860000 - 0x101863fff +lapack_lite.so ??? (???) /Library/Python/2.6/site-packages/numpy/linalg/lapack_lite.so 0x101867000 - 0x10186efff +fftpack_lite.so ??? (???) 
<68B07373-5628-68B1-FEDD-E239FF1AAC0F> /Library/Python/2.6/site-packages/numpy/fft/fftpack_lite.so 0x101872000 - 0x1018aefff +mtrand.so ??? (???) <3BEED4ED-8AAD-D267-8F1A-97699430721E> /Library/Python/2.6/site-packages/numpy/random/mtrand.so 0x1018dd000 - 0x1018e8fe7 datetime.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/datetime.so 0x1019f1000 - 0x1019f2fff +nxutils.so ??? (???) <1F14BA4E-0878-5A91-000E-5FF4DECE4C13> /Library/Python/2.6/site-packages/matplotlib/nxutils.so 0x1019f5000 - 0x1019f8fff _csv.so ??? (???) /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_csv.so 0x101b00000 - 0x101b28ff7 +_path.so ??? (???) <994A76E0-4F21-2129-0008-5CC964C96205> /Library/Python/2.6/site-packages/matplotlib/_path.so 0x101b56000 - 0x101b84ff7 +ft2font.so ??? (???) <51B5568D-B510-365B-B7C8-77CC73148743> /Library/Python/2.6/site-packages/matplotlib/ft2font.so 0x101bb4000 - 0x101c2cfff +libfreetype.6.dylib 11.0.0 (compatibility 11.0.0) /usr/local/lib/libfreetype.6.dylib 0x1020dd000 - 0x1020e0ff7 +_cntr.so ??? (???) <9038D7B0-F060-A393-4901-5D2AAE5549E6> /Library/Python/2.6/site-packages/matplotlib/_cntr.so 0x1020e4000 - 0x1020efff7 +_delaunay.so ??? (???) <1BF09B64-9F40-9515-D61A-F0F851ED34DF> /Library/Python/2.6/site-packages/matplotlib/_delaunay.so 0x102280000 - 0x1022fdfef unicodedata.so ??? (???) <27EF63BF-90E9-3D71-FE24-1EA634E99498> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/unicodedata.so 0x102312000 - 0x102329ff7 +_png.so ??? (???) /Library/Python/2.6/site-packages/matplotlib/_png.so 0x10234a000 - 0x10236bfe7 +libpng14.14.dylib 18.0.0 (compatibility 18.0.0) /usr/local/lib/libpng14.14.dylib 0x10267b000 - 0x1026a7fff +_image.so ??? (???) <33B9CEAA-FCC1-8281-9B21-EED6AAFBECD2> /Library/Python/2.6/site-packages/matplotlib/_image.so 0x1026da000 - 0x1026fdff7 +_tri.so ??? (???) /Library/Python/2.6/site-packages/matplotlib/_tri.so 0x10272f000 - 0x102730ff7 MacOS.so ??? (???) <30CB87DA-44C8-FBD0-5609-8915E6D06F39> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/MacOS.so 0x102735000 - 0x102758ff7 +_macosx.so ??? (???) <0D4C0C64-5AD4-29DE-35DC-88929C73035C> /Library/Python/2.6/site-packages/matplotlib/backends/_macosx.so 0x102794000 - 0x102797ff7 zlib.so ??? (???) <647721E3-67B5-8CD0-3A78-060FF9C80924> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/zlib.so 0x1027f3000 - 0x1027f6ff7 +lambertw.so ??? (???) /Library/Python/2.6/site-packages/scipy/special/lambertw.so 0x102c00000 - 0x102c23fff +pyfitsComp.so ??? (???) <847401EA-83A8-2B66-DACF-63795182F68B> /Library/Python/2.6/site-packages/pyfits/pyfitsComp.so 0x102cc5000 - 0x102cd1fb1 +libgcc_s.1.dylib ??? (???) /usr/local/lib/libgcc_s.1.dylib 0x102cda000 - 0x102ce7ff7 +orthogonal_eval.so ??? (???) /Library/Python/2.6/site-packages/scipy/special/orthogonal_eval.so 0x102cf0000 - 0x102cf9ff7 +_flinalg.so ??? (???) <89CD5E98-4FE4-8824-8F5C-0632B017D8DA> /Library/Python/2.6/site-packages/scipy/linalg/_flinalg.so 0x102e83000 - 0x102f85ff7 +_cephes.so ??? (???) <3CE109B4-C499-C6CE-76ED-735FB43DA31F> /Library/Python/2.6/site-packages/scipy/special/_cephes.so 0x102fa0000 - 0x102fcffff +_fitpack.so ??? (???) /Library/Python/2.6/site-packages/scipy/interpolate/_fitpack.so 0x102fd3000 - 0x102fd6fff +clapack.so ??? (???) <2D7DADDD-013F-22E6-B9A3-32F98EDC7AF7> /Library/Python/2.6/site-packages/scipy/linalg/clapack.so 0x102fda000 - 0x102fe2ff7 +calc_lwork.so ??? (???) 
<21143851-DD93-037F-77DC-E148DADD5939> /Library/Python/2.6/site-packages/scipy/linalg/calc_lwork.so 0x102fe8000 - 0x102febfff +cblas.so ??? (???) /Library/Python/2.6/site-packages/scipy/linalg/cblas.so 0x102fef000 - 0x102ff3ff7 +_csgraph.so ??? (???) <1A281B40-926C-F19C-F78A-F06138731677> /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_csgraph.so 0x104900000 - 0x1049b6fcf +libgfortran.2.dylib 3.0.0 (compatibility 3.0.0) /usr/local/lib/libgfortran.2.dylib 0x1049fb000 - 0x104aaeff7 +specfun.so ??? (???) <69246894-69FF-C143-B490-8DCD8646F130> /Library/Python/2.6/site-packages/scipy/special/specfun.so 0x104abf000 - 0x104af7fff +dfitpack.so ??? (???) /Library/Python/2.6/site-packages/scipy/interpolate/dfitpack.so 0x104b01000 - 0x104b4fff7 +flapack.so ??? (???) /Library/Python/2.6/site-packages/scipy/linalg/flapack.so 0x104b82000 - 0x104ba8fff +fblas.so ??? (???) <46527B9D-7E69-09FD-AA91-A34E81A913CF> /Library/Python/2.6/site-packages/scipy/linalg/fblas.so 0x104bc1000 - 0x104be3ff7 +interpnd.so ??? (???) <3D36A1DD-B922-A4E5-6301-DB3348445894> /Library/Python/2.6/site-packages/scipy/interpolate/interpnd.so 0x104bf1000 - 0x104bf6fff +_distance_wrap.so ??? (???) /Library/Python/2.6/site-packages/scipy/spatial/_distance_wrap.so 0x104d00000 - 0x104ddcfff +_csr.so ??? (???) /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_csr.so 0x104e38000 - 0x104ebbfff +_csc.so ??? (???) <8663838B-3993-4C70-A8BB-274592BAB68E> /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_csc.so 0x104ee1000 - 0x104f08fff +_coo.so ??? (???) <0A141588-2858-F1B4-020F-49CAE887F574> /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_coo.so 0x104f13000 - 0x104f21ff7 +_dia.so ??? (???) <6FB69601-4855-23F8-5747-A4A79F7DC9E3> /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_dia.so 0x104f29000 - 0x105019fff +_bsr.so ??? (???) /Library/Python/2.6/site-packages/scipy/sparse/sparsetools/_bsr.so 0x10508c000 - 0x10509efff +ckdtree.so ??? (???) /Library/Python/2.6/site-packages/scipy/spatial/ckdtree.so 0x1050a9000 - 0x10510fff7 +qhull.so ??? (???) /Library/Python/2.6/site-packages/scipy/spatial/qhull.so 0x10516f000 - 0x105178fff _sqlite3.so ??? (???) <9FBFA469-EEA4-67CA-B92D-5C30D91887E9> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/_sqlite3.so 0x105182000 - 0x105195ff7 +_nd_image.so ??? (???) <932E1CD7-E345-F591-5C9E-326A9423469F> /Library/Python/2.6/site-packages/scipy/ndimage/_nd_image.so 0x10519b000 - 0x1051a6fff +minuit.so ??? (???) <299C8E9B-6AB2-0441-529D-BC2177EEAC86> /Library/Python/2.6/site-packages/minuit.so 0x1051b7000 - 0x105204ff7 +liblcg_Minuit.0.dylib ??? (???) <167DC5BD-BFA7-C81B-7322-97C0D527657D> /usr/local/lib/liblcg_Minuit.0.dylib 0x10523e000 - 0x105283ff7 +_backend_agg.so ??? (???) <98355641-D209-D8DE-0659-6C7B54D42FE1> /Library/Python/2.6/site-packages/matplotlib/backends/_backend_agg.so 0x1052d6000 - 0x1052dffff +ttconv.so ??? (???) /Library/Python/2.6/site-packages/matplotlib/ttconv.so 0x1052eb000 - 0x1052eeff7 bz2.so ??? (???) <5A76389C-66CD-85BE-9DF9-58DFCC14C148> /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload/bz2.so 0x7fff5fc00000 - 0x7fff5fc3bdef dyld 132.1 (???) /usr/lib/dyld 0x7fff80003000 - 0x7fff800d5fe7 com.apple.CFNetwork 454.11.5 (454.11.5) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CFNetwork.framework/Versions/A/CFNetwork 0x7fff800d6000 - 0x7fff800dbfff libGFXShared.dylib ??? (???) 
<991F8197-FD06-2AF1-F99B-E448ED4FB2AC> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib 0x7fff8013d000 - 0x7fff8013dff7 com.apple.CoreServices 44 (44) /System/Library/Frameworks/CoreServices.framework/Versions/A/CoreServices 0x7fff8013e000 - 0x7fff801a8fe7 libvMisc.dylib 268.0.1 (compatibility 1.0.0) <514D400C-50A5-C196-83AA-1035DDC8FBBE> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib 0x7fff801ed000 - 0x7fff80226fef libcups.2.dylib 2.8.0 (compatibility 2.0.0) <561D0DCB-47AD-A12C-9066-70E4CBAD331C> /usr/lib/libcups.2.dylib 0x7fff8022f000 - 0x7fff80235ff7 IOSurface ??? (???) <04EDCEDE-E36F-15F8-DC67-E61E149D2C9A> /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface 0x7fff80236000 - 0x7fff802b4fff com.apple.CoreText 3.5.0 (???) <4D5C7932-293B-17FF-7309-B580BB1953EA> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreText.framework/Versions/A/CoreText 0x7fff8042f000 - 0x7fff8046cff7 libFontRegistry.dylib ??? (???) <8C69F685-3507-1B8F-51AD-6183D5E88979> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontRegistry.dylib 0x7fff8055a000 - 0x7fff80610fff libobjc.A.dylib 227.0.0 (compatibility 1.0.0) /usr/lib/libobjc.A.dylib 0x7fff80641000 - 0x7fff8064cff7 com.apple.speech.recognition.framework 3.11.1 (3.11.1) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SpeechRecognition.framework/Versions/A/SpeechRecognition 0x7fff80758000 - 0x7fff80758ff7 com.apple.vecLib 3.6 (vecLib 3.6) <96FB6BAD-5568-C4E0-6FA7-02791A58B584> /System/Library/Frameworks/vecLib.framework/Versions/A/vecLib 0x7fff8077e000 - 0x7fff80babfe7 com.apple.RawCamera.bundle 3.6.0 (558) <9F93BC25-80D8-15B5-5529-E6199E3A5CA1> /System/Library/CoreServices/RawCamera.bundle/Contents/MacOS/RawCamera 0x7fff80bac000 - 0x7fff813b6fe7 libBLAS.dylib 219.0.0 (compatibility 1.0.0) /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib 0x7fff813b7000 - 0x7fff81470fff libsqlite3.dylib 9.6.0 (compatibility 9.0.0) <2C5ED312-E646-9ADE-73A9-6199A2A43150> /usr/lib/libsqlite3.dylib 0x7fff81731000 - 0x7fff8174cff7 com.apple.openscripting 1.3.1 (???) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/OpenScripting.framework/Versions/A/OpenScripting 0x7fff8174d000 - 0x7fff817d9fef SecurityFoundation ??? (???) <6860DE26-0D42-D1E8-CD7C-5B42D78C1E1D> /System/Library/Frameworks/SecurityFoundation.framework/Versions/A/SecurityFoundation 0x7fff817e2000 - 0x7fff81823fff com.apple.SystemConfiguration 1.10.5 (1.10.2) /System/Library/Frameworks/SystemConfiguration.framework/Versions/A/SystemConfiguration 0x7fff81824000 - 0x7fff81873ff7 com.apple.DirectoryService.PasswordServerFramework 6.0 (6.0) <14FD0978-4BE0-336B-A19E-F388694583EB> /System/Library/PrivateFrameworks/PasswordServer.framework/Versions/A/PasswordServer 0x7fff81874000 - 0x7fff8190efff com.apple.ApplicationServices.ATS 4.4 (???) 
<395849EE-244A-7323-6CBA-E71E3B722984> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/ATS 0x7fff826f0000 - 0x7fff8271bff7 libxslt.1.dylib 3.24.0 (compatibility 3.0.0) <87A0B228-B24A-C426-C3FB-B40D7258DD49> /usr/lib/libxslt.1.dylib 0x7fff8271c000 - 0x7fff82757fff com.apple.AE 496.4 (496.4) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/AE.framework/Versions/A/AE 0x7fff8276c000 - 0x7fff8282dfe7 libFontParser.dylib ??? (???) <8B12D37E-3A95-5A73-509C-3AA991E0C546> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libFontParser.dylib 0x7fff8282e000 - 0x7fff8283dfff com.apple.NetFS 3.2.1 (3.2.1) <0357C371-2E2D-069C-08FB-1180512B8516> /System/Library/Frameworks/NetFS.framework/Versions/A/NetFS 0x7fff8283e000 - 0x7fff8283fff7 com.apple.TrustEvaluationAgent 1.1 (1) <74800EE8-C14C-18C9-C208-20BBDB982D40> /System/Library/PrivateFrameworks/TrustEvaluationAgent.framework/Versions/A/TrustEvaluationAgent 0x7fff82840000 - 0x7fff8297efff com.apple.CoreData 102.1 (251) <32233D4D-00B7-CE14-C881-6BF19FD05A03> /System/Library/Frameworks/CoreData.framework/Versions/A/CoreData 0x7fff8297f000 - 0x7fff82984fff libGIF.dylib ??? (???) <9A2723D8-61F9-6D65-D254-4F9273CDA54A> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libGIF.dylib 0x7fff82985000 - 0x7fff82a5fff7 com.apple.vImage 4.0 (4.0) <354F34BF-B221-A3C9-2CA7-9BE5E14AD5AD> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage 0x7fff82a7c000 - 0x7fff82d7afe7 com.apple.HIToolbox 1.6.4 (???) <263AD497-F4CC-9610-E7D3-B95CF6F02030> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox 0x7fff82d7b000 - 0x7fff82e00ff7 com.apple.print.framework.PrintCore 6.3 (312.7) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/PrintCore.framework/Versions/A/PrintCore 0x7fff82e01000 - 0x7fff82f1afef libGLProgrammability.dylib ??? (???) <4F2DC233-7DD2-1204-CAA5-3E6524F0AB75> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLProgrammability.dylib 0x7fff82f1b000 - 0x7fff82f34fff com.apple.CFOpenDirectory 10.6 (10.6) /System/Library/Frameworks/OpenDirectory.framework/Versions/A/Frameworks/CFOpenDirectory.framework/Versions/A/CFOpenDirectory 0x7fff83096000 - 0x7fff8309aff7 libmathCommon.A.dylib 315.0.0 (compatibility 1.0.0) <95718673-FEEE-B6ED-B127-BCDBDB60D4E5> /usr/lib/system/libmathCommon.A.dylib 0x7fff8309b000 - 0x7fff830b1fff com.apple.ImageCapture 6.0.1 (6.0.1) <09ABF2E9-D110-71A9-4A6F-8A61B683E936> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/ImageCapture.framework/Versions/A/ImageCapture 0x7fff83225000 - 0x7fff8323bfe7 com.apple.MultitouchSupport.framework 207.10 (207.10) <1828C264-A54A-7FDD-FE1B-49DDE3F50779> /System/Library/PrivateFrameworks/MultitouchSupport.framework/Versions/A/MultitouchSupport 0x7fff8333f000 - 0x7fff83340ff7 com.apple.audio.units.AudioUnit 1.6.5 (1.6.5) <14F14B5E-9287-BC36-0C3F-6592E6696CD4> /System/Library/Frameworks/AudioUnit.framework/Versions/A/AudioUnit 0x7fff8336b000 - 0x7fff83491fff com.apple.audio.toolbox.AudioToolbox 1.6.5 (1.6.5) /System/Library/Frameworks/AudioToolbox.framework/Versions/A/AudioToolbox 0x7fff836ce000 - 0x7fff836ceff7 com.apple.Cocoa 6.6 (???) 
<68B0BE46-6E24-C96F-B341-054CF9E8F3B6> /System/Library/Frameworks/Cocoa.framework/Versions/A/Cocoa 0x7fff83726000 - 0x7fff8376ffef libGLU.dylib ??? (???) /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLU.dylib 0x7fff837f3000 - 0x7fff837f3ff7 com.apple.Accelerate.vecLib 3.6 (vecLib 3.6) <4CCE5D69-F1B3-8FD3-1483-E0271DB2CCF3> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/vecLib 0x7fff83800000 - 0x7fff83800ff7 com.apple.ApplicationServices 38 (38) <10A0B9E9-4988-03D4-FC56-DDE231A02C63> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices 0x7fff8380f000 - 0x7fff83a95fff com.apple.security 6.1.1 (37594) <1B4E1ABD-1BB3-DA49-F574-0EEB23E73C6A> /System/Library/Frameworks/Security.framework/Versions/A/Security 0x7fff83a96000 - 0x7fff83c54fff libicucore.A.dylib 40.0.0 (compatibility 1.0.0) <781E7B63-2AD0-E9BA-927C-4521DB616D02> /usr/lib/libicucore.A.dylib 0x7fff83c55000 - 0x7fff8464bfff com.apple.AppKit 6.6.7 (1038.35) <9F4DF818-9DB9-98DA-490C-EF29EA757A97> /System/Library/Frameworks/AppKit.framework/Versions/C/AppKit 0x7fff8464c000 - 0x7fff8464fff7 com.apple.securityhi 4.0 (36638) <38935851-09E4-DDAB-DB1D-30ADC39F7ED0> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/SecurityHI.framework/Versions/A/SecurityHI 0x7fff84650000 - 0x7fff848d3ff7 com.apple.Foundation 6.6.4 (751.42) <9A99D378-E97A-8C0F-3857-D0FAA30FCDD5> /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation 0x7fff848d4000 - 0x7fff84983fff edu.mit.Kerberos 6.5.10 (6.5.10) /System/Library/Frameworks/Kerberos.framework/Versions/A/Kerberos 0x7fff84984000 - 0x7fff84a01fef libstdc++.6.dylib 7.9.0 (compatibility 7.0.0) <35ECA411-2C08-FD7D-11B1-1B7A04921A5C> /usr/lib/libstdc++.6.dylib 0x7fff84a02000 - 0x7fff84a10ff7 libkxld.dylib ??? (???) <4016E9E6-0645-5384-A697-2775B5228113> /usr/lib/system/libkxld.dylib 0x7fff84a11000 - 0x7fff84a12fff liblangid.dylib ??? (???) /usr/lib/liblangid.dylib 0x7fff84a13000 - 0x7fff84a5ffff libauto.dylib ??? (???) /usr/lib/libauto.dylib 0x7fff84a60000 - 0x7fff84dfdfe7 com.apple.QuartzCore 1.6.3 (227.34) <215222AF-B30A-7CE5-C46C-1A766C1D1D2E> /System/Library/Frameworks/QuartzCore.framework/Versions/A/QuartzCore 0x7fff84dfe000 - 0x7fff84e02ff7 libCGXType.A.dylib 545.0.0 (compatibility 64.0.0) <63F77AC8-84CB-0C2F-8D2B-190EE5CCDB45> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCGXType.A.dylib 0x7fff84e03000 - 0x7fff84e0afff com.apple.OpenDirectory 10.6 (10.6) <4200CFB0-DBA1-62B8-7C7C-91446D89551F> /System/Library/Frameworks/OpenDirectory.framework/Versions/A/OpenDirectory 0x7fff84f04000 - 0x7fff84f05fff libffi.dylib ??? (???) /usr/lib/libffi.dylib 0x7fff84f12000 - 0x7fff84f64ff7 com.apple.HIServices 1.8.2 (???) <7C91D07D-FA20-0882-632F-0CAE4FAC2B79> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/HIServices.framework/Versions/A/HIServices 0x7fff8514a000 - 0x7fff85150ff7 com.apple.CommerceCore 1.0 (6) /System/Library/PrivateFrameworks/CommerceKit.framework/Versions/A/Frameworks/CommerceCore.framework/Versions/A/CommerceCore 0x7fff852c3000 - 0x7fff852e0ff7 libPng.dylib ??? (???) <14043CBC-329F-4009-299E-DEE411E16134> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libPng.dylib 0x7fff852e1000 - 0x7fff85322fef com.apple.QD 3.36 (???) 
<5DC41E81-32C9-65B2-5528-B33E934D5BB4> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/QD.framework/Versions/A/QD 0x7fff85398000 - 0x7fff853bdff7 com.apple.CoreVideo 1.6.2 (45.6) /System/Library/Frameworks/CoreVideo.framework/Versions/A/CoreVideo 0x7fff853c8000 - 0x7fff85407fef libncurses.5.4.dylib 5.4.0 (compatibility 5.4.0) /usr/lib/libncurses.5.4.dylib 0x7fff85572000 - 0x7fff855b9ff7 com.apple.coreui 2 (114) /System/Library/PrivateFrameworks/CoreUI.framework/Versions/A/CoreUI 0x7fff85614000 - 0x7fff8564efff libssl.0.9.8.dylib 0.9.8 (compatibility 0.9.8) /usr/lib/libssl.0.9.8.dylib 0x7fff8564f000 - 0x7fff85a92fef libLAPACK.dylib 219.0.0 (compatibility 1.0.0) <0CC61C98-FF51-67B3-F3D8-C5E430C201A9> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib 0x7fff85a95000 - 0x7fff85b4afe7 com.apple.ColorSync 4.6.3 (4.6.3) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ColorSync.framework/Versions/A/ColorSync 0x7fff85b5d000 - 0x7fff85b63ff7 com.apple.DiskArbitration 2.3 (2.3) <857F6E43-1EF4-7D53-351B-10DE0A8F992A> /System/Library/Frameworks/DiskArbitration.framework/Versions/A/DiskArbitration 0x7fff85c74000 - 0x7fff85d31ff7 com.apple.CoreServices.OSServices 357 (357) <718F0719-DC9F-E392-7C64-9D7DFE3D02E2> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/OSServices.framework/Versions/A/OSServices 0x7fff860c1000 - 0x7fff860f2fff libGLImage.dylib ??? (???) <1A8E58CF-FA2F-14F7-A097-D34EEA8A7D03> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGLImage.dylib 0x7fff860f3000 - 0x7fff8611bfff com.apple.DictionaryServices 1.1.2 (1.1.2) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/DictionaryServices.framework/Versions/A/DictionaryServices 0x7fff86303000 - 0x7fff86305fff com.apple.print.framework.Print 6.1 (237.1) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Print.framework/Versions/A/Print 0x7fff8640d000 - 0x7fff8642dff7 com.apple.DirectoryService.Framework 3.6 (621.9) /System/Library/Frameworks/DirectoryService.framework/Versions/A/DirectoryService 0x7fff864de000 - 0x7fff86bdb06f com.apple.CoreGraphics 1.545.0 (???) <356D59D6-1DD1-8BFF-F9B3-1CE51D2F1EC7> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/CoreGraphics 0x7fff86bdc000 - 0x7fff86c5bfe7 com.apple.audio.CoreAudio 3.2.6 (3.2.6) <1DD64A62-0DE4-223F-F781-B272FECF80F0> /System/Library/Frameworks/CoreAudio.framework/Versions/A/CoreAudio 0x7fff86c5c000 - 0x7fff86c71ff7 com.apple.LangAnalysis 1.6.6 (1.6.6) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/LangAnalysis.framework/Versions/A/LangAnalysis 0x7fff86c72000 - 0x7fff86d89fef libxml2.2.dylib 10.3.0 (compatibility 10.0.0) /usr/lib/libxml2.2.dylib 0x7fff86e63000 - 0x7fff86f84fe7 libcrypto.0.9.8.dylib 0.9.8 (compatibility 0.9.8) <48AEAFE1-21F4-B3C8-4199-35AD5E8D0613> /usr/lib/libcrypto.0.9.8.dylib 0x7fff86fee000 - 0x7fff8704efe7 com.apple.framework.IOKit 2.0 (???) 
/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit 0x7fff8704f000 - 0x7fff8705cfe7 libCSync.A.dylib 545.0.0 (compatibility 64.0.0) <397B9057-5CDF-3B19-4E61-9DFD49369375> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libCSync.A.dylib 0x7fff8705d000 - 0x7fff87080fff com.apple.opencl 12.3 (12.3) /System/Library/Frameworks/OpenCL.framework/Versions/A/OpenCL 0x7fff87081000 - 0x7fff87166fef com.apple.DesktopServices 1.5.9 (1.5.9) <27890B2C-0CD2-7C27-9D0C-D5952C5E8438> /System/Library/PrivateFrameworks/DesktopServicesPriv.framework/Versions/A/DesktopServicesPriv 0x7fff87167000 - 0x7fff87167ff7 com.apple.Accelerate 1.6 (Accelerate 1.6) <15DF8B4A-96B2-CB4E-368D-DEC7DF6B62BB> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate 0x7fff872b5000 - 0x7fff872baff7 com.apple.CommonPanels 1.2.4 (91) <4D84803B-BD06-D80E-15AE-EFBE43F93605> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/CommonPanels.framework/Versions/A/CommonPanels 0x7fff872bb000 - 0x7fff87305ff7 com.apple.Metadata 10.6.3 (507.15) <5170FCE0-ED6C-2E3E-AB28-1DDE3F628FC5> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/Metadata.framework/Versions/A/Metadata 0x7fff8738f000 - 0x7fff87550fff libSystem.B.dylib 125.2.1 (compatibility 1.0.0) <71E6D4C9-F945-6EC2-998C-D61AD590DAB6> /usr/lib/libSystem.B.dylib 0x7fff87551000 - 0x7fff87606fe7 com.apple.ink.framework 1.3.3 (107) /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Ink.framework/Versions/A/Ink 0x7fff87607000 - 0x7fff8777efe7 com.apple.CoreFoundation 6.6.4 (550.42) <770C572A-CF70-168F-F43C-242B9114FCB5> /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation 0x7fff8777f000 - 0x7fff877cefef libTIFF.dylib ??? (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libTIFF.dylib 0x7fff878ae000 - 0x7fff878bdfff com.apple.opengl 1.6.12 (1.6.12) /System/Library/Frameworks/OpenGL.framework/Versions/A/OpenGL 0x7fff87913000 - 0x7fff87916fff com.apple.help 1.3.1 (41) <54B79BA2-B71B-268E-8752-5C8EE00E49E4> /System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/Help.framework/Versions/A/Help 0x7fff879e5000 - 0x7fff87a75fff com.apple.SearchKit 1.3.0 (1.3.0) <4175DC31-1506-228A-08FD-C704AC9DF642> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/SearchKit.framework/Versions/A/SearchKit 0x7fff87a90000 - 0x7fff87ac2fff libTrueTypeScaler.dylib ??? (???) /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ATS.framework/Versions/A/Resources/libTrueTypeScaler.dylib 0x7fff87b0b000 - 0x7fff87b1cff7 libz.1.dylib 1.2.3 (compatibility 1.0.0) /usr/lib/libz.1.dylib 0x7fff87bf2000 - 0x7fff87bf4fff libRadiance.dylib ??? (???) <76438F90-DD4B-9941-9367-F2DFDF927876> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libRadiance.dylib 0x7fff87de6000 - 0x7fff87e07fff libresolv.9.dylib 41.0.0 (compatibility 1.0.0) <6993F348-428F-C97E-7A84-7BD2EDC46A62> /usr/lib/libresolv.9.dylib 0x7fff87e08000 - 0x7fff87ea8fff com.apple.LaunchServices 362.2 (362.2) /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/LaunchServices.framework/Versions/A/LaunchServices 0x7fff87ea9000 - 0x7fff87ecffe7 libJPEG.dylib ??? (???) 
<6690F15D-E970-2678-430E-590A94F5C8E9> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/Resources/libJPEG.dylib 0x7fff87ed0000 - 0x7fff87ee6fef libbsm.0.dylib ??? (???) <42D3023A-A1F7-4121-6417-FCC6B51B3E90> /usr/lib/libbsm.0.dylib 0x7fff881a7000 - 0x7fff881eaff7 libRIP.A.dylib 545.0.0 (compatibility 64.0.0) <7E30B5F6-99FD-C716-8670-5DD4B4BAED72> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/CoreGraphics.framework/Versions/A/Resources/libRIP.A.dylib 0x7fff881eb000 - 0x7fff881eeff7 libCoreVMClient.dylib ??? (???) <609598E6-195D-E5D4-3B92-AE8D9768829C> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libCoreVMClient.dylib 0x7fff881ef000 - 0x7fff88237ff7 libvDSP.dylib 268.0.1 (compatibility 1.0.0) <98FC4457-F405-0262-00F7-56119CA107B6> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvDSP.dylib 0x7fff88238000 - 0x7fff883effef com.apple.ImageIO.framework 3.0.4 (3.0.4) <2CB9997A-A28D-80BC-5921-E7D50BBCACA7> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/ImageIO.framework/Versions/A/ImageIO 0x7fff883f0000 - 0x7fff883f0ff7 com.apple.Carbon 150 (152) <19B37B7B-1594-AD0A-7F14-FA2F85AD7241> /System/Library/Frameworks/Carbon.framework/Versions/A/Carbon 0x7fff883f1000 - 0x7fff88403fe7 libsasl2.2.dylib 3.15.0 (compatibility 3.0.0) <76B83C8D-8EFE-4467-0F75-275648AFED97> /usr/lib/libsasl2.2.dylib 0x7fff889ca000 - 0x7fff889d6fff libbz2.1.0.dylib 1.0.5 (compatibility 1.0.0) <5FFC8295-2DF7-B54C-3766-756842C53731> /usr/lib/libbz2.1.0.dylib 0x7fff88c42000 - 0x7fff88c97ff7 com.apple.framework.familycontrols 2.0.2 (2020) /System/Library/PrivateFrameworks/FamilyControls.framework/Versions/A/FamilyControls 0x7fff88e17000 - 0x7fff8914bfff com.apple.CoreServices.CarbonCore 861.23 (861.23) <08F360FA-1771-4F0B-F356-BEF68BB9D421> /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/CarbonCore 0x7fff891c6000 - 0x7fff891dafff libGL.dylib ??? (???) <1EB1BD0F-C17F-55DF-B8B4-8E9CF99359D4> /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGL.dylib 0x7fff891db000 - 0x7fff891efff7 com.apple.speech.synthesis.framework 3.10.35 (3.10.35) <621B7415-A0B9-07A7-F313-36BEEDD7B132> /System/Library/Frameworks/ApplicationServices.framework/Versions/A/Frameworks/SpeechSynthesis.framework/Versions/A/SpeechSynthesis 0x7fffffe00000 - 0x7fffffe01fff libSystem.B.dylib ??? (???) 
<71E6D4C9-F945-6EC2-998C-D61AD590DAB6> /usr/lib/libSystem.B.dylib Model: MacBookPro6,2, BootROM MBP61.0057.B0C, 2 processors, Intel Core i7, 2.66 GHz, 8 GB, SMC 1.58f16 Graphics: NVIDIA GeForce GT 330M, NVIDIA GeForce GT 330M, PCIe, 512 MB Graphics: Intel HD Graphics, Intel HD Graphics, Built-In, 288 MB Memory Module: global_name AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x93), Broadcom BCM43xx 1.0 (5.10.131.36.1) Bluetooth: Version 2.3.8f7, 2 service, 12 devices, 1 incoming serial ports Network Service: AirPort, AirPort, en1 Serial ATA Device: ST95005620AS, 465.76 GB Serial ATA Device: HL-DT-ST DVDRW GS23N USB Device: Hub, 0x0424 (SMSC), 0x2514, 0xfd100000 USB Device: Built-in iSight, 0x05ac (Apple Inc.), 0x8507, 0xfd110000 USB Device: IR Receiver, 0x05ac (Apple Inc.), 0x8242, 0xfd120000 USB Device: Hub, 0x0424 (SMSC), 0x2514, 0xfa100000 USB Device: Internal Memory Card Reader, 0x05ac (Apple Inc.), 0x8403, 0xfa130000 USB Device: Apple Internal Keyboard / Trackpad, 0x05ac (Apple Inc.), 0x0236, 0xfa120000 USB Device: BRCM2070 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0xfa110000 USB Device: Bluetooth USB Host Controller, 0x05ac (Apple Inc.), 0x8218, 0xfa113000 From mttate at usgs.gov Fri Mar 18 13:01:17 2011 From: mttate at usgs.gov (Michael T Tate) Date: Fri, 18 Mar 2011 12:01:17 -0500 Subject: [SciPy-User] Calculating avgerage values on 20 minute time frequency Message-ID: I have a time series date set that contains a date/time field and data fields (sonic temperature, WS-U, WS-V, WS-W). The data are recorded at 10hz. I would like to calculate an average for each of the data fields on a 20min frequency so it can be matched up with data collected by another instrument. It is pretty straightforward to calculate hourly averages using convert from scikits.timeseries. Does anyone have any ideas on how I would calculate 20min averages of the data? Thanks in advance Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From mr.peter.baek at gmail.com Tue Mar 22 06:07:07 2011 From: mr.peter.baek at gmail.com (Peter Baek) Date: Tue, 22 Mar 2011 11:07:07 +0100 Subject: [SciPy-User] Problem with scipy.interpolate.RectBivariateSpline Message-ID: Hi, I find it strange that scipy.interpolate.RectBivariateSpline cannot evaluate a random vector. When i evaluate an ordered vector using e.g. linspace it works fine, but when i try a random vector it crashes. Please help me find a way to evaluate an unordered vector. Thanks, Peter. Here is a test bench code that demonstrates the problem: ----------------------------------------------BEGIN CODE------------------------------------------------------------------------------ from pylab import * from scipy import interpolate """ first create a 2D space to do interpolation within """ Nx=11 Ny=6 f=zeros((Ny,Nx)) x=linspace(0,Nx-1,Nx) y=linspace(0,Ny-1,Ny) for i in arange(Nx): for j in arange(Ny): f[j,i]=x[i]+y[j]**2 matx=kron(ones((Ny,1)),x) maty=kron(ones((1,Nx)),y.reshape(-1,1)) print f print matx print maty figure(1) c=contour(matx,maty,f) clabel(c) title('surface which i want to interpolate in') """ Now create an interpolation... """ finter_x=interpolate.RectBivariateSpline(y,x,f,kx=3,ky=3) """ and evaluate using a linspace vector... """ xi1=[5] yi1=linspace(0,5,100) print yi1 figure(2) plot(yi1,finter_x(yi1,xi1)) title('slice at x=5 ordered vector') """ and evaluate using a random vector... 
""" xi2=[5] yi2=rand(100)*5 #yi2=linspace(0,5,100) print yi2 figure(3) plot(yi1,finter_x(yi2,xi2)) title('slice at x=5 random vector') show() --------------------------- END CODE ----------------------------- From dug.armadale at gmail.com Wed Mar 23 04:05:19 2011 From: dug.armadale at gmail.com (Douglas Macdonald) Date: Wed, 23 Mar 2011 08:05:19 +0000 Subject: [SciPy-User] Out of data Cookbook/InputOutput and load function Message-ID: Hi, I was following the page http://www.scipy.org/Cookbook/InputOutput but it seems to be out of date, "NotImplementedError: pylab no longer provides a load function, though the old pylab function is still available as matplotlib.mlab.load (you can refer to it in pylab as "mlab.load"). However, for plain text files, we recommend numpy.loadtxt, which was inspired by the old pylab.load but now has more features. For loading numpy arrays, we recommend numpy.load, and its analog numpy.save, which are available in pylab as np.load and np.save." I tried to amend the page but don't have permissions. Also, I don't know enough to put more than a note. Would someone who is knowledgeable and with permissions like to update this? Thank you, Douglas From peter at spuhler.net Wed Mar 23 12:58:37 2011 From: peter at spuhler.net (Peter Spuhler) Date: Wed, 23 Mar 2011 10:58:37 -0600 Subject: [SciPy-User] numpy svd Message-ID: I've been porting some IDL code over to scipy and ran into a problem with linalg.svd() The following would give me an error message running in a 32-bit environment (epd 7.0.2). >>>numpy.linalg.svd(numpy.ones((10,10000))) Traceback (most recent call last): File "C:\Program Files (x86)\Wing IDE 4.0\src\debug\tserver\_sandbox.py", line 1, in # Used internally for debug sandbox under external interpreter File "C:\Python27\Lib\site-packages\numpy\linalg\linalg.py", line 1324, in svd vt = vt.transpose().astype(result_t) MemoryError: The same function in 32-bit IDL seems to work fine (as well as in Matlab and Mathematica) IDL>la_svd,dblarr(10,10000)+1,w,u,v They both use the gesdd lapack function on the backend. Why would the numpy routine have problems with this calculation when the seemingly similar calculation works fine using IDL or Matlab? -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielstefanmader at googlemail.com Wed Mar 23 13:39:09 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Wed, 23 Mar 2011 18:39:09 +0100 Subject: [SciPy-User] OT warning! Re: ANN: Spyder v2.0.8 In-Reply-To: References: <4D6AD891.3020304@gmail.com> <4D6FEC94.6000003@noaa.gov> <4D6FFF66.4010706@gmail.com> <4D701899.5090409@noaa.gov> <4D703809.403@gmail.com> Message-ID: I would SO love it if Spyder would make that configurable. Honestly :) I'll open an issue on this, I guess. 2011/3/4 Krystian Rosi?ski : > Guido van Rossum wrote in 2006: > > "(...) Fortunately it's easy to separate the two. If it uses two-space > indents, it's corporate code; if it uses four-space indents, it's open > source. (If it uses tabs, I didn't write it! :-)" > > http://www.artima.com/weblogs/viewpost.jsp?thread=143947 > > On 4 Mar, 01:53, Stef Mientki wrote: >> On 03-03-2011 23:39, Christopher Barker wrote:> On 3/3/11 12:51 PM, Alan G Isaac wrote: >> >> On 3/3/2011 2:31 PM, Christopher Barker wrote: >> >>> four spaces is a well established standard >> >> ... for the standard library. ?Individual projects >> >> set their own standards. 
>> > OK -- PEP 8 is only _official_ for the standard library, but if you
>> > define "standard" as "the way most people do it", then four spaces is it.
>>
>> >> (Unfortunately, PEP 8 came
>> >> down on the wrong side of tabs vs. spaces.)
>> > clearly debatable, but my point is that it is a good idea for all
>> > projects to use the same conventions, and the ONLY one that makes any
>> > sense at this point in that context is four spaces.
>>
>> Using a standard might be a good idea,
>> but the standard is depending on the environment.
>> Is Python the environment or the set of actually used tools.
>> For me and the people around me,
>> the programs we make, are the environment.
>> We use PHP, Delphi, C, JAL, JS, Matlab, Labview, ....
>> and in all these languages me and my environment uses 2 spaces.
>> So the standard for Python is also 2 spaces.
>>
>> Secondly, the libraries and programs that we put in the open source community,
>> by who will they be (probably) changed and maintained?
>> So it seems to me perfectly legal to use 2 spaces as the standard.
>>
>> cheers,
>> Stef Mientki
>>
>> > Python's "there should be only one obvious way to do it" philosophy
>> > applies here.
>>
>> > -Chris
>>
>> _______________________________________________
>> SciPy-User mailing list
>> SciPy-U... at scipy.org
>> http://mail.scipy.org/mailman/listinfo/scipy-user
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From mwtoews at gmail.com  Wed Mar 23 14:11:07 2011
From: mwtoews at gmail.com (Mike Toews)
Date: Thu, 24 Mar 2011 07:11:07 +1300
Subject: [SciPy-User] Mistake in the scipy-wiki/cookbook/KDTree
In-Reply-To: <002201cbdedf$6d9a8eb0$48cfac10$@com.au>
References: <002201cbdedf$6d9a8eb0$48cfac10$@com.au>
Message-ID: 

On 10 March 2011 17:55, Gero Putzar wrote:
> Preferably to Mike Toews or whoever supplied this wiki page.

I just corrected a single typo; Sturla Molden supplied the wiki page content. The edit history can be viewed with the "info" button:
http://www.scipy.org/Cookbook/KDTree?action=info

-Mike

From sturla at molden.no  Wed Mar 23 14:50:30 2011
From: sturla at molden.no (Sturla Molden)
Date: Wed, 23 Mar 2011 19:50:30 +0100
Subject: [SciPy-User] numpy svd
In-Reply-To: 
References: 
Message-ID: <4D8A40F6.30609@molden.no>

Den 23.03.2011 17:58, skrev Peter Spuhler:
> I've been porting some IDL code over to scipy and ran into a problem
> with linalg.svd()
> The following would give me an error message running in a 32-bit
> environment (epd 7.0.2).
> >>>numpy.linalg.svd(numpy.ones((10,10000)))
> Traceback (most recent call last):
>   File "C:\Program Files (x86)\Wing IDE 4.0\src\debug\tserver\_sandbox.py", line 1, in
>     # Used internally for debug sandbox under external interpreter
>   File "C:\Python27\Lib\site-packages\numpy\linalg\linalg.py", line 1324, in svd
>     vt = vt.transpose().astype(result_t)
> MemoryError:
>
> The same function in 32-bit IDL seems to work fine (as well as in
> Matlab and Mathematica)
> IDL>la_svd,dblarr(10,10000)+1,w,u,v
>
> They both use the gesdd lapack function on the backend.
> Why would the numpy routine have problems with this calculation when
> the seemingly similar calculation works fine using IDL or Matlab?
>

Your largest returned array is 762 MB, so it should not run out of memory. But the transposition creates one or two temporary arrays (one for transpose, and possibly another for astype), which is above or close to the 2 GB limit for 32-bit systems.
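To put rough numbers on that (back-of-the-envelope only; the full_matrices remark at the end is an aside of mine, not something raised in the thread):

# numpy.linalg.svd(numpy.ones((10, 10000))) with the default
# full_matrices=True returns vt with shape (10000, 10000) in float64:
vt_bytes = 10000 * 10000 * 8
print(vt_bytes / 2.0 ** 20)   # ~762.9 MiB, the "762 MB" figure above
# vt.transpose().astype(result_t) then materialises at least one more
# copy of that size, so peak usage heads toward the ~2 GB a 32-bit
# process can address.  (Aside: full_matrices=False would shrink vt to
# shape (10, 10000) and sidestep the problem entirely.)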
Try to pass a Fortran contiguous array to linalg.svd to suppress this. If that does not help, you can call LAPACK gesdd directly (see below). Remember to pass a Fortran contiguous array to avoid a temporary copy by f2py. import scipy as sp import scipy.linalg from sp.linalg.flapack import dgesdd 64-bit will probably also solve the problem, if you have enough RAM. Sturla From sturla at molden.no Wed Mar 23 14:54:05 2011 From: sturla at molden.no (Sturla Molden) Date: Wed, 23 Mar 2011 19:54:05 +0100 Subject: [SciPy-User] Mistake in the scipy-wiki/cookbook/KDTree In-Reply-To: References: <002201cbdedf$6d9a8eb0$48cfac10$@com.au> Message-ID: <4D8A41CD.2080005@molden.no> Den 23.03.2011 19:11, skrev Mike Toews: > > I just corrected a single typo; Sturla Molden supplied the wiki page > content. The edit history can be viewed with the "info" button: > http://www.scipy.org/Cookbook/KDTree?action=info We should probably take this cookbook page away to encourage scipy.spatial.KDTree and cKDTree instead. It was written previous to those. Sturla From cweisiger at msg.ucsf.edu Wed Mar 23 18:00:39 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 23 Mar 2011 15:00:39 -0700 Subject: [SciPy-User] 2D slice of transformed data Message-ID: In preface, I'm not remotely an expert at array manipulation here. I'm an experienced programmer, but not an experienced *scientific* programmer. I'm sure what I want to do is possible, and I'm pretty certain it's even possible to do efficiently, but figuring out the actual implementation is giving me fits. I have two four-dimensional arrays of data: time, Z, Y, X. These represent microscopy data taken of the same sample with two different cameras. Their views don't quite match up if you overlay them, so we have a three-dimensional transform to align one array with the other. That transformation consists of X, Y, and Z translations (shifts), rotation about the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 parameters. I can perform the transformation on the data without difficulty with ndimage.affine_transform, but because we typically have hundreds of millions of pixels in one array, it takes a moderately long time. A representative array would be 30x50x512x512 or thereabouts. I'm writing a program to allow users to adjust the transformation and see how well-aligned the data looks from several perspectives. In addition to the traditional XY view, we also want to show XZ and YZ views, as well as kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D slices of the transformed data in a timely fashion. These slices are always perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems like the fast way to do this would be to take each pixel in the desired slice, apply the reverse transform, and figure out where in the original data it came from. But I'm having trouble figuring out how to efficiently do this. I could construct a 3D array with shape (length of axis 1), (length of axis 2), (4), such that each position in the array is a 4-tuple of the coordinates of the pixel in the desired slice. For example, if doing a YX slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then perhaps there'd be some way to efficiently apply the inverse transform to each coordinate tuple, then using ndimage.map_coordinates to turn those into pixel data. 
But I haven't managed to figure that out yet. By any chance is this already solved? If not, any suggestions / assistance would be wonderful. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From gruben at bigpond.net.au Wed Mar 23 19:00:54 2011 From: gruben at bigpond.net.au (gary ruben) Date: Thu, 24 Mar 2011 10:00:54 +1100 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: References: Message-ID: I'm not really sure of the best approach here, but you might consider downsampling your images to speed things up. Then if you can get valid parameters for the affine transfomation, apply these to the full volume. Take a look at SIFT registration. scikits-image can read sift features generated by an external program http://stefanv.github.com/scikits.image/api/scikits.image.io.html#load-sift This may be able to register your images. I have a vague memory that there may also be an approach using the radon transform that might work for your case. Gary R. On Thu, Mar 24, 2011 at 9:00 AM, Chris Weisiger wrote: > In preface, I'm not remotely an expert at array manipulation here. I'm an > experienced programmer, but not an experienced *scientific* programmer. I'm > sure what I want to do is possible, and I'm pretty certain it's even > possible to do efficiently, but figuring out the actual implementation is > giving me fits. > > I have two four-dimensional arrays of data: time, Z, Y, X. These represent > microscopy data taken of the same sample with two different cameras. Their > views don't quite match up if you overlay them, so we have a > three-dimensional transform to align one array with the other. That > transformation consists of X, Y, and Z translations (shifts), rotation about > the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 > parameters. I can perform the transformation on the data without difficulty > with ndimage.affine_transform, but because we typically have hundreds of > millions of pixels in one array, it takes a moderately long time. A > representative array would be 30x50x512x512 or thereabouts. > > I'm writing a program to allow users to adjust the transformation and see > how well-aligned the data looks from several perspectives. In addition to > the traditional XY view, we also want to show XZ and YZ views, as well as > kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D > slices of the transformed data in a timely fashion. These slices are always > perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, > or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems > like the fast way to do this would be to take each pixel in the desired > slice, apply the reverse transform, and figure out where in the original > data it came from. But I'm having trouble figuring out how to efficiently do > this. > > I could construct a 3D array with shape (length of axis 1), (length of axis > 2), (4), such that each position in the array is a 4-tuple of the > coordinates of the pixel in the desired slice. For example, if doing a YX > slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, > 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then > perhaps there'd be some way to efficiently apply the inverse transform to > each coordinate tuple, then using ndimage.map_coordinates to turn those into > pixel data. But I haven't managed to figure that out yet. > > By any chance is this already solved? 
If not, any suggestions / assistance > would be wonderful. > > -Chris > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > >

From david_baddeley at yahoo.com.au  Wed Mar 23 20:07:42 2011
From: david_baddeley at yahoo.com.au (David Baddeley)
Date: Wed, 23 Mar 2011 17:07:42 -0700 (PDT)
Subject: [SciPy-User] 2D slice of transformed data
In-Reply-To: References: Message-ID: <104256.41409.qm@web113413.mail.gq1.yahoo.com>

I think you are on to the right idea with using transformed coordinates with map_coordinates. I think the easiest way (and fastest way) to get your set of transformed coordinates might be something like the following:

assuming Ai is the inverse of your transformation matrix ...

X, Y, Z, T = mgrid[whatever your goal slices are]

Xt = Ai[0,0]*X + Ai[0,1]*Y + Ai[0,2]*Z .....
Yt = Ai[1,0]*X ........
Zt = ...
Tt = ...

this lets you still exploit scipy's vector operations on your arrays of coordinates, rather than having to separately multiply each tuple with the matrix. I'm sure you can probably write that as a loop to avoid all the redundant code, eg:

def transformCoords(coords, A):
    #coords is a list of coordinate arrays, eg [X, Y, Z, T]
    #A is the transformation matrix
    ret = []
    for i in range(len(coords)):
        #accumulate the i-th output coordinate as a weighted sum of the inputs
        r_i = 0
        for j in range(len(coords)):
            r_i += A[i, j]*coords[j]
        ret.append(r_i)
    return ret

cheers,
David

________________________________
From: Chris Weisiger
To: scipy-user at scipy.org
Sent: Thu, 24 March, 2011 11:00:39 AM
Subject: [SciPy-User] 2D slice of transformed data

In preface, I'm not remotely an expert at array manipulation here. I'm an experienced programmer, but not an experienced *scientific* programmer. I'm sure what I want to do is possible, and I'm pretty certain it's even possible to do efficiently, but figuring out the actual implementation is giving me fits.

I have two four-dimensional arrays of data: time, Z, Y, X. These represent microscopy data taken of the same sample with two different cameras. Their views don't quite match up if you overlay them, so we have a three-dimensional transform to align one array with the other. That transformation consists of X, Y, and Z translations (shifts), rotation about the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 parameters. I can perform the transformation on the data without difficulty with ndimage.affine_transform, but because we typically have hundreds of millions of pixels in one array, it takes a moderately long time. A representative array would be 30x50x512x512 or thereabouts.

I'm writing a program to allow users to adjust the transformation and see how well-aligned the data looks from several perspectives. In addition to the traditional XY view, we also want to show XZ and YZ views, as well as kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D slices of the transformed data in a timely fashion. These slices are always perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems like the fast way to do this would be to take each pixel in the desired slice, apply the reverse transform, and figure out where in the original data it came from. But I'm having trouble figuring out how to efficiently do this.

I could construct a 3D array with shape (length of axis 1), (length of axis 2), (4), such that each position in the array is a 4-tuple of the coordinates of the pixel in the desired slice.
For example, if doing a YX slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then perhaps there'd be some way to efficiently apply the inverse transform to each coordinate tuple, then using ndimage.map_coordinates to turn those into pixel data. But I haven't managed to figure that out yet. By any chance is this already solved? If not, any suggestions / assistance would be wonderful. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From cweisiger at msg.ucsf.edu Wed Mar 23 21:21:45 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Wed, 23 Mar 2011 18:21:45 -0700 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: <104256.41409.qm@web113413.mail.gq1.yahoo.com> References: <104256.41409.qm@web113413.mail.gq1.yahoo.com> Message-ID: One thought that occurred to me shortly after leaving work: I know that the transformation will produce a rectangle (right? There should be no way to use the transformations specified to turn a rectangle into anything but a different rectangle). So why not transform the four corners of the rectangle, then step between them to generate coordinates to use with map_coordinates? So for example, if I want to generate a slice that has dimensions 40x512, and the transformed corners are A, B, C, and D, then I can generate the reverse-transformed grid coordinates as A, A + AB/40, A + 2AB/40, ... A + AC/512, A + AC/512 + AB/40, A + AC/512 + 2AB/40, ... A + 2AC/512, ... That would require iterating over every pixel in the desired slice (unless there's some way to vectorize that), but I don't see why it otherwise wouldn't work. Then just toss those coordinates into map_coordinates and I have my interpolated slice data. -Chris On Wed, Mar 23, 2011 at 5:07 PM, David Baddeley wrote: > I think you are on to the right idea with using transformed coordinates > with map_coordinates. I think the easiest way (and fastest way) to get your > set of transformed coordinates might be something like the following: > > assuming Ai is the inverse of your transformation matrix ... > > X, Y, Z, T = mgrid[whatever your goal slices are] > > Xt = Ai[0,0]*X + Ai[0,1]*Y + Ai[0,2]*Z ..... > Yt = Ai[1,0]*X ........ > Tt = ... > Tt = ... > > this lets you still exploit scipys vector operations on your arrays of > coordinates, rather than having to separately multiply each tuple with the > matrix. I'm you can probably write that as a loop to avoid all the redundant > code. eg: > > def transformCoords(coords, A): > #coords is a list of coordinate arrays eg [X, Y, Z, T] > #A is the transformation matrix > > ret = [] > for i in range(len(coords)): > r_i = 0 > for j in range(len(coords)): > r_j += A[i,j]*coords[j] > > ret.append(r_i) > > return ret > > cheers, > David > > ------------------------------ > *From:* Chris Weisiger > *To:* scipy-user at scipy.org > *Sent:* Thu, 24 March, 2011 11:00:39 AM > *Subject:* [SciPy-User] 2D slice of transformed data > > In preface, I'm not remotely an expert at array manipulation here. I'm an > experienced programmer, but not an experienced *scientific* programmer. I'm > sure what I want to do is possible, and I'm pretty certain it's even > possible to do efficiently, but figuring out the actual implementation is > giving me fits. > > I have two four-dimensional arrays of data: time, Z, Y, X. These represent > microscopy data taken of the same sample with two different cameras. 
Their > views don't quite match up if you overlay them, so we have a > three-dimensional transform to align one array with the other. That > transformation consists of X, Y, and Z translations (shifts), rotation about > the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 > parameters. I can perform the transformation on the data without difficulty > with ndimage.affine_transform, but because we typically have hundreds of > millions of pixels in one array, it takes a moderately long time. A > representative array would be 30x50x512x512 or thereabouts. > > I'm writing a program to allow users to adjust the transformation and see > how well-aligned the data looks from several perspectives. In addition to > the traditional XY view, we also want to show XZ and YZ views, as well as > kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D > slices of the transformed data in a timely fashion. These slices are always > perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, > or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems > like the fast way to do this would be to take each pixel in the desired > slice, apply the reverse transform, and figure out where in the original > data it came from. But I'm having trouble figuring out how to efficiently do > this. > > I could construct a 3D array with shape (length of axis 1), (length of axis > 2), (4), such that each position in the array is a 4-tuple of the > coordinates of the pixel in the desired slice. For example, if doing a YX > slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, > 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then > perhaps there'd be some way to efficiently apply the inverse transform to > each coordinate tuple, then using ndimage.map_coordinates to turn those into > pixel data. But I haven't managed to figure that out yet. > > By any chance is this already solved? If not, any suggestions / assistance > would be wonderful. > > -Chris > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From isaiah.norton at gmail.com Wed Mar 23 21:22:46 2011 From: isaiah.norton at gmail.com (Isaiah Norton) Date: Wed, 23 Mar 2011 21:22:46 -0400 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: References: Message-ID: Hi Chris, It's not strictly Python, but VTK and ITK are the heavy-iron for this sort of thing (py wrappings available). There are several tools built on these libraries to provide user-friendly 3D/4D registration, visualization, etc. GoFigure2: http://gofigure2.sourceforge.net/ - very microscopy oriented. 4D support. linux/mac/win V3D: http://penglab.janelia.org/proj/v3d/V3D/About_V3D.html - also 4D and triplatform. BioImageXD - mostly written in Python glue for vtk/itk. If you want to build something custom in Python, check out MayaVi - it uses VTK under the hood so the transforms will be handled fast in C++, but has nice pythonic tvtk syntax and native numpy support. -Isaiah On Wed, Mar 23, 2011 at 6:00 PM, Chris Weisiger wrote: > In preface, I'm not remotely an expert at array manipulation here. I'm an > experienced programmer, but not an experienced *scientific* programmer. 
I'm > sure what I want to do is possible, and I'm pretty certain it's even > possible to do efficiently, but figuring out the actual implementation is > giving me fits. > > I have two four-dimensional arrays of data: time, Z, Y, X. These represent > microscopy data taken of the same sample with two different cameras. Their > views don't quite match up if you overlay them, so we have a > three-dimensional transform to align one array with the other. That > transformation consists of X, Y, and Z translations (shifts), rotation about > the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 > parameters. I can perform the transformation on the data without difficulty > with ndimage.affine_transform, but because we typically have hundreds of > millions of pixels in one array, it takes a moderately long time. A > representative array would be 30x50x512x512 or thereabouts. > > I'm writing a program to allow users to adjust the transformation and see > how well-aligned the data looks from several perspectives. In addition to > the traditional XY view, we also want to show XZ and YZ views, as well as > kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D > slices of the transformed data in a timely fashion. These slices are always > perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, > or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems > like the fast way to do this would be to take each pixel in the desired > slice, apply the reverse transform, and figure out where in the original > data it came from. But I'm having trouble figuring out how to efficiently do > this. > > I could construct a 3D array with shape (length of axis 1), (length of axis > 2), (4), such that each position in the array is a 4-tuple of the > coordinates of the pixel in the desired slice. For example, if doing a YX > slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, > 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then > perhaps there'd be some way to efficiently apply the inverse transform to > each coordinate tuple, then using ndimage.map_coordinates to turn those into > pixel data. But I haven't managed to figure that out yet. > > By any chance is this already solved? If not, any suggestions / assistance > would be wonderful. > > -Chris > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From garfieldb at gmail.com Wed Mar 23 21:50:34 2011 From: garfieldb at gmail.com (Gar Brown) Date: Wed, 23 Mar 2011 21:50:34 -0400 Subject: [SciPy-User] scikits timeseries tofile Message-ID: Hi All, While trying to write a timeseries object to file I the following error: File "C:\Python27\lib\site-packages\scikits\timeseries\tseries.py", line 1527, in tofile return scipy.io.write_array(fileobject, tmpfiller, **optpars) AttributeError: 'module' object has no attribute 'write_array' I noticed that the scipy.io.write_array function has been deprecated for some time, is there a fix or workaround for avoiding this error. 
I really like the time series functionality of scikits.timeseries timeseries version: __author__ = "Pierre GF Gerard-Marchant & Matt Knox ($Author: mattknox_ca $)" __revision__ = "$Revision: 2213 $" __date__ = '$Date: 2009-08-23 12:43:40 -0400 (Sun, 23 Aug 2009) $' From ralf.gommers at googlemail.com Thu Mar 24 04:51:21 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Mar 2011 09:51:21 +0100 Subject: [SciPy-User] ndimage docs In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 4:33 PM, K.-Michael Aye wrote: > Just wanted to let peeps know that there is a point in the ndimage > documentation that seems to use an old arange implementation, at least > it doesn't work currenty like stated. > > Page: > http://docs.scipy.org/doc/scipy/reference/tutorial/ndimage.html > > Several examples there use something like this: > > array = arange(12, shape=(4,3), type = Float64) > > and arange does not support the shape keyword. > > Maybe should be updated to avoid confusion/frustration. Thanks, this is now fixed in master. Ralf From pgmdevlist at gmail.com Thu Mar 24 06:13:01 2011 From: pgmdevlist at gmail.com (Pierre GM) Date: Thu, 24 Mar 2011 11:13:01 +0100 Subject: [SciPy-User] scikits timeseries tofile In-Reply-To: References: Message-ID: <00E38D9D-EE12-43C9-BCA6-521C826F0AA7@gmail.com> On Mar 24, 2011, at 2:50 AM, Gar Brown wrote: > Hi All, > > While trying to write a timeseries object to file I the following error: > > File "C:\Python27\lib\site-packages\scikits\timeseries\tseries.py", > line 1527, in tofile > return scipy.io.write_array(fileobject, tmpfiller, **optpars) > AttributeError: 'module' object has no attribute 'write_array' > > > I noticed that the scipy.io.write_array function has been deprecated > for some time, is there a fix or workaround for avoiding this error. I > really like the time series functionality of scikits.timeseries Dang. I really need to get back on the scikit... Would you mind filing a ticket ? I'll try to get back to you by next week. Don't hesitate to keep on nudging me till I give you a fix and/or workaround. P. From seb.haase at gmail.com Thu Mar 24 08:16:13 2011 From: seb.haase at gmail.com (Sebastian Haase) Date: Thu, 24 Mar 2011 13:16:13 +0100 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: References: Message-ID: Hi Chris, if I understood correctly, you are foremost interested in visualizing the data after applying the respective pixel transforms. Could you not simply use the OpenGL rotate, translate and scale operations ? -- then it could be done literally instantaneously. There is already code for this in the viewer modules in my Priithon project. For such large data it would be good to have a video card with 1GB (if not 2GB) memory, which is now rather cheap (one to a few 100 $) to buy these days. I'm not sure but it might even be feasible, once the user has confirmed that a given transform parameter set is optimal, to read the transformed pixel values back from the graphics card -- if you really want that; but I would probably suggest to just store the parameters to the image data header, and take those into account for all further visualizations and other image processing you might be doing. Regards, Sebastian On Thu, Mar 24, 2011 at 2:22 AM, Isaiah Norton wrote: > Hi Chris, > > It's not strictly Python, but VTK and ITK are the heavy-iron for this sort > of thing (py wrappings available). There are several tools built on these > libraries to provide user-friendly 3D/4D registration, visualization, etc. 
> > GoFigure2: http://gofigure2.sourceforge.net/ > - very microscopy oriented. 4D support. linux/mac/win > > ?V3D: http://penglab.janelia.org/proj/v3d/V3D/About_V3D.html > - also 4D and triplatform. > > BioImageXD > - mostly written in Python glue for vtk/itk. > > If you want to build something custom in Python, check out MayaVi - it uses > VTK under the hood so the transforms will be handled fast in C++, but has > nice pythonic tvtk syntax and native numpy support. > > -Isaiah > > > > > On Wed, Mar 23, 2011 at 6:00 PM, Chris Weisiger > wrote: >> >> In preface, I'm not remotely an expert at array manipulation here. I'm an >> experienced programmer, but not an experienced *scientific* programmer. I'm >> sure what I want to do is possible, and I'm pretty certain it's even >> possible to do efficiently, but figuring out the actual implementation is >> giving me fits. >> >> I have two four-dimensional arrays of data: time, Z, Y, X. These represent >> microscopy data taken of the same sample with two different cameras. Their >> views don't quite match up if you overlay them, so we have a >> three-dimensional transform to align one array with the other. That >> transformation consists of X, Y, and Z translations (shifts), rotation about >> the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 >> parameters. I can perform the transformation on the data without difficulty >> with ndimage.affine_transform, but because we typically have hundreds of >> millions of pixels in one array, it takes a moderately long time. A >> representative array would be 30x50x512x512 or thereabouts. >> >> I'm writing a program to allow users to adjust the transformation and see >> how well-aligned the data looks from several perspectives. In addition to >> the traditional XY view, we also want to show XZ and YZ views, as well as >> kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D >> slices of the transformed data in a timely fashion. These slices are always >> perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, >> or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems >> like the fast way to do this would be to take each pixel in the desired >> slice, apply the reverse transform, and figure out where in the original >> data it came from. But I'm having trouble figuring out how to efficiently do >> this. >> >> I could construct a 3D array with shape (length of axis 1), (length of >> axis 2), (4), such that each position in the array is a 4-tuple of the >> coordinates of the pixel in the desired slice. For example, if doing a YX >> slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, >> 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then >> perhaps there'd be some way to efficiently apply the inverse transform to >> each coordinate tuple, then using ndimage.map_coordinates to turn those into >> pixel data. But I haven't managed to figure that out yet. >> >> By any chance is this already solved? If not, any suggestions / assistance >> would be wonderful. 
>> >> -Chris >> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > From ralf.gommers at googlemail.com Thu Mar 24 09:59:03 2011 From: ralf.gommers at googlemail.com (Ralf Gommers) Date: Thu, 24 Mar 2011 14:59:03 +0100 Subject: [SciPy-User] [scipy] GSoC 2011 In-Reply-To: References: Message-ID: On Wed, Mar 23, 2011 at 5:39 PM, Charles R Harris wrote: > > > On Wed, Mar 23, 2011 at 9:29 AM, Hector wrote: >> >> >> On Wed, Mar 23, 2011 at 6:59 PM, Ralf Gommers >> wrote: >>> >>> On Mon, Mar 21, 2011 at 10:41 AM, Hector wrote: >>> > Hello everyone, >>> > >>> > List of organizations participating in GSoC 2011 has been out for 3 >>> > days and >>> > as everyone expected, PSF is one of it. >>> > I was waiting for SciPy to pop up under PSF umbrella, it nothing has >>> > been >>> > updated there since then. >>> > Is SciPy planning to participate in GSoC this year? >>> >>> I certainly hope so. Jarrod knows what is (or isn't) going on, hope he >>> lets us know soon. >>> >> >> I was really waiting for the reply of this mail or some update on python >> site for GSoC 2011. >> >>> >>> Did you have something in mind to work on? If so, it can't hurt to >>> share your idea already. >>> >> >> Being a mathematics students, I did courses on some of the advanced topics >> and wrote code for the well known algorithms in that field. The topics ( in >> the descending order of number of codes written ) are - >> 1) Numerical Analysis >> 2) Operational Research >> 3) Abstract Algebra >> 4) Graph theory ( Cliques) >> >> Unfortunately I was not aware of Python and FOSS at that time and wrote >> all of the programs in MatLab (except Cliques). I want to contribute these >> code and enhance the tools SciPy contains. And GSoC 2011 will give me a >> structured platform to do that. I would be very happy to work on these if >> someone is will to mentor me. >> >> My works can be seen at - >> https://github.com/hector1618 I've browsed through some of your code on github, and most of the Numerical Analysis methods you implemented are already present in Scipy, probably in a more complete form. So you'd have to be a little more precise as to what you would want to do. Cheers, Ralf > Numbers 3) and 4) might fit better with SAGE. From cimrman3 at ntc.zcu.cz Thu Mar 24 10:02:01 2011 From: cimrman3 at ntc.zcu.cz (Robert Cimrman) Date: Thu, 24 Mar 2011 15:02:01 +0100 (CET) Subject: [SciPy-User] ANN: SfePy 2011.1 Message-ID: I am pleased to announce release 2011.1 of SfePy. Description ----------- SfePy (simple finite elements in Python) is a software for solving systems of coupled partial differential equations by the finite element method. The code is based on NumPy and SciPy packages. It is distributed under the new BSD license. 
Home page: http://sfepy.org Mailing lists, issue tracking: http://code.google.com/p/sfepy/ Git (source) repository: http://github.com/sfepy Documentation: http://docs.sfepy.org/doc Highlights of this release -------------------------- - discontinuous approximations - user-defined material nonlinearities - improved surface approximations - speed-up mesh reading - extensive clean-up - less code For more information on this release, see http://sfepy.googlecode.com/svn/web/releases/2011.1_RELEASE_NOTES.txt (full release notes, rather long and technical). Best regards, Robert Cimrman and Contributors (*) (*) Contributors to this release (alphabetical order): Vladim?r Luke?, Andre Smit, Logan Sorenson From cweisiger at msg.ucsf.edu Thu Mar 24 11:09:24 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Thu, 24 Mar 2011 08:09:24 -0700 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: References: Message-ID: That works fine for the XY view with no Z translate, but breaks down completely as soon as you want to look at other views or introduce a Z translation factor. It is possible to read pixel data back after OpenGL has applied transforms, e.g. by using a framebuffer object (FBO) to render to a texture and then using glGetTexImage to read the texture's pixels. Even ignoring the issue of nonapplicability for non-XY views, I suspect scipy's interpolation would be more accurate than OpenGL's. -Chris On Thu, Mar 24, 2011 at 5:16 AM, Sebastian Haase wrote: > Hi Chris, > if I understood correctly, you are foremost interested in visualizing > the data after applying the respective pixel transforms. Could you > not simply use the OpenGL rotate, translate and scale operations ? -- > then it could be done literally instantaneously. > There is already code for this in the viewer modules in my Priithon > project. > For such large data it would be good to have a video card with 1GB (if > not 2GB) memory, which is now rather cheap (one to a few 100 $) to > buy these days. > I'm not sure but it might even be feasible, once the user has > confirmed that a given transform parameter set is optimal, to read the > transformed pixel values back from the graphics card -- if you really > want that; but I would probably suggest to just store the parameters > to the image data header, and take those into account for all further > visualizations and other image processing you might be doing. > > Regards, > Sebastian > > > On Thu, Mar 24, 2011 at 2:22 AM, Isaiah Norton > wrote: > > Hi Chris, > > > > It's not strictly Python, but VTK and ITK are the heavy-iron for this > sort > > of thing (py wrappings available). There are several tools built on these > > libraries to provide user-friendly 3D/4D registration, visualization, > etc. > > > > GoFigure2: http://gofigure2.sourceforge.net/ > > - very microscopy oriented. 4D support. linux/mac/win > > > > V3D: http://penglab.janelia.org/proj/v3d/V3D/About_V3D.html > > - also 4D and triplatform. > > > > BioImageXD > > - mostly written in Python glue for vtk/itk. > > > > If you want to build something custom in Python, check out MayaVi - it > uses > > VTK under the hood so the transforms will be handled fast in C++, but has > > nice pythonic tvtk syntax and native numpy support. > > > > -Isaiah > > > > > > > > > > On Wed, Mar 23, 2011 at 6:00 PM, Chris Weisiger > > wrote: > >> > >> In preface, I'm not remotely an expert at array manipulation here. I'm > an > >> experienced programmer, but not an experienced *scientific* programmer. 
> I'm > >> sure what I want to do is possible, and I'm pretty certain it's even > >> possible to do efficiently, but figuring out the actual implementation > is > >> giving me fits. > >> > >> I have two four-dimensional arrays of data: time, Z, Y, X. These > represent > >> microscopy data taken of the same sample with two different cameras. > Their > >> views don't quite match up if you overlay them, so we have a > >> three-dimensional transform to align one array with the other. That > >> transformation consists of X, Y, and Z translations (shifts), rotation > about > >> the Z axis, and equal scaling in X and Y -- thus, the transformation has > 5 > >> parameters. I can perform the transformation on the data without > difficulty > >> with ndimage.affine_transform, but because we typically have hundreds of > >> millions of pixels in one array, it takes a moderately long time. A > >> representative array would be 30x50x512x512 or thereabouts. > >> > >> I'm writing a program to allow users to adjust the transformation and > see > >> how well-aligned the data looks from several perspectives. In addition > to > >> the traditional XY view, we also want to show XZ and YZ views, as well > as > >> kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D > >> slices of the transformed data in a timely fashion. These slices are > always > >> perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = > 20, > >> or a TZ slice passing through X = 256, Y = 256), never diagonal. It > seems > >> like the fast way to do this would be to take each pixel in the desired > >> slice, apply the reverse transform, and figure out where in the original > >> data it came from. But I'm having trouble figuring out how to > efficiently do > >> this. > >> > >> I could construct a 3D array with shape (length of axis 1), (length of > >> axis 2), (4), such that each position in the array is a 4-tuple of the > >> coordinates of the pixel in the desired slice. For example, if doing a > YX > >> slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], > [10, > >> 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. > Then > >> perhaps there'd be some way to efficiently apply the inverse transform > to > >> each coordinate tuple, then using ndimage.map_coordinates to turn those > into > >> pixel data. But I haven't managed to figure that out yet. > >> > >> By any chance is this already solved? If not, any suggestions / > assistance > >> would be wonderful. > >> > >> -Chris > >> > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > _______________________________________________ > > SciPy-User mailing list > > SciPy-User at scipy.org > > http://mail.scipy.org/mailman/listinfo/scipy-user > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From danielstefanmader at googlemail.com Thu Mar 24 11:33:15 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 24 Mar 2011 16:33:15 +0100 Subject: [SciPy-User] technical question: normed exponential fit for data? Message-ID: Hi, this is not a software question or scipy problem but rather I have no clue how to tackle this on a mathematical level. I'd like to create a unique fit function for data. 
Attached is a file which holds three measurements for three different known concentrations, i.e. my "calibration" measurement at low, medium and high concentration. Apparently, the temperature behavior of the chemical reaction is exponential, i.e. the photon yield increases with about 10%/K for the examined range for a given concentration. Now comes the tricky part: I'd like to use this knowledge for a temperature compensation because I only need to determine the concentration. The temperature of the reaction is measured simultaneously but might vary in the range of +-3K. In terms of assay performance, that makes a huge difference due to the 10%/K so that I'd need to compensate for it. How can I use my calibration measurement to find a function which I could use to compensate for varying temperatures? Thanks a lot in advance, best regards, Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: 2011-02-08-Liserl_TempAbhaengigkeit_eval.png Type: image/png Size: 62832 bytes Desc: not available URL: From robert.kern at gmail.com Thu Mar 24 12:09:33 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Mar 2011 11:09:33 -0500 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: On Thu, Mar 24, 2011 at 10:33, Daniel Mader wrote: > Hi, > > this is not a software question or scipy problem but rather I have no > clue how to tackle this on a mathematical level. > > I'd like to create a unique fit function for data. Attached is a file > which holds three measurements for three different known > concentrations, i.e. my "calibration" measurement at low, medium and > high concentration. > > Apparently, the temperature behavior of the chemical reaction is > exponential, i.e. the photon yield increases with about 10%/K for the > examined range for a given concentration. Hmm. I certainly wouldn't have come to that conclusion looking at the data. At least for #0 (high concentration?), the linear fit is substantially better. > Now comes the tricky part: I'd like to use this knowledge for a > temperature compensation because I only need to determine the > concentration. The temperature of the reaction is measured > simultaneously but might vary in the range of +-3K. In terms of assay > performance, that makes a huge difference due to the 10%/K so that I'd > need to compensate for it. > > How can I use my calibration measurement to find a function which I > could use to compensate for varying temperatures? Can you do more calibrations with different concentrations? For any given temperature, you essentially only have three data points with which to determine the relationship between concentration and photon count. That's pretty difficult without any theory to help you fill in the gaps. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From danielstefanmader at googlemail.com Thu Mar 24 13:19:05 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 24 Mar 2011 18:19:05 +0100 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: Hi, > Hmm. I certainly wouldn't have come to that conclusion looking at the > data. At least for #0 (high concentration?), the linear fit is > substantially better. yes, of course, there is always a better fit possible, no doubt. 
When I perform a fit for only a single data series, I can optimize both c0 and c1 in the equation: f(x) = c[0] * scipy.exp(c[1]*x) It is just my observation that with a factor c[1]=0.1 the fit works very well for a broad range of concentrations. However, c[0] needs to be different, and that is my problem. Can I somehow normalize the measurement values in order to compensate for different temperatures? >> Now comes the tricky part: I'd like to use this knowledge for a >> temperature compensation because I only need to determine the >> concentration. The temperature of the reaction is measured >> simultaneously but might vary in the range of +-3K. In terms of assay >> performance, that makes a huge difference due to the 10%/K so that I'd >> need to compensate for it. >> >> How can I use my calibration measurement to find a function which I >> could use to compensate for varying temperatures? > > Can you do more calibrations with different concentrations? For any > given temperature, you essentially only have three data points with > which to determine the relationship between concentration and photon > count. That's pretty difficult without any theory to help you fill in > the gaps. These experiments are very expensive, a single series from above is about 300?. So, I was hoping there is something I can learn from these already. It needs not be perfect, a procedure outline would be nice so that I could justify to spend even more money on this. But, as stated above, right now I only have a "feeling" that I can do something about it but it's still hidden in the mist... Thanks for taking the time to think about it! From dplepage at gmail.com Thu Mar 24 13:20:14 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Thu, 24 Mar 2011 13:20:14 -0400 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: On Thu, Mar 24, 2011 at 11:33 AM, Daniel Mader wrote: > this is not a software question or scipy problem but rather I have no > clue how to tackle this on a mathematical level. I'd write this in terms of probabilities, but that could be just because I tend to write everything that way :-) Your calibration measurements are samples of a probability distribution P(count, concentration, temperature), which equals P(concentration | count, temperature) P(count, temperature). You can estimate P(count, temperature) from these samples and thus estimate P(concentration | count, temperature). Now you do your experiment - with an unknown concentration, you take some [count, temperature] measurements. Based on the uncertainty of your sensors, you estimate P(count, temperature) for this new sample. You want to find P(concentration), so you write the marginalization: P(concentration) = \int_{temperature, count} P(concentration, count, temperature) = \int_{temperature, count} P(concentration | count, temperature) P(count, temperature) where \int denotes an integral over all possible temperatures and counts. You have P(concentration | count, temperature) from your calibration, so you integrate to get P(concentration), and then choose the concentration that maximizes it (ML estimator) or that minimizes the expected error from it (Bayes estimator). 
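If it helps to see it numerically: once everything is discretized onto grids, that marginalization is just a weighted sum. A rough sketch (every array, name and grid range below is an invented placeholder, not something estimated from your data):

import numpy as np

conc = np.linspace(0.0, 10.0, 101)     # candidate concentrations
count = np.linspace(0.0, 4000.0, 201)  # photon-count grid
temp = np.linspace(18.0, 28.0, 51)     # temperature grid

# P(concentration | count, temperature), estimated from the calibration;
# here just random placeholders normalized over the concentration axis
p_conc_given = np.random.rand(len(conc), len(count), len(temp))
p_conc_given /= p_conc_given.sum(axis=0)

# P(count, temperature) for the new measurement (sensor uncertainty)
p_meas = np.random.rand(len(count), len(temp))
p_meas /= p_meas.sum()

# P(concentration) = sum over count and temperature of the product
p_conc = (p_conc_given * p_meas).sum(axis=2).sum(axis=1)

print(conc[np.argmax(p_conc)])   # ML estimate: most probable concentration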
If you assume that all measurements come from some deterministic system corrupted by Gaussian noise and that all concentrations and temperatures are equally likely, and you choose to use a maximum-likelihood (ML) estimator, then this takes a very simple algorithmic form: 1) Fit a surface to your (count, concentration, temperature) calibration points. If you assume that count is a linear function of concentration and temperature, this surface will be a plane (very easy to fit); if instead you expect it to be exponential in temperature and linear in concentration, you'll be fitting a curved surface. 2) Each new [count, temperature] pair defines a line in this 3D space; intersect this line with your surface to get the most probable concentration. As Robert pointed out, step 1 will be a lot more robust if you have calibration samples with more than 3 distinct concentrations. Hope this helps, Dan Lepage From danielstefanmader at googlemail.com Thu Mar 24 14:30:27 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 24 Mar 2011 19:30:27 +0100 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: Dear Dan, thank your very much for this approach, it really sounds very reasonable. However, not being a probability pro, I don't understand the meaning for some terms: P(concentration | count, temperature) P(count, temperature) ? I'd be grateful if you could elaborate a little more, it sounds very promising! Best regards, Daniel 2011/3/24 Daniel Lepage : > On Thu, Mar 24, 2011 at 11:33 AM, Daniel Mader > wrote: > >> this is not a software question or scipy problem but rather I have no >> clue how to tackle this on a mathematical level. > > I'd write this in terms of probabilities, but that could be just > because I tend to write everything that way :-) > > Your calibration measurements are samples of a probability > distribution P(count, concentration, temperature), which equals > P(concentration | count, temperature) P(count, temperature). You can > estimate P(count, temperature) from these samples and thus estimate > P(concentration | count, temperature). > > Now you do your experiment - with an unknown concentration, you take > some [count, temperature] measurements. Based on the uncertainty of > your sensors, you estimate P(count, temperature) for this new sample. > You want to find P(concentration), so you write the marginalization: > > P(concentration) = \int_{temperature, count} P(concentration, count, > temperature) = \int_{temperature, count} P(concentration | count, > temperature) P(count, temperature) > > where \int denotes an integral over all possible temperatures and > counts. You have P(concentration | count, temperature) from your > calibration, so you integrate to get P(concentration), and then choose > the concentration that maximizes it (ML estimator) or that minimizes > the expected error from it (Bayes estimator). > > If you assume that all measurements come from some deterministic > system corrupted by Gaussian noise and that all concentrations and > temperatures are equally likely, and you choose to use a > maximum-likelihood (ML) estimator, then this takes a very simple > algorithmic form: > 1) Fit a surface to your (count, concentration, temperature) > calibration points. 
If you assume that count is a linear function of > concentration and temperature, this surface will be a plane (very easy > to fit); if instead you expect it to be exponential in temperature and > linear in concentration, you'll be fitting a curved surface. > 2) Each new [count, temperature] pair defines a line in this 3D space; > intersect this line with your surface to get the most probable > concentration. > > As Robert pointed out, step 1 will be a lot more robust if you have > calibration samples with more than 3 distinct concentrations. > > Hope this helps, > Dan Lepage From josef.pktd at gmail.com Thu Mar 24 14:35:49 2011 From: josef.pktd at gmail.com (josef.pktd at gmail.com) Date: Thu, 24 Mar 2011 14:35:49 -0400 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: On Thu, Mar 24, 2011 at 1:19 PM, Daniel Mader wrote: > Hi, > >> Hmm. I certainly wouldn't have come to that conclusion looking at the >> data. At least for #0 (high concentration?), the linear fit is >> substantially better. > > yes, of course, there is always a better fit possible, no doubt. When > I perform a fit for only a single data series, I can optimize both c0 > and c1 in the equation: > ?f(x) = c[0] * scipy.exp(c[1]*x) what I would do use loglinear specification, the error looks increasing for larger counts. use a factor encoding for the concentration that allows for different intercepts, constrain c[1] to be the same for all observations. log(f(x)) = c0 * dL + c1*dM + c2*dH + c3*x dL, dM, dH are dummy variables, that are 1 if the observation has L (or M or H) concentration and 0 for the other concentration. run a linear regression on this, and run some tests to see whether the assumptions look ok. Josef > > It is just my observation that with a factor c[1]=0.1 the fit works > very well for a broad range of concentrations. > > However, c[0] needs to be different, and that is my problem. Can I > somehow normalize the measurement values in order to compensate for > different temperatures? > >>> Now comes the tricky part: I'd like to use this knowledge for a >>> temperature compensation because I only need to determine the >>> concentration. The temperature of the reaction is measured >>> simultaneously but might vary in the range of +-3K. In terms of assay >>> performance, that makes a huge difference due to the 10%/K so that I'd >>> need to compensate for it. >>> >>> How can I use my calibration measurement to find a function which I >>> could use to compensate for varying temperatures? >> >> Can you do more calibrations with different concentrations? For any >> given temperature, you essentially only have three data points with >> which to determine the relationship between concentration and photon >> count. That's pretty difficult without any theory to help you fill in >> the gaps. > > These experiments are very expensive, a single series from above is > about 300?. So, I was hoping there is something I can learn from these > already. It needs not be perfect, a procedure outline would be nice so > that I could justify to spend even more money on this. > > But, as stated above, right now I only have a "feeling" that I can do > something about it but it's still hidden in the mist... > > Thanks for taking the time to think about it! 
> _______________________________________________
> SciPy-User mailing list
> SciPy-User at scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
>

From dplepage at gmail.com  Thu Mar 24 15:12:11 2011
From: dplepage at gmail.com (Daniel Lepage)
Date: Thu, 24 Mar 2011 15:12:11 -0400
Subject: [SciPy-User] technical question: normed exponential fit for data?
In-Reply-To: References: Message-ID:

On Thu, Mar 24, 2011 at 2:30 PM, Daniel Mader wrote:
> Dear Dan,
>
> thank your very much for this approach, it really sounds very reasonable.
>
> However, not being a probability pro, I don't understand the meaning
> for some terms:
>
> P(concentration | count, temperature) P(count, temperature) ?
>
> I'd be grateful if you could elaborate a little more, it sounds very promising!

Sure! If A is a random variable, then P(A) (the "probability of A") is a function assigning probabilities to all possible values of A. For a given value "a", you'll often see people write P(A=a) to denote the specific probability that A will take that value. For example, let T be the random variable for the temperature of your system. Then P(T) is a function assigning a probability to each temperature; the probability that the temperature is 3K would be written P(T=3K).

P(A,B) (the "joint probability of A and B") is a two-parameter function that tells you the probability of seeing a particular pair of values. Again letting T be temperature, let C be concentration; P(T=t, C=c) tells you the probability that the temperature would be t and the concentration would be c. Note that the order doesn't matter: P(A, B) = P(B, A).

P(A | B) (the "conditional probability of A given B") is a two-parameter function that gives you the probability that A would take some value given that B had taken another. So P(T=t | C=c) tells you the probability that the temperature would be t if the concentration were c.

Note that P(A, B) and P(A | B) are both two-argument functions, but satisfy different constraints - P(A,B) is a probability distribution, so if you integrate over all possible values of a and b you should get 1, whereas P(A | B) defines a set of probability distributions, so that for any given choice of b integrating P(A | B=b) over all possible values of a will yield 1.
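A tiny discrete example of that difference (the numbers are arbitrary):

import numpy as np

# joint P(T, C) over two temperatures and three concentrations
P_TC = np.array([[0.10, 0.25, 0.15],
                 [0.20, 0.20, 0.10]])
print(P_TC.sum())                    # 1.0: a joint sums to one over everything

# conditional P(T | C): one distribution over T for each fixed value of C
P_T_given_C = P_TC / P_TC.sum(axis=0)
print(P_T_given_C.sum(axis=0))       # [1. 1. 1.]: sums to one for each C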
However, I find it helpful to think about things in probabilities because it forces you to explicitly spell out your assumptions, such as the assumption that your data is corrupted by Gaussian noise (this is an implicit assumption any time you use a least-squares fitting technique such as linear regression). If you'd like to learn a little bit more about this, Wikipedia gives some reasonable descriptions: http://en.wikipedia.org/wiki/Joint_probability http://en.wikipedia.org/wiki/Conditional_probability http://en.wikipedia.org/wiki/Marginal_probability If you'd like to learn a LOT more about this, I recommend the book "Data Analysis: A Bayesian Tutorial" by Devinderjit Sivia. -- Dan Lepage From danielstefanmader at googlemail.com Thu Mar 24 15:27:28 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 24 Mar 2011 20:27:28 +0100 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: Thanks a million times! I'll take the time and chance to dive into that! 2011/3/24 Daniel Lepage : > If you'd like to learn a little bit more about this, Wikipedia gives > some reasonable descriptions: > http://en.wikipedia.org/wiki/Joint_probability > http://en.wikipedia.org/wiki/Conditional_probability > http://en.wikipedia.org/wiki/Marginal_probability > > If you'd like to learn a LOT more about this, I recommend the book > "Data Analysis: A Bayesian Tutorial" by Devinderjit Sivia. From david_baddeley at yahoo.com.au Thu Mar 24 15:59:02 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Thu, 24 Mar 2011 12:59:02 -0700 (PDT) Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: Message-ID: <891105.10091.qm@web113405.mail.gq1.yahoo.com> A really silly question - you are using temperature values in Kelvin (rather than centigrade) aren't you? The chemical/physical assumption is probably that rates are an exponential function of the temperature in Kelvin, not in C. Your high concentration curve is looking awfully like it's heading for an intercept at 0C. cheers, David ----- Original Message ---- From: Robert Kern To: SciPy Users List Sent: Fri, 25 March, 2011 5:09:33 AM Subject: Re: [SciPy-User] technical question: normed exponential fit for data? On Thu, Mar 24, 2011 at 10:33, Daniel Mader wrote: > Hi, > > this is not a software question or scipy problem but rather I have no > clue how to tackle this on a mathematical level. > > I'd like to create a unique fit function for data. Attached is a file > which holds three measurements for three different known > concentrations, i.e. my "calibration" measurement at low, medium and > high concentration. > > Apparently, the temperature behavior of the chemical reaction is > exponential, i.e. the photon yield increases with about 10%/K for the > examined range for a given concentration. Hmm. I certainly wouldn't have come to that conclusion looking at the data. At least for #0 (high concentration?), the linear fit is substantially better. > Now comes the tricky part: I'd like to use this knowledge for a > temperature compensation because I only need to determine the > concentration. The temperature of the reaction is measured > simultaneously but might vary in the range of +-3K. In terms of assay > performance, that makes a huge difference due to the 10%/K so that I'd > need to compensate for it. > > How can I use my calibration measurement to find a function which I > could use to compensate for varying temperatures? 
Can you do more calibrations with different concentrations? For any given temperature, you essentially only have three data points with which to determine the relationship between concentration and photon count. That's pretty difficult without any theory to help you fill in the gaps. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From danielstefanmader at googlemail.com Thu Mar 24 16:14:29 2011 From: danielstefanmader at googlemail.com (Daniel Mader) Date: Thu, 24 Mar 2011 21:14:29 +0100 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: <891105.10091.qm@web113405.mail.gq1.yahoo.com> References: <891105.10091.qm@web113405.mail.gq1.yahoo.com> Message-ID: Hi David, no, x is ?C, but I use to specify temperature differences in K sind I find ?C only useful as an absolute value. Maybe you are right about the intercept, but you need to keep in mind that this is a complicated chemical assay. I wouldn't make any assumption for temperatures lower than 15?C and higner than 35?C. In my experiment, temperatures are typically between 20 and 25?C... Judging from the data, I would say that an exponential behavior is justified, i.e. the light emission increases with some percent per delta T. To my understanding, that is the exact description for an exponential curve, or am I mistaken? Thanks in advance, I really enjoy the discussion here, Daniel 2011/3/24 David Baddeley : > A really silly question - you are using temperature values in Kelvin (rather > than centigrade) aren't you? The chemical/physical assumption is probably that > rates are an exponential function of the temperature in Kelvin, not in C. Your > high concentration curve is looking awfully like it's heading for an intercept > at 0C. > > cheers, > David > > ----- Original Message ---- > From: Robert Kern > To: SciPy Users List > Sent: Fri, 25 March, 2011 5:09:33 AM > Subject: Re: [SciPy-User] technical question: normed exponential fit for data? > > On Thu, Mar 24, 2011 at 10:33, Daniel Mader > wrote: >> Hi, >> >> this is not a software question or scipy problem but rather I have no >> clue how to tackle this on a mathematical level. >> >> I'd like to create a unique fit function for data. Attached is a file >> which holds three measurements for three different known >> concentrations, i.e. my "calibration" measurement at low, medium and >> high concentration. >> >> Apparently, the temperature behavior of the chemical reaction is >> exponential, i.e. the photon yield increases with about 10%/K for the >> examined range for a given concentration. > > Hmm. I certainly wouldn't have come to that conclusion looking at the > data. At least for #0 (high concentration?), the linear fit is > substantially better. > >> Now comes the tricky part: I'd like to use this knowledge for a >> temperature compensation because I only need to determine the >> concentration. The temperature of the reaction is measured >> simultaneously but might vary in the range of +-3K. In terms of assay >> performance, that makes a huge difference due to the 10%/K so that I'd >> need to compensate for it. >> >> How can I use my calibration measurement to find a function which I >> could use to compensate for varying temperatures? 
> > Can you do more calibrations with different concentrations? For any > given temperature, you essentially only have three data points with > which to determine the relationship between concentration and photon > count. That's pretty difficult without any theory to help you fill in > the gaps. From robert.kern at gmail.com Thu Mar 24 16:28:03 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Mar 2011 15:28:03 -0500 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: <891105.10091.qm@web113405.mail.gq1.yahoo.com> Message-ID: On Thu, Mar 24, 2011 at 15:14, Daniel Mader wrote: > Hi David, > > no, x is ?C, but I use to specify temperature differences in K sind I > find ?C only useful as an absolute value. > > Maybe you are right about the intercept, but you need to keep in mind > that this is a complicated chemical assay. I wouldn't make any > assumption for temperatures lower than 15?C and higner than 35?C. In > my experiment, temperatures are typically between 20 and 25?C... > > Judging from the data, I would say that an exponential behavior is > justified, i.e. the light emission increases with some percent per > delta T. To my understanding, that is the exact description for an > exponential curve, or am I mistaken? The #0 data really looks much more like the line than the exponential fit, at least in this temperature regime. That is, for a certain delta T, you get a certain number of extra photons regardless of where you are on the line. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From dplepage at gmail.com Thu Mar 24 16:55:08 2011 From: dplepage at gmail.com (Daniel Lepage) Date: Thu, 24 Mar 2011 16:55:08 -0400 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: <891105.10091.qm@web113405.mail.gq1.yahoo.com> References: <891105.10091.qm@web113405.mail.gq1.yahoo.com> Message-ID: On Thu, Mar 24, 2011 at 3:59 PM, David Baddeley wrote: > A really silly question - you are using temperature values in Kelvin (rather > than centigrade) aren't you? The chemical/physical assumption is probably that > rates are an exponential function of the temperature in Kelvin, not in C. Your > high concentration curve is looking awfully like it's heading for an intercept > at 0C. Looks to me like the x-axis goes from 15C to about 28.5C, so the intercept is actually around 15C. -- Dan Lepage From robert.kern at gmail.com Thu Mar 24 17:15:34 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Mar 2011 16:15:34 -0500 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: <891105.10091.qm@web113405.mail.gq1.yahoo.com> Message-ID: On Thu, Mar 24, 2011 at 15:55, Daniel Lepage wrote: > On Thu, Mar 24, 2011 at 3:59 PM, David Baddeley > wrote: >> A really silly question - you are using temperature values in Kelvin (rather >> than centigrade) aren't you? The chemical/physical assumption is probably that >> rates are an exponential function of the temperature in Kelvin, not in C. Your >> high concentration curve is looking awfully like it's heading for an intercept >> at 0C. > > Looks to me like the x-axis goes from 15C to about 28.5C, so the > intercept is actually around 15C. It doesn't really matter. 
If you shift all of the temperature readings by a constant amount, the shift just gets folded into the coefficient out in front. The shape of the fitted exponential will look exactly the same. c0 * exp(c1*(x+273)) == c0 * exp(c1*273) * exp(c1*x) == c2 * exp(c1*x) -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From warren.weckesser at enthought.com Thu Mar 24 17:17:00 2011 From: warren.weckesser at enthought.com (Warren Weckesser) Date: Thu, 24 Mar 2011 16:17:00 -0500 Subject: [SciPy-User] technical question: normed exponential fit for data? In-Reply-To: References: <891105.10091.qm@web113405.mail.gq1.yahoo.com> Message-ID: Hi Daniel, On Thu, Mar 24, 2011 at 3:14 PM, Daniel Mader < danielstefanmader at googlemail.com> wrote: > Hi David, > > no, x is ?C, but I use to specify temperature differences in K sind I > find ?C only useful as an absolute value. > I don't understand that statement. > > Maybe you are right about the intercept, but you need to keep in mind > that this is a complicated chemical assay. I wouldn't make any > assumption for temperatures lower than 15?C and higner than 35?C. In > my experiment, temperatures are typically between 20 and 25?C... > > Judging from the data, I would say that an exponential behavior is > justified, i.e. the light emission increases with some percent per > delta T. To my understanding, that is the exact description for an > exponential curve, or am I mistaken? > > If the photon yield is known to be correlated with the reaction rate, then an exponential behavior is plausible: see the Arrhenius equation ( http://en.wikipedia.org/wiki/Arrhenius_equation). However, you must use Kelvin for that equation. However, the exponential fit that you show in your plots is highly questionable. Notice that in both the #0 and #1 plots, the residuals show the same non-random pattern--on the left, the data points are all below the fitted curve, and on the right that are all (as far as I can tell) above the curve. This suggest a bad model. Linear seems much more effective for the fairly limited range of temperatures that you are using. Warren > Thanks in advance, I really enjoy the discussion here, > Daniel > > 2011/3/24 David Baddeley : > > A really silly question - you are using temperature values in Kelvin > (rather > > than centigrade) aren't you? The chemical/physical assumption is probably > that > > rates are an exponential function of the temperature in Kelvin, not in C. > Your > > high concentration curve is looking awfully like it's heading for an > intercept > > at 0C. > > > > cheers, > > David > > > > ----- Original Message ---- > > From: Robert Kern > > To: SciPy Users List > > Sent: Fri, 25 March, 2011 5:09:33 AM > > Subject: Re: [SciPy-User] technical question: normed exponential fit for > data? > > > > On Thu, Mar 24, 2011 at 10:33, Daniel Mader > > wrote: > >> Hi, > >> > >> this is not a software question or scipy problem but rather I have no > >> clue how to tackle this on a mathematical level. > >> > >> I'd like to create a unique fit function for data. Attached is a file > >> which holds three measurements for three different known > >> concentrations, i.e. my "calibration" measurement at low, medium and > >> high concentration. > >> > >> Apparently, the temperature behavior of the chemical reaction is > >> exponential, i.e. 
the photon yield increases with about 10%/K for the > >> examined range for a given concentration. > > > > Hmm. I certainly wouldn't have come to that conclusion looking at the > > data. At least for #0 (high concentration?), the linear fit is > > substantially better. > > > >> Now comes the tricky part: I'd like to use this knowledge for a > >> temperature compensation because I only need to determine the > >> concentration. The temperature of the reaction is measured > >> simultaneously but might vary in the range of +-3K. In terms of assay > >> performance, that makes a huge difference due to the 10%/K so that I'd > >> need to compensate for it. > >> > >> How can I use my calibration measurement to find a function which I > >> could use to compensate for varying temperatures? > > > > Can you do more calibrations with different concentrations? For any > > given temperature, you essentially only have three data points with > > which to determine the relationship between concentration and photon > > count. That's pretty difficult without any theory to help you fill in > > the gaps. > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert.kern at gmail.com Thu Mar 24 23:32:48 2011 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 24 Mar 2011 22:32:48 -0500 Subject: [SciPy-User] Problem with IndexError In-Reply-To: References: Message-ID: 2011/3/8 Krystian Rosi?ski : > Hi, > I've been using SciPy for some time, but I still have problems with indexing > and the Python convention relating to the numbering from zero in more > complex cases. > I'm trying to translate excellent example "Matrix Structural Analysis of > Plane Frames?using Scilab"?to Python.?I've spent on this code a few hours > and I still can't find the cause of the problem of IndexError. Here is my > code. > I would be very grateful if someone could look at it and give me any advice. 1. Unrelated, but important: on line 205, use zeros((1,6), dtype=int). Be sure to do this with other zeros() calls that are intended to hold indices. The default is dtype=float. 2. In pf_getdof(), I'm sure you want dof[0, 0:3] and dof[0, 3:6]. 3. In pf_ssm(), you want range(nmem), not (0, nmem+1). 4. In pf_calclm(), since numpy is 0-indexed, you want to increment nd *after* you assign it into lm. 5. Typos: ilod -> iload, iloads -> iload. 6. In pf_assemloadvec(): am = - dot(r.T, memloads[iload, 1:7]) ii = dof[0, i] 7. Never use np.dot(np.linalg.inv(A), b). Use np.linalg.solve(A, b). You still have a singular matrix. This may or may not indicate further errors in your code, but that requires more knowledge of the problem and algorithm than I have. 8. As a matter of good style, don't use "from numpy import *". Use "import numpy as np" and use the dotted notation to get to numpy functions. I figured most of this out using pdb, Python's standard debugger. I didn't even look at the original. You can see a tutorial on using the debugger from the Software Carpentry site: http://software-carpentry.org/4_0/python/debugger/ -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? 
-- Umberto Eco From krystian.rosinski at gmail.com Fri Mar 25 15:26:29 2011 From: krystian.rosinski at gmail.com (=?UTF-8?Q?Krystian_Rosi=C5=84ski?=) Date: Fri, 25 Mar 2011 12:26:29 -0700 (PDT) Subject: [SciPy-User] Problem with IndexError In-Reply-To: References: Message-ID: <75d52f9d-84c8-4c0d-88f9-b08d3d5ee615@k22g2000yqh.googlegroups.com> Thank you, your advice is very helpful and explains a lot. By the way I've learned to use debugger in Spyder IDE. On 25 Mar, 04:32, Robert Kern wrote: > 1. Unrelated, but important: on line 205, use zeros((1,6), dtype=int). > Be sure to do this with other zeros() calls that are intended to hold > indices. The default is dtype=float. > > 2. In pf_getdof(), I'm sure you want dof[0, 0:3] and dof[0, 3:6]. > > 3. In pf_ssm(), you want range(nmem), not (0, nmem+1). > > 4. In pf_calclm(), since numpy is 0-indexed, you want to increment nd > *after* you assign it into lm. > > 5. Typos: ilod -> iload, iloads -> iload. > > 6. In pf_assemloadvec(): > > ? ? am = - dot(r.T, memloads[iload, 1:7]) > ? ? ? ? ii = dof[0, i] > > 7. Never use np.dot(np.linalg.inv(A), b). Use np.linalg.solve(A, b). > You still have a singular matrix. This may or may not indicate further > errors in your code, but that requires more knowledge of the problem > and algorithm than I have. > > 8. As a matter of good style, don't use "from numpy import *". Use > "import numpy as np" and use the dotted notation to get to numpy > functions. > > I figured most of this out using pdb, Python's standard debugger. I > didn't even look at the original. You can see a tutorial on using the > debugger from the Software Carpentry site: > > ?http://software-carpentry.org/4_0/python/debugger/ From jturner at gemini.edu Fri Mar 25 16:17:37 2011 From: jturner at gemini.edu (James Turner) Date: Fri, 25 Mar 2011 17:17:37 -0300 Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy Message-ID: <4D8CF861.9030302@gemini.edu> I'm getting a segmentation fault with no traceback from my Python script, even after upgrading to the latest stable NumPy 1.5.1 and SciPy 0.9.0. I have managed to whittle the code down to 30 lines of Python in a single file that still reproduces the error, with only NumPy, SciPy and PyFITS as dependencies (previously there was a whole chain of PyRAF scripts). However, my test script still requires a 109Mb input file. Would someone mind pointing me in the right direction either to report this properly or troubleshoot it further, please? Should I make a Trac ticket on the SciPy developers' page? I can put the input file on an sftp server if that helps (and perhaps reduce it to 40Mb). I have figured out how to run gdb on my script (thanks to STScI) and am pasting the output from "run" and "where" below, but my familiarity with the debugger is otherwise very limited. As you can see from gdb, the crash occurs in numpy, but I suspect the cause of the problem may be the array that is produced at the previous line by interp.splev(). Printing **des in gdb shows some funny characters and an apparently-huge reference count, but I don't really know what I'm doing / looking for. Thanks a lot! James. ------ (gdb) run scripts/test.py Starting program: /astro/iraf/i686/gempylocal/bin/python scripts/test.py [Thread debugging using libthread_db enabled] [New Thread 0x5555ae10 (LWP 27190)] Program received signal SIGSEGV, Segmentation fault. 
0x55a811b8 in _update_descr_and_dimensions (des=0xffecee74, newdims=0xffeced50, newstrides=0x0, oldnd=0, isfortran=0) at numpy/core/src/multiarray/ctors.c:222 222 numpy/core/src/multiarray/ctors.c: No such file or directory. in numpy/core/src/multiarray/ctors.c (gdb) where #0 0x55a811b8 in _update_descr_and_dimensions (des=0xffecee74, newdims=0xffeced50, newstrides=0x0, oldnd=0, isfortran=0) at numpy/core/src/multiarray/ctors.c:222 #1 0x55a4b869 in PyArray_NewFromDescr (subtype=0x55aa6940, descr=0x9196920, nd=0, dims=0x0, strides=0x0, data=0x0, flags=0, obj=0x0) at numpy/core/src/multiarray/ctors.c:1401 #2 0x55a81ac9 in Array_FromPyScalar (op=0x8d08fdc, typecode=0x55aa6940) at numpy/core/src/multiarray/ctors.c:1027 #3 0x55a4c066 in PyArray_FromAny (op=0x8d08fdc, newtype=0x9196920, min_depth=0, max_depth=0, flags=1281, context=0x0) at numpy/core/src/multiarray/ctors.c:1833 #4 0x55a5ddbf in PyArray_Clip (self=0x91144d0, min=0x8d08fdc, max=0x0, out=0x0) at numpy/core/src/multiarray/calculation.c:894 #5 0x55a8b9fb in array_clip (self=0x91144d0, args=0x5555d02c, kwds=0x9205934) at numpy/core/src/multiarray/methods.c:1970 #6 0x08113847 in PyCFunction_Call (func=0x91e762c, arg=0x5555d02c, kw=0x9205934) at Objects/methodobject.c:77 #7 0x0805e227 in PyObject_Call (func=0x91e762c, arg=0x5555d02c, kw=0x9205934) at Objects/abstract.c:1860 #8 0x080c4b67 in do_call (func=0x91e762c, pp_stack=0xffecf118, na=0, nk=1) at Python/ceval.c:3775 #9 0x080c45d8 in call_function (pp_stack=0xffecf118, oparg=256) at Python/ceval.c:3587 ---Type to continue, or q to quit--- #10 0x080c19de in PyEval_EvalFrameEx (f=0x8d28664, throwflag=0) at Python/ceval.c:2267 #11 0x080c2d49 in PyEval_EvalCodeEx (co=0x555ae1d0, globals=0x55576acc, locals=0x55576acc, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2831 #12 0x080be00f in PyEval_EvalCode (co=0x555ae1d0, globals=0x55576acc, locals=0x55576acc) at Python/ceval.c:494 #13 0x080e4a73 in run_mod (mod=0x8d2c6b0, filename=0xffed1495 "scripts/test.py", globals=0x55576acc, locals=0x55576acc, flags=0xffecf380, arena=0x8cd7a20) at Python/pythonrun.c:1271 #14 0x080e4a17 in PyRun_FileExFlags (fp=0x8cd2008, filename=0xffed1495 "scripts/test.py", start=257, globals=0x55576acc, locals=0x55576acc, closeit=1, flags=0xffecf380) at Python/pythonrun.c:1257 #15 0x080e3af2 in PyRun_SimpleFileExFlags (fp=0x8cd2008, filename=0xffed1495 "scripts/test.py", closeit=1, flags=0xffecf380) at Python/pythonrun.c:877 #16 0x080e3482 in PyRun_AnyFileExFlags (fp=0x8cd2008, filename=0xffed1495 "scripts/test.py", closeit=1, flags=0xffecf380) at Python/pythonrun.c:696 #17 0x08057269 in Py_Main (argc=2, argv=0xffecf474) at Modules/main.c:523 #18 0x0805666e in main (argc=2, argv=0xffecf474) at Modules/python.c:23 (gdb) ----- import pyfits import numpy as np import scipy.interpolate as interp input = pyfits.open("brgN20080607S0485.fits")[1].data gain = 1.0 readn = 3.5 xdim = input.shape[1] ydim = input.shape[0] xorder = 2*xdim/4.3 yorder = 2*ydim/4.5 imgx = range(xdim) imgy = range(ydim) xstep = (xdim-1) / xorder ystep = (ydim-1) / yorder knots = np.arange(xstep, xdim-1-xstep, xstep) oldoutput = input.copy() objfit = oldoutput.copy() for yc in range(ydim): tck = interp.splrep(imgx, oldoutput[yc], k=3, t=knots) objfit[yc] = interp.splev(imgx, tck=tck) noise = np.sqrt(objfit.clip(min=0.0)*gain + readn*readn) / gain From elmiller at ece.tufts.edu Fri Mar 25 16:23:42 2011 From: elmiller at ece.tufts.edu (Eric Miller) Date: Fri, 25 Mar 2011 16:23:42 -0400 Subject: 
[SciPy-User] Image processing class Message-ID: <4D8CF9CE.3060800@ece.tufts.edu> Hello, I teach a class in digital image processing in the ECE department at Tufts University. It has a pretty large programming component which has been in Matlab. I am thinking about moving to Python and was wondering if anyone else out there has done this? Thanks Eric -- ========================================================== Prof. Eric Miller Dept. of Electrical and Computer Engineering Associate Dean of Research, Tufts School of Engineering Email: elmiller at ece.tufts.edu Web: http://www.ece.tufts.edu/~elmiller/elmhome/ Phone: 617.627.0835 FAX: 617.627.3220 Ground: Halligan Hall, 161 College Ave., Medford Ma, 02155 ========================================================== From jturner at gemini.edu Fri Mar 25 16:36:59 2011 From: jturner at gemini.edu (James Turner) Date: Fri, 25 Mar 2011 17:36:59 -0300 Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy In-Reply-To: <4D8CF861.9030302@gemini.edu> References: <4D8CF861.9030302@gemini.edu> Message-ID: <4D8CFCEB.3010502@gemini.edu> Better news -- I have got my input file down to 1.1Mb and still get the error, if anyone wants to see it. Thanks. On 25/03/11 17:17, James Turner wrote: > I'm getting a segmentation fault with no traceback from my Python > script, even after upgrading to the latest stable NumPy 1.5.1 and > SciPy 0.9.0. I have managed to whittle the code down to 30 lines > of Python in a single file that still reproduces the error, with > only NumPy, SciPy and PyFITS as dependencies (previously there was > a whole chain of PyRAF scripts). However, my test script still > requires a 109Mb input file. > > Would someone mind pointing me in the right direction either to > report this properly or troubleshoot it further, please? Should I > make a Trac ticket on the SciPy developers' page? I can put the > input file on an sftp server if that helps (and perhaps reduce it > to 40Mb). I have figured out how to run gdb on my script (thanks > to STScI) and am pasting the output from "run" and "where" below, > but my familiarity with the debugger is otherwise very limited. > > As you can see from gdb, the crash occurs in numpy, but I suspect > the cause of the problem may be the array that is produced at the > previous line by interp.splev(). Printing **des in gdb shows some > funny characters and an apparently-huge reference count, but I > don't really know what I'm doing / looking for. > > Thanks a lot! > > James. > > [... inline attachements removed ...] -- James E.H. Turner Gemini Observatory Southern Operations Centre, Casilla 603, Tel. (+56) 51 205609 La Serena, Chile. Fax. (+56) 51 205650 From pav at iki.fi Fri Mar 25 17:35:38 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 25 Mar 2011 21:35:38 +0000 (UTC) Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy References: <4D8CF861.9030302@gemini.edu> <4D8CFCEB.3010502@gemini.edu> Message-ID: On Fri, 25 Mar 2011 17:36:59 -0300, James Turner wrote: > Better news -- I have got my input file down to 1.1Mb and still get the > error, if anyone wants to see it. Thanks. I'd suggest filing a bug ticket in the trac: http://projects.scipy.org/scipy/newticket You need to register an account there (click Register on the upper right) first, though. If you can upload the sample file somewhere public, that would be useful, in case Trac complains that the attachment is too large. 
(If you don't have any public http space at hand, you can send the attachment to me, and I'll put it publicly available somewhere.) -- Pauli Virtanen From jturner at gemini.edu Fri Mar 25 18:25:01 2011 From: jturner at gemini.edu (James Turner) Date: Fri, 25 Mar 2011 19:25:01 -0300 Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy In-Reply-To: References: <4D8CF861.9030302@gemini.edu> <4D8CFCEB.3010502@gemini.edu> Message-ID: <4D8D163D.6060309@gemini.edu> > I'd suggest filing a bug ticket in the trac: > > http://projects.scipy.org/scipy/newticket Done, thanks: ticket #1414. > If you can upload the sample file somewhere public, that would be useful, > in case Trac complains that the attachment is too large. (If you don't > have any public http space at hand, you can send the attachment to me, > and I'll put it publicly available somewhere.) I've put it on an sftp site. I hope that's OK. It will expire after 1 month. Let me know if you'd rather just have me email it to you. Cheers, James. From pav at iki.fi Fri Mar 25 18:41:35 2011 From: pav at iki.fi (Pauli Virtanen) Date: Fri, 25 Mar 2011 22:41:35 +0000 (UTC) Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy References: <4D8CF861.9030302@gemini.edu> <4D8CFCEB.3010502@gemini.edu> <4D8D163D.6060309@gemini.edu> Message-ID: On Fri, 25 Mar 2011 19:25:01 -0300, James Turner wrote: [clip] > I've put it on an sftp site. I hope that's OK. It will expire after 1 > month. Let me know if you'd rather just have me email it to you. Thanks. I overrode the size limit on the Trac and uploaded the file also there. -- Pauli Virtanen From jtravs at gmail.com Sat Mar 26 07:49:34 2011 From: jtravs at gmail.com (John Travers) Date: Sat, 26 Mar 2011 12:49:34 +0100 Subject: [SciPy-User] Problem with scipy.interpolate.RectBivariateSpline In-Reply-To: References: Message-ID: <9B2CFE55-65E1-4507-A646-87CDD3D4F313@gmail.com> On Mar 22, 2011, at 11:07 AM, Peter Baek wrote: > Hi, > > I find it strange that scipy.interpolate.RectBivariateSpline cannot > evaluate a random vector. When i evaluate an ordered vector using e.g. > linspace it works fine, but when i try a random vector it crashes. > > Please help me find a way to evaluate an unordered vector. > > Thanks, > Peter. Hi Peter, This is a restriction of the underlying implementation. To evaluate an unordered vector use argsort: i = argsort(yi2) ii = argsort(i) figure(3) plot(yi1,finter_x(yi2[i],xi2)[ii]) where we have used a second argsort to re-order the output of the interpolation routine. Hope this helps, John From wkerzendorf at googlemail.com Sat Mar 26 08:30:32 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 26 Mar 2011 23:30:32 +1100 Subject: [SciPy-User] LinearNDInterpolator crashing Message-ID: <4D8DDC68.40503@gmail.com> Hey, With some of my grids, the LinearNDInterpolator is crashing. Well it just hangs and still uses CPU. The grids can be quite large and complex so it is hard for me to just upload it. What ways do I have to debug what qhull does? Is there some debug mode that I can enable? Ah I sort the points, before I give them to the grid. Cheers Wolfgang From wkerzendorf at googlemail.com Sat Mar 26 08:52:55 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sat, 26 Mar 2011 23:52:55 +1100 Subject: [SciPy-User] variable smoothing kernel Message-ID: <4D8DE1A7.4090603@gmail.com> Hello, I'm interested in having a gaussian smoothing where the kernel depends (linearly in this case) on the index where it is operating on. 
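Schematically I mean something like the brute-force loop below (only a rough illustration, the function name and parameter values are made up and this is not my real code):

import numpy as np

def variable_gaussian_smooth(y, sigma0, slope):
    # Gaussian smoothing where the kernel width grows linearly with the
    # index: sigma(i) = sigma0 + slope*i.  O(N**2), so it gets slow quickly.
    x = np.arange(len(y))
    out = np.empty(len(y))
    for i in x:
        sigma = sigma0 + slope*i
        w = np.exp(-0.5*((x - i)/sigma)**2)
        out[i] = np.dot(w, y)/w.sum()
    return out

y = np.random.randn(8000)
smoothed = variable_gaussian_smooth(y, sigma0=2.0, slope=1e-3)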
I implemented it myself (badly probably) and it takes for ever, compared to the gaussian smoothing with a fixed kernel in ndimage. I could interpolate the array to be smoothed onto a log space and not change the kernel, but that is complicated and I'd rather avoid it. Is there a good way of doing that? Cheers Wolfgang From pav at iki.fi Sat Mar 26 10:30:10 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sat, 26 Mar 2011 14:30:10 +0000 (UTC) Subject: [SciPy-User] LinearNDInterpolator hanging References: <4D8DDC68.40503@gmail.com> Message-ID: On Sat, 26 Mar 2011 23:30:32 +1100, Wolfgang Kerzendorf wrote: > With some of my grids, the LinearNDInterpolator is crashing. Well it > just hangs and still uses CPU. Probably same problem as http://projects.scipy.org/scipy/ticket/1412 I'm looking into it. The problem seems to be that if there are very thin sliver triangles in the triangulation, they can cause rounding errors in the barycentric transforms in the triangle walking algorithm and make it think two triangles are both closer to the target point. This can be solved by limiting the iteration count, and falling back to brute force if it fails. I'll make that change. > Ah I sort the points, before I give them to the grid. That's needed for the 1-D problem only. -- Pauli Virtanen From Chris.Barker at noaa.gov Sat Mar 26 14:45:49 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Sat, 26 Mar 2011 11:45:49 -0700 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8DE1A7.4090603@gmail.com> References: <4D8DE1A7.4090603@gmail.com> Message-ID: <4D8E345D.2010402@noaa.gov> On 3/26/11 5:52 AM, Wolfgang Kerzendorf wrote: > I'm interested in having a gaussian smoothing where the kernel depends > (linearly in this case) on the index where it is operating on. I > implemented it myself (badly probably) and it takes for ever, compared > to the gaussian smoothing with a fixed kernel in ndimage. I don't know of any code that does this out of the box. If you post you code here, folks may be able suggest ways to improve the performance. There is a limit to how fast you can do this with pure python, if you want do better than that, give Cython a try -- this would be pretty easy to write with Cython: http://wiki.cython.org/tutorials/numpy -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From nwerneck at gmail.com Sat Mar 26 21:09:34 2011 From: nwerneck at gmail.com (Nicolau Werneck) Date: Sat, 26 Mar 2011 22:09:34 -0300 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8DE1A7.4090603@gmail.com> References: <4D8DE1A7.4090603@gmail.com> Message-ID: If I understand correctly, you want a filter that varies on "time". This non-linearity will cause it to be inherently more complicated to calculate than a normal linear time-invariant filter. I second Christopher's suggestion, try Cython out, it's great for this kind of thing. Or perhaps scipy.weave. ++nic On Sat, Mar 26, 2011 at 9:52 AM, Wolfgang Kerzendorf wrote: > Hello, > > I'm interested in having a gaussian smoothing where the kernel depends > (linearly in this case) on the index where it is operating on. I > implemented it myself (badly probably) and it takes for ever, compared > to the gaussian smoothing with a fixed kernel in ndimage. 
> > I could interpolate the array to be smoothed onto a log space and not > change the kernel, but that is complicated and I'd rather avoid it. > > Is there a good way of doing that? > > Cheers > ? ? Wolfgang > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -- Nicolau Werneck ? ? ? ?? C3CF E29F 5350 5DAA 3705 http://www.lti.pcs.usp.br/~nwerneck? ? ? ? ? ? ? ? ?? 7B9E D6C4 37BB DA64 6F15 Linux user #460716 From wkerzendorf at googlemail.com Sat Mar 26 21:54:58 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sun, 27 Mar 2011 12:54:58 +1100 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: References: <4D8DE1A7.4090603@gmail.com> Message-ID: <4D8E98F2.8090109@gmail.com> Well, your time is my wavelength. It should vary on wavelength. I agree implementing it with cython makes it probably faster. I do suspect however, that it won't be as fast as the normal smoothing function. I have learned that multiplying functions in fourier space is the same as convoluting them. I believe that is how the ndimage kernels work so incredibly fast. I wanted to see if there's a similar shortcut for a variable kernel. I have copied my previous attempts (which were very simply written and take a long time) into this pastebin: http://pastebin.com/KkcEATs7 Thanks for your help Wolfgang On 27/03/11 12:09 PM, Nicolau Werneck wrote: > If I understand correctly, you want a filter that varies on "time". > This non-linearity will cause it to be inherently more complicated to > calculate than a normal linear time-invariant filter. > > I second Christopher's suggestion, try Cython out, it's great for this > kind of thing. Or perhaps scipy.weave. > > ++nic > > On Sat, Mar 26, 2011 at 9:52 AM, Wolfgang Kerzendorf > wrote: >> Hello, >> >> I'm interested in having a gaussian smoothing where the kernel depends >> (linearly in this case) on the index where it is operating on. I >> implemented it myself (badly probably) and it takes for ever, compared >> to the gaussian smoothing with a fixed kernel in ndimage. >> >> I could interpolate the array to be smoothed onto a log space and not >> change the kernel, but that is complicated and I'd rather avoid it. >> >> Is there a good way of doing that? >> >> Cheers >> Wolfgang >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user >> > > From nwerneck at gmail.com Sat Mar 26 23:01:10 2011 From: nwerneck at gmail.com (Nicolau Werneck) Date: Sun, 27 Mar 2011 00:01:10 -0300 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8E98F2.8090109@gmail.com> References: <4D8DE1A7.4090603@gmail.com> <4D8E98F2.8090109@gmail.com> Message-ID: <20110327030110.GA6400@spirit> On Sun, Mar 27, 2011 at 12:54:58PM +1100, Wolfgang Kerzendorf wrote: > Well, your time is my wavelength. It should vary on wavelength. OK, but what kind of data do you have? It is a 1-dimensional signal, and you have taken its Fourier transform and now you are filtering in the frequency domain?... > I agree implementing it with cython makes it probably faster. I do > suspect however, that it won't be as fast as the normal smoothing function. > > I have learned that multiplying functions in fourier space is the same > as convoluting them. I believe that is how the ndimage kernels work so > incredibly fast. > I wanted to see if there's a similar shortcut for a variable kernel. 
Implementing in Cython will make it _definitely_ better!... Whenever you have large loops running over vectors or arrays Cython will give you great speedups. And as I was saying later, it will never be as fast because applying a linear filter is something inherently easier... Because you can use the FFT and multiply in the transform domain, as you said. In your case maybe you could consider to filter the signal with a filter bank, and then pick up the values from the result according to the formula you use for calculating your kernel. It may or may not be quicker, but it's not possible if you need infinite precision in the parameters of your filter. > I have copied my previous attempts (which were very simply written and > take a long time) into this pastebin: http://pastebin.com/KkcEATs7 Thanks for sending it... But it's not clear to me how it works in a first glance. Can you send a small sample with a synthetic signal (randn, whatever) showing how to run the procedures? And question: is there any chance you could in your problem first apply some mapping of your signal, a change of variables (like x->log(x) )and then apply a normal linear time-invariant filter with this transform, and then apply the inverse transform? In that case you would first use interpolation to perform the mapping, then apply the fast filtering procedure, and do the inverse interpolation... ++nic > > Thanks for your help > Wolfgang > On 27/03/11 12:09 PM, Nicolau Werneck wrote: > > If I understand correctly, you want a filter that varies on "time". > > This non-linearity will cause it to be inherently more complicated to > > calculate than a normal linear time-invariant filter. > > > > I second Christopher's suggestion, try Cython out, it's great for this > > kind of thing. Or perhaps scipy.weave. > > > > ++nic > > > > On Sat, Mar 26, 2011 at 9:52 AM, Wolfgang Kerzendorf > > wrote: > >> Hello, > >> > >> I'm interested in having a gaussian smoothing where the kernel depends > >> (linearly in this case) on the index where it is operating on. I > >> implemented it myself (badly probably) and it takes for ever, compared > >> to the gaussian smoothing with a fixed kernel in ndimage. > >> > >> I could interpolate the array to be smoothed onto a log space and not > >> change the kernel, but that is complicated and I'd rather avoid it. > >> > >> Is there a good way of doing that? > >> > >> Cheers > >> Wolfgang > >> _______________________________________________ > >> SciPy-User mailing list > >> SciPy-User at scipy.org > >> http://mail.scipy.org/mailman/listinfo/scipy-user > >> > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- Nicolau Werneck C3CF E29F 5350 5DAA 3705 http://www.lti.pcs.usp.br/~nwerneck 7B9E D6C4 37BB DA64 6F15 Linux user #460716 "A huge gap exists between what we know is possible with today's machines and what we have so far been able to finish." -- Donald Knuth From wkerzendorf at googlemail.com Sun Mar 27 03:16:21 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sun, 27 Mar 2011 18:16:21 +1100 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <20110327030110.GA6400@spirit> References: <4D8DE1A7.4090603@gmail.com> <4D8E98F2.8090109@gmail.com> <20110327030110.GA6400@spirit> Message-ID: <4D8EE445.7040709@gmail.com> So I have now considered interpolating on a logarithmic spacing. This is very fast (spectrum with 8000 points > 1 ms). 
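In outline the procedure now looks roughly like this (a simplified sketch, with an invented function name and made-up resolution and grid numbers, not the code I actually run):

import numpy as np
from scipy import interpolate, ndimage

def smooth_to_resolution(wave, flux, R, oversample=2.0):
    # Resample onto a log-spaced wavelength grid, where a kernel with
    # FWHM = lambda/R has a constant width of 1/R in log(lambda), smooth
    # with a fixed Gaussian, then resample back onto the input grid.
    n = int(oversample*len(wave))
    logw = np.linspace(np.log(wave[0]), np.log(wave[-1]), n)
    dlog = logw[1] - logw[0]

    tck = interpolate.splrep(wave, flux, k=1)
    flux_log = interpolate.splev(np.exp(logw), tck)

    # Convert the FWHM of 1/R in log(lambda) to a sigma in pixels.
    sigma_pix = (1.0/R)/(2.0*np.sqrt(2.0*np.log(2.0)))/dlog
    smoothed_log = ndimage.gaussian_filter1d(flux_log, sigma_pix)

    tck2 = interpolate.splrep(np.exp(logw), smoothed_log, k=1)
    return interpolate.splev(wave, tck2)

wave = np.linspace(400.0, 700.0, 8000)   # nm
flux = 1.0 + 0.1*np.random.randn(wave.size)
smoothed = smooth_to_resolution(wave, flux, R=2000.0)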
I think for now this is a good method. I have also checked the errors associated with interpolating to log space and back. They seem to be relatively small. Thanks for all your suggestions and help. Cheers Wolfgang On 27/03/11 2:01 PM, Nicolau Werneck wrote: > On Sun, Mar 27, 2011 at 12:54:58PM +1100, Wolfgang Kerzendorf wrote: >> Well, your time is my wavelength. It should vary on wavelength. > OK, but what kind of data do you have? It is a 1-dimensional signal, > and you have taken its Fourier transform and now you are filtering in > the frequency domain?... > >> I agree implementing it with cython makes it probably faster. I do >> suspect however, that it won't be as fast as the normal smoothing function. >> >> I have learned that multiplying functions in fourier space is the same >> as convoluting them. I believe that is how the ndimage kernels work so >> incredibly fast. >> I wanted to see if there's a similar shortcut for a variable kernel. > Implementing in Cython will make it _definitely_ better!... Whenever > you have large loops running over vectors or arrays Cython will give > you great speedups. > > And as I was saying later, it will never be as fast because applying a > linear filter is something inherently easier... Because you can use the FFT > and multiply in the transform domain, as you said. > > In your case maybe you could consider to filter the signal with a > filter bank, and then pick up the values from the result according to > the formula you use for calculating your kernel. It may or may not be > quicker, but it's not possible if you need infinite precision in the > parameters of your filter. > >> I have copied my previous attempts (which were very simply written and >> take a long time) into this pastebin: http://pastebin.com/KkcEATs7 > Thanks for sending it... But it's not clear to me how it works in a > first glance. Can you send a small sample with a synthetic signal > (randn, whatever) showing how to run the procedures? > > And question: is there any chance you could in your problem first > apply some mapping of your signal, a change of variables (like > x->log(x) )and then apply a normal linear time-invariant filter with > this transform, and then apply the inverse transform? In that case you > would first use interpolation to perform the mapping, then apply the > fast filtering procedure, and do the inverse interpolation... > > > ++nic > > >> Thanks for your help >> Wolfgang >> On 27/03/11 12:09 PM, Nicolau Werneck wrote: >>> If I understand correctly, you want a filter that varies on "time". >>> This non-linearity will cause it to be inherently more complicated to >>> calculate than a normal linear time-invariant filter. >>> >>> I second Christopher's suggestion, try Cython out, it's great for this >>> kind of thing. Or perhaps scipy.weave. >>> >>> ++nic >>> >>> On Sat, Mar 26, 2011 at 9:52 AM, Wolfgang Kerzendorf >>> wrote: >>>> Hello, >>>> >>>> I'm interested in having a gaussian smoothing where the kernel depends >>>> (linearly in this case) on the index where it is operating on. I >>>> implemented it myself (badly probably) and it takes for ever, compared >>>> to the gaussian smoothing with a fixed kernel in ndimage. >>>> >>>> I could interpolate the array to be smoothed onto a log space and not >>>> change the kernel, but that is complicated and I'd rather avoid it. >>>> >>>> Is there a good way of doing that? 
>>>> >>>> Cheers >>>> Wolfgang >>>> _______________________________________________ >>>> SciPy-User mailing list >>>> SciPy-User at scipy.org >>>> http://mail.scipy.org/mailman/listinfo/scipy-user >>>> >>> >> _______________________________________________ >> SciPy-User mailing list >> SciPy-User at scipy.org >> http://mail.scipy.org/mailman/listinfo/scipy-user From wkerzendorf at googlemail.com Sun Mar 27 03:19:47 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Sun, 27 Mar 2011 18:19:47 +1100 Subject: [SciPy-User] LinearNDInterpolator hanging In-Reply-To: References: <4D8DDC68.40503@gmail.com> Message-ID: <4D8EE513.3000807@gmail.com> Thanks a lot Paul. I'm looking forward to the fix. Ah one other thing: Is there a way to limit interpolation if the triangle is a certain size. Meaning, that it will return nan or the fill_value if the values are too far away. I can send you a plot if that helps. Cheers Wolfgang On 27/03/11 1:30 AM, Pauli Virtanen wrote: > On Sat, 26 Mar 2011 23:30:32 +1100, Wolfgang Kerzendorf wrote: >> With some of my grids, the LinearNDInterpolator is crashing. Well it >> just hangs and still uses CPU. > Probably same problem as > > http://projects.scipy.org/scipy/ticket/1412 > > I'm looking into it. The problem seems to be that if there are very thin > sliver triangles in the triangulation, they can cause rounding errors in > the barycentric transforms in the triangle walking algorithm and make it > think two triangles are both closer to the target point. > > This can be solved by limiting the iteration count, and falling back to > brute force if it fails. I'll make that change. > >> Ah I sort the points, before I give them to the grid. > That's needed for the 1-D problem only. > From sturla at molden.no Sun Mar 27 08:24:07 2011 From: sturla at molden.no (Sturla Molden) Date: Sun, 27 Mar 2011 14:24:07 +0200 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8E98F2.8090109@gmail.com> References: <4D8DE1A7.4090603@gmail.com> <4D8E98F2.8090109@gmail.com> Message-ID: <4D8F2C67.3000003@molden.no> Den 27.03.2011 03:54, skrev Wolfgang Kerzendorf: > I have learned that multiplying functions in fourier space is the same > as convoluting them. Yes. The best strategy is to do this chunk-wise, however, instead of FFT'ing the whole signal. This leads tot he so-called "overlap-and-add" method, used by e.g. scipy.signal.fftfilt. fftfilt tries to guess the optimum chunk-size to use for filtering. For short FIR-filters it will be faster to filter in the time-domain. For moving average filters one can use a very fast recursive filter. Bartlet filter can be implemented by appying MA twice, Gaussian can be approximated by applying MA four times. (A better IIR-approximation for the Gaussian is available, howver, see below.) > I believe that is how the ndimage kernels work so > incredibly fast. No. They truncate the Gaussian, which is why compute time depends on filter size. Cf. the comment on short FIR-filters above. For the Gaussian it is possible to use a recursive IIR filter instead, for which compute time does not depend on filter size. > I wanted to see if there's a similar shortcut for a variable kernel. Not in general, because the filter will be non-linear. Sturla -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: _gaussian.c URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: gaussian.h URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: gaussian.pyx URL: From wkerzendorf at googlemail.com Sun Mar 27 09:41:45 2011 From: wkerzendorf at googlemail.com (Wolfgang Kerzendorf) Date: Mon, 28 Mar 2011 00:41:45 +1100 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8F2C67.3000003@molden.no> References: <4D8DE1A7.4090603@gmail.com> <4D8E98F2.8090109@gmail.com> <4D8F2C67.3000003@molden.no> Message-ID: <4D8F3E99.1040704@gmail.com> I guess by FIR-filters you don't mean Far InfraRed ;-). I guess if I describe my specific case in more detail than that might help: My Signal is an optical spectrum which has nm on the x axis and intensity on the y. I am trying to simulate the effects of the optical properties of a real spectrograph on synthetic spectra. Basically we describe this as the resolution (R) = lambda/(fwhm(lambda). That's why I have a variable kernel size. So for the moment I interpolate the evenly linear spaced spectrum on log space spectrum (I use splrep and splev with k=1, rather than interp1d. I hope that's a good idea) and then calculate the kernel size and use ndimage.gaussian_filter1d. Ah while we are on the topic: I sometimes have synthetic spectra which are irregularly sampled. This means what I am doing is resampling it on a linear grid using smallest wavelength delta (which can be quite small). This obviously expands the spectra to many times its original size. Then I would resample it on a log spacing where the largest delta is equal to the smallest delta on the linear scale. This makes it even larger. There is probably a much smarter way. I should also mention that my knowledge about signal processing is limited. Your help is very much appreciated, Wolfgang On 27/03/11 11:24 PM, Sturla Molden wrote: > Den 27.03.2011 03:54, skrev Wolfgang Kerzendorf: >> I have learned that multiplying functions in fourier space is the same >> as convoluting them. > > Yes. > > The best strategy is to do this chunk-wise, however, instead of > FFT'ing the whole signal. This leads tot he so-called > "overlap-and-add" method, used by e.g. scipy.signal.fftfilt. fftfilt > tries to guess the optimum chunk-size to use for filtering. > > For short FIR-filters it will be faster to filter in the time-domain. > > For moving average filters one can use a very fast recursive filter. > Bartlet filter can be implemented by appying MA twice, Gaussian can be > approximated by applying MA four times. (A better IIR-approximation > for the Gaussian is available, howver, see below.) > > >> I believe that is how the ndimage kernels work so >> incredibly fast. > > No. They truncate the Gaussian, which is why compute time depends on > filter size. Cf. the comment on short FIR-filters above. > > For the Gaussian it is possible to use a recursive IIR filter instead, > for which compute time does not depend on filter size. > > > >> I wanted to see if there's a similar shortcut for a variable kernel. > > Not in general, because the filter will be non-linear. > > > Sturla > > > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pav at iki.fi Sun Mar 27 10:42:24 2011 From: pav at iki.fi (Pauli Virtanen) Date: Sun, 27 Mar 2011 14:42:24 +0000 (UTC) Subject: [SciPy-User] LinearNDInterpolator hanging References: <4D8DDC68.40503@gmail.com> <4D8EE513.3000807@gmail.com> Message-ID: On Sun, 27 Mar 2011 18:19:47 +1100, Wolfgang Kerzendorf wrote: > Thanks a lot Paul. I'm looking forward to the fix. It's fixed now in the Git tree. > Ah one other thing: > Is there a way to limit interpolation if the triangle is a certain size. > Meaning, that it will return nan or the fill_value if the values are too > far away. I can send you a plot if that helps. This would require adding to the algorithm a check whether a triangle is "masked" or not --- then the user could specify the mask using whatever criterion they deem useful. This would not be too difficult to do, at least if one does not think about gradient estimation, but I don't know how widely useful this would be. You can almost emulate this by going through the simplices in the triangulation, and setting the data values to `nan` for the vertices that belong to triangles that are too large. This will kill also the neighbouring triangles, though. Pauli From sturla at molden.no Sun Mar 27 10:55:23 2011 From: sturla at molden.no (Sturla Molden) Date: Sun, 27 Mar 2011 16:55:23 +0200 Subject: [SciPy-User] variable smoothing kernel In-Reply-To: <4D8F3E99.1040704@gmail.com> References: <4D8DE1A7.4090603@gmail.com> <4D8E98F2.8090109@gmail.com> <4D8F2C67.3000003@molden.no> <4D8F3E99.1040704@gmail.com> Message-ID: <4D8F4FDB.3080006@molden.no> Den 27.03.2011 15:41, skrev Wolfgang Kerzendorf: > I guess by FIR-filters you don't mean Far InfraRed ;-). Finite Impulse Response :) Sturla -------------- next part -------------- An HTML attachment was scrubbed... URL: From crmpeter at gmail.com Mon Mar 28 03:19:37 2011 From: crmpeter at gmail.com (=?UTF-8?B?0J/RgNC10LTQtdC40L0g0J8uINCQLg==?=) Date: Mon, 28 Mar 2011 16:19:37 +0900 Subject: [SciPy-User] How-to move guiqwt plots in code? Message-ID: guiqwt graphics library is excellent for embedding in my apllication on PyQt4 for visualization purposes. But I think it's way to move\drag canvas by mouse (clicking both buttons and moving) is quite uneasy. I attached "keyPressEvent" for this purposes to my program's class definition, but still dont know how-to make programmed moving guiqwt plot. Consider any help, please. From bsouthey at gmail.com Mon Mar 28 10:23:22 2011 From: bsouthey at gmail.com (Bruce Southey) Date: Mon, 28 Mar 2011 09:23:22 -0500 Subject: [SciPy-User] Seg. fault from scipy.interpolate or numpy In-Reply-To: <4D8CF861.9030302@gemini.edu> References: <4D8CF861.9030302@gemini.edu> Message-ID: On Fri, Mar 25, 2011 at 3:17 PM, James Turner wrote: > I'm getting a segmentation fault with no traceback from my Python > script, even after upgrading to the latest stable NumPy 1.5.1 and > SciPy 0.9.0. I have managed to whittle the code down to 30 lines > of Python in a single file that still reproduces the error, with > only NumPy, SciPy and PyFITS as dependencies (previously there was > a whole chain of PyRAF scripts). However, my test script still > requires a 109Mb input file. > > Would someone mind pointing me in the right direction either to > report this properly or troubleshoot it further, please? Should I > make a Trac ticket on the SciPy developers' page? I can put the > input file on an sftp server if that helps (and perhaps reduce it > to 40Mb). 
I have figured out how to run gdb on my script (thanks > to STScI) and am pasting the output from "run" and "where" below, > but my familiarity with the debugger is otherwise very limited. > > As you can see from gdb, the crash occurs in numpy, but I suspect > the cause of the problem may be the array that is produced at the > previous line by interp.splev(). Printing **des in gdb shows some > funny characters and an apparently-huge reference count, but I > don't really know what I'm doing / looking for. > > Thanks a lot! > > James. > [snip] > > import pyfits > import numpy as np > import scipy.interpolate as interp > > input = pyfits.open("brgN20080607S0485.fits")[1].data > > gain = 1.0 > readn = 3.5 > > xdim = input.shape[1] > ydim = input.shape[0] > xorder = 2*xdim/4.3 > yorder = 2*ydim/4.5 > imgx = range(xdim) > imgy = range(ydim) > xstep = (xdim-1) / xorder > ystep = (ydim-1) / yorder > knots = np.arange(xstep, xdim-1-xstep, xstep) > > oldoutput = input.copy() > objfit = oldoutput.copy() > > for yc in range(ydim): > ? ? tck = interp.splrep(imgx, oldoutput[yc], k=3, t=knots) > ? ? objfit[yc] = interp.splev(imgx, tck=tck) > > noise = np.sqrt(objfit.clip(min=0.0)*gain + readn*readn) / gain > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > [I did see the ticket comments and it doesn't crash for me on Linux and Python2.7.] Why do you use 'input' for as the variable name? This could be a problem because input is a builtin function. If the error goes away then it is most likely a Python bug. Second, is there a reason for using copy() for 'objfit' instead of one of these? zeros_like : Return an array of zeros with shape and type of input. ones_like : Return an array of ones with shape and type of input. empty_like : Return an empty array with shape and type of input. ones : Return a new array setting values to one. empty : Return a new uninitialized array. So can you determine exactly which line it crashes on (especially the value yc) and the shapes of the arrays? Bruce From kiyo at cita.utoronto.ca Mon Mar 28 12:28:42 2011 From: kiyo at cita.utoronto.ca (Kiyoshi Masui) Date: Mon, 28 Mar 2011 12:28:42 -0400 Subject: [SciPy-User] Making an array subclass, were all all shape changing operations return a normal array. Message-ID: <4D90B73A.3050904@cita.utoronto.ca> Hi all, I have an array subclass, where some of the extra attributes are only valid for the object's original shape. Is there a way to make sure that all array shape changing operations return a normal numpy array instead of an instance of my class? I've already written array_wrap, but this doesn't seem to have any effect on operations like mean(), sum() and rollaxis(). def __array_wrap__(self, out_arr, context=None) : if out_arr.shape == self.shape : out = out_arr.view(new_array) # Do a bunch of class dependant initialization . . . return out else : return sp.asarray(out_arr) Thanks, Kiyo From nouiz at nouiz.org Mon Mar 28 16:48:34 2011 From: nouiz at nouiz.org (=?ISO-8859-1?Q?Fr=E9d=E9ric_Bastien?=) Date: Mon, 28 Mar 2011 16:48:34 -0400 Subject: [SciPy-User] efficient way to store and use a 4D redundant matrix In-Reply-To: <4D832D7A.3080308@relativita.com> References: <4D80DA13.2060109@relativita.com> <4D832D7A.3080308@relativita.com> Message-ID: If the data is easily compressible maybe you can use carray[1]? It keep data compressed in memory and allow to make some operation direction on the compressed data. 
Fr?d?ric Bastien [1] http://pypi.python.org/pypi/carray/0.4 On Fri, Mar 18, 2011 at 6:01 AM, Emanuele Olivetti wrote: > On 03/16/2011 08:31 PM, Daniel Lepage wrote: >> [...] >> >> The sparse matrix formats will only help you if you can rewrite A in >> terms of matrices that are mostly 0. >> > > Correct. This is not my case, you are right. > >> Do you need the results of slicing, reshaping, etc. to also be >> similarly compressed? If so, I can't see any way to implement this >> without an index array, because once you reshape or slice A you won't >> know which cells correspond to which indices in the original A. >> > > I will have a deeper look to a solution with index array. Thanks for > pointing it out. > >> If you're only taking small slices of this and then applying linear >> algebra operations to those, you might be better off writing a class >> that looks up the relevant values on the fly; you could overload >> __getitem__ so that e.g. A[:,1,:,3] would generate the correct float64 >> array on the fly and return it. >> > > Unfortunately I am not playing with small slices. So I guess that overloading > __getitem__ would be impractical. > >> However, if the nonredundant part takes only ~4MB, then maybe I don't >> understand your layout - for a 100x100x100x100 and 64-bit floats, I >> think the nonredundant part should take ((100 choose 4) + ((100 choose >> 3) * 3) + ((100 choose 2) * 3) + (100 choose 1)) * 8 bytes = about >> 34MB. Was that a math error, or am I misunderstanding the question? >> > > My fault. It is indeed ~34Mb. I missed one order of magnitude when > computing (100**4 * 8byte) / 24 . > > Thanks again, > > Emanuele > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > From cweisiger at msg.ucsf.edu Mon Mar 28 18:22:03 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Mon, 28 Mar 2011 15:22:03 -0700 Subject: [SciPy-User] Strange memory limits Message-ID: (This is unrelated to my earlier question about 2D data slicing) We have a 32-bit Windows program that has Python bindings which do most of the program logic, reserving the C++ side for heavy lifting. This program needs to reserve buffers of memory to accept incoming image data from our different cameras -- it waits until it has received an image from all active cameras, then saves the image to disk, repeat until all images are in. So the Python side uses numpy to allocate a block of memory, then hands it off to the C++ side where images are written to it and then later stored. Ordinarily all of our cameras are operating in sync so the delay between the first and last cameras is small, so we can keep the memory buffer small. I'm working on a modified data collection mode where each camera does a lengthy independent sequence, though, requiring me to either rewrite the data saving system or simply increase the buffer size. Increasing the buffer size works just fine until I try to allocate about a 3x735x512x512 array (camera/Z/X/Y) of 16-bit ints, at which point I get a MemoryError. This is only a bit over 1GB worth of memory (out of 12GB on the computer), and according to Windows' Task Manager the program was only using about 100MB before I tried the allocation -- of course, I've no idea how the Task Manager maps to how much RAM I've actually requested. So that's a bit strange. I ought to have 4GB worth of space (or at the very least 3GB), which is more than enough for what I need. 
Short of firing up a memory debugger, any suggestions for tracking down big allocations? Numpy *should* be our only major offender here aside from the C++ portion of the program, which is small enough for me to examine by hand. Would it be reasonable to expect to see this problem go away if we rebuilt as a 64-bit program with 64-bit numpy et al? Thanks for your time. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_baddeley at yahoo.com.au Mon Mar 28 19:51:27 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Mon, 28 Mar 2011 16:51:27 -0700 (PDT) Subject: [SciPy-User] Strange memory limits In-Reply-To: References: Message-ID: <588427.67506.qm@web113412.mail.gq1.yahoo.com> Hi Chris, what you're probably running into is a problem with allocating a continuous block of memory / a memory fragmentation issue. Depending on how windows has scattered the bits of you're program, (and how much you've allocated and deleted) you might have lots of small chunks of memory allocated throughout your 3GB address space. When python asks for a contiguous block, it finds that there are none of that size available, despite the fact that the required total amount of memory is. This doesn't just affect python/numpy - I've had major issues with this in Matlab as well (If anything, Matlab seems worse). I've generally found I've been unable to reliably allocate contiguous blocks over ~ 1/4 of the total memory size. This also gets worse the longer windows (and your program) has been running. Compiling as 64 bit might solve your problem, as, with 12 GB of memory, there will be a larger address space to look for contiguous blocks in, but probably doesn't address the fundamental issue. I suspect you could probably get away with having much smaller contiguous blocks (eg have 3 separate arrays for the 3 different cameras) or even a new array for each image. cheers, David ________________________________ From: Chris Weisiger To: SciPy Users List Sent: Tue, 29 March, 2011 11:22:03 AM Subject: [SciPy-User] Strange memory limits (This is unrelated to my earlier question about 2D data slicing) We have a 32-bit Windows program that has Python bindings which do most of the program logic, reserving the C++ side for heavy lifting. This program needs to reserve buffers of memory to accept incoming image data from our different cameras -- it waits until it has received an image from all active cameras, then saves the image to disk, repeat until all images are in. So the Python side uses numpy to allocate a block of memory, then hands it off to the C++ side where images are written to it and then later stored. Ordinarily all of our cameras are operating in sync so the delay between the first and last cameras is small, so we can keep the memory buffer small. I'm working on a modified data collection mode where each camera does a lengthy independent sequence, though, requiring me to either rewrite the data saving system or simply increase the buffer size. Increasing the buffer size works just fine until I try to allocate about a 3x735x512x512 array (camera/Z/X/Y) of 16-bit ints, at which point I get a MemoryError. This is only a bit over 1GB worth of memory (out of 12GB on the computer), and according to Windows' Task Manager the program was only using about 100MB before I tried the allocation -- of course, I've no idea how the Task Manager maps to how much RAM I've actually requested. So that's a bit strange. 
I ought to have 4GB worth of space (or at the very least 3GB), which is more than enough for what I need. Short of firing up a memory debugger, any suggestions for tracking down big allocations? Numpy *should* be our only major offender here aside from the C++ portion of the program, which is small enough for me to examine by hand. Would it be reasonable to expect to see this problem go away if we rebuilt as a 64-bit program with 64-bit numpy et al? Thanks for your time. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From charlesr.harris at gmail.com Mon Mar 28 23:01:05 2011 From: charlesr.harris at gmail.com (Charles R Harris) Date: Mon, 28 Mar 2011 21:01:05 -0600 Subject: [SciPy-User] Strange memory limits In-Reply-To: References: Message-ID: On Mon, Mar 28, 2011 at 4:22 PM, Chris Weisiger wrote: > (This is unrelated to my earlier question about 2D data slicing) > > We have a 32-bit Windows program that has Python bindings which do most of > the program logic, reserving the C++ side for heavy lifting. This program > needs to reserve buffers of memory to accept incoming image data from our > different cameras -- it waits until it has received an image from all active > cameras, then saves the image to disk, repeat until all images are in. So > the Python side uses numpy to allocate a block of memory, then hands it off > to the C++ side where images are written to it and then later stored. > Ordinarily all of our cameras are operating in sync so the delay between the > first and last cameras is small, so we can keep the memory buffer small. I'm > working on a modified data collection mode where each camera does a lengthy > independent sequence, though, requiring me to either rewrite the data saving > system or simply increase the buffer size. > > Increasing the buffer size works just fine until I try to allocate about a > 3x735x512x512 array (camera/Z/X/Y) of 16-bit ints, at which point I get a > MemoryError. This is only a bit over 1GB worth of memory (out of 12GB on the > computer), and according to Windows' Task Manager the program was only using > about 100MB before I tried the allocation -- of course, I've no idea how the > Task Manager maps to how much RAM I've actually requested. So that's a bit > strange. I ought to have 4GB worth of space (or at the very least 3GB), > which is more than enough for what I need. > > Windows 32 bit gives you 2GB and keeps the rest for itself. Chuck -------------- next part -------------- An HTML attachment was scrubbed... URL: From meelmaarten at gmail.com Tue Mar 29 11:20:10 2011 From: meelmaarten at gmail.com (Maarten Smoorenburg) Date: Tue, 29 Mar 2011 17:20:10 +0200 Subject: [SciPy-User] scikits.timeseries: Using convert on hourly and minutely data aggregates forward? Message-ID: Hi, I am very happy with the scikits.timeseries package, but have some issues when I convert the data from 1 frequency to another if these frequencies are higher than daily. More exactly: I do not like it that data is aggregated 'forward' For example: I have constructed a TimeSeries with Minutely frequency and 1 data point every 10 minutes and all other timestamps masked out. In this case the data for example at time stamp 09:10 contains measurements between 09:01 and 09:10 that are summed (typical logger data of a rain gauge). 
If I now want to compute hourly values from this object, using the convert function with func=mean, the new TimeSeries object will have stored the average value observed between 08:50 and 09:49 at time stamp 09:00. I observed that the same happens if one computes the daily average from hourly values. I see that this can be very useful, but would not like to use it as my default procedure though. Here, however, I would need the average computed over the hour before 09:00 instead. I can do this of course by shifting the TimeSeries 50 minutes before doing the convert step, but this is not so convenient and is error prone if one has to deal with many different frequencies of sampling. So the question is: Is it possible with the scikits.timeseries package to perform the aggregation function over the 'past' instead of over the 'future'? Thanks a lot for the help! Maarten -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chris.Barker at noaa.gov Tue Mar 29 13:14:13 2011 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Tue, 29 Mar 2011 10:14:13 -0700 Subject: [SciPy-User] Strange memory limits In-Reply-To: <588427.67506.qm@web113412.mail.gq1.yahoo.com> References: <588427.67506.qm@web113412.mail.gq1.yahoo.com> Message-ID: <4D921365.2040406@noaa.gov> On 3/28/11 4:51 PM, David Baddeley wrote: > what you're probably running into is a problem with allocating a > continuous block of memory / a memory fragmentation issue. On 3/28/11 8:01 PM, Charles R Harris wrote: > Windows 32 bit gives you 2GB and keeps the rest for itself. right -- I've had no problems running the same code with 32 bit python on OS-X that crashes out with memory errors on Windows -- similar hardware. > Compiling as 64 bit might solve your problem, as, with 12 GB of memory, > there will be a larger address space to look for contiguous blocks in, > but probably doesn't address the fundamental issue. Ah, but while you still may only have 12GB memory, with 64 bit Windows and Python, the virtual memory space is massive, so I suspect you'll be fine. Using 1GB memory buffers on 32bit is certainly pushing it. > I suspect you could > probably get away with having much smaller contiguous blocks (eg have 3 > separate arrays for the 3 different cameras) or even a new array for > each image. That would make it easier for the OS to manage the memory well. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From cweisiger at msg.ucsf.edu Tue Mar 29 13:31:23 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Tue, 29 Mar 2011 10:31:23 -0700 Subject: [SciPy-User] Strange memory limits In-Reply-To: <4D921365.2040406@noaa.gov> References: <588427.67506.qm@web113412.mail.gq1.yahoo.com> <4D921365.2040406@noaa.gov> Message-ID: Thanks for the information, all. Annoyingly, I can't allocate 3 individual 735x512x512 arrays (each ~360MB) -- the third gives a memory error. If I allocate 1024x1024 byte arrays (thus, each 1MB), I can make 1500 before getting a memory error. So I'm definitely running into *some* issue that prevents larger blocks from being allocated, but I'm also hitting a ceiling well before I should be.
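A rough way to measure the ceiling being hit here, essentially automating the trial-and-error described above, is to bisect on the size of a single allocation. This is only a sketch (the helper name and the 4 GB cap are arbitrary, and np.empty only reserves address space rather than touching the pages):

import numpy as np

def largest_block_mb(limit_mb=4096):
    # Binary search for the largest contiguous block (in MB) that can be allocated.
    lo, hi = 0, limit_mb
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            block = np.empty(mid * 1024 * 1024, dtype=np.uint8)
            del block
            lo = mid      # succeeded, try something bigger
        except MemoryError:
            hi = mid      # too big, back off

    return lo

print("largest single block: ~%d MB" % largest_block_mb())

On a fragmented 32-bit process this number typically comes out far below the nominal 2-3 GB of address space, which is consistent with David's observation of roughly a quarter of the total.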
I had thought that Windows allowed for 3GB address spaces for 32-bit processes, but apparently (per http://msdn.microsoft.com/en-us/library/aa366778%28v=vs.85%29.aspx#memory_limits) that only applies if the program has IMAGE_FILE_LARGE_ADDRESS_AWARE and 4GT set...sounds like I'd need a recompile and some system tweaks to set those. The proper, and more work-intensive, solution would be to make a 64-bit build. My (admittedly limited) understanding of memory fragmentation was that it's a per-process problem. I'm seeing this issue immediately on starting up the program, so the program's virtual memory address space should be pretty clean. -Chris On Tue, Mar 29, 2011 at 10:14 AM, Christopher Barker wrote: > On 3/28/11 4:51 PM, David Baddeley wrote: > > what you're probably running into is a problem with allocating a > > continuous block of memory / a memory fragmentation issue. > > On 3/28/11 8:01 PM, Charles R Harris wrote: > > > Windows 32 bit gives you 2GB and keeps the rest for itself. > > right -- I've had no problems running the same code with 32 bit python > on OS-X that crashes out with memory errors on Windows -- similar hardware. > > > Compiling as 64 bit might solve your problem, as, with 12 GB of memory, > > there will be a larger address space to look for contiguous blocks in, > > but probably doesn't address the fundamental issue. > > Ah, but while you still may only have 12GB memory, with 64 bit Windows > and Python, the virtual memory space is massive, so I suspect you'll be > fine. Using 1GB memory buffers on 32bit is certainly pushing it. > > > I suspect you could > > probably get away with having much smaller contiguous blocks (eg have 3 > > separate arrays for the 3 different cameras) or even a new array for > > each image. > > That would make it easier for the OS to manage the memory well. > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cgohlke at uci.edu Tue Mar 29 13:39:35 2011 From: cgohlke at uci.edu (Christoph Gohlke) Date: Tue, 29 Mar 2011 10:39:35 -0700 Subject: [SciPy-User] Strange memory limits In-Reply-To: References: <588427.67506.qm@web113412.mail.gq1.yahoo.com> <4D921365.2040406@noaa.gov> Message-ID: <4D921957.9090901@uci.edu> On 3/29/2011 10:31 AM, Chris Weisiger wrote: > Thanks for the information, all. Annoyingly, I can't allocate 3 > individual 735x512x512 arrays (each ~360MB) -- the third gives a memory > error. If I allocate 1024x1024 byte arrays (thus, each 1MB), I can make > 1500 before getting a memory error. So I'm definitely running into > *some* issue that prevents larger blocks from being allocated, but I'm > also hitting a ceiling well before I should be. > > I had thought that Windows allowed for 3GB address spaces for 32-bit > processes, but apparently (per > http://msdn.microsoft.com/en-us/library/aa366778%28v=vs.85%29.aspx#memory_limits > ) that only applies if the program has IMAGE_FILE_LARGE_ADDRESS_AWARE > and 4GT set...sounds like I'd need a recompile and some system tweaks to > set those. The proper, and more work-intensive, solution would be to > make a 64-bit build. 
> > My (admittedly limited) understanding of memory fragmentation was that > it's a per-process problem. I'm seeing this issue immediately on > starting up the program, so the program's virtual memory address space > should be pretty clean. > > -Chris > > On Tue, Mar 29, 2011 at 10:14 AM, Christopher Barker > > wrote: > > On 3/28/11 4:51 PM, David Baddeley wrote: > > what you're probably running into is a problem with allocating a > > continuous block of memory / a memory fragmentation issue. > > On 3/28/11 8:01 PM, Charles R Harris wrote: > > > Windows 32 bit gives you 2GB and keeps the rest for itself. > > right -- I've had no problems running the same code with 32 bit python > on OS-X that crashes out with memory errors on Windows -- similar > hardware. > > > Compiling as 64 bit might solve your problem, as, with 12 GB of > memory, > > there will be a larger address space to look for contiguous blocks in, > > but probably doesn't address the fundamental issue. > > Ah, but while you still may only have 12GB memory, with 64 bit Windows > and Python, the virtual memory space is massive, so I suspect you'll be > fine. Using 1GB memory buffers on 32bit is certainly pushing it. > > > I suspect you could > > probably get away with having much smaller contiguous blocks (eg > have 3 > > separate arrays for the 3 different cameras) or even a new array for > > each image. > > That would make it easier for the OS to manage the memory well. > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chris.Barker at noaa.gov Try VMMap . The software lists, among other useful information, the sizes of contiguous blocks of memory available to a process. You'll probably find that 64 bit Python lets you use a much larger contiguous block than 32 bit Python. It could help to create large numpy arrays early in the program, e.g. before importing packages or creating other arrays. Christoph From cweisiger at msg.ucsf.edu Tue Mar 29 14:12:39 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Tue, 29 Mar 2011 11:12:39 -0700 Subject: [SciPy-User] Strange memory limits In-Reply-To: <4D921957.9090901@uci.edu> References: <588427.67506.qm@web113412.mail.gq1.yahoo.com> <4D921365.2040406@noaa.gov> <4D921957.9090901@uci.edu> Message-ID: On Tue, Mar 29, 2011 at 10:39 AM, Christoph Gohlke wrote: > > Try VMMap > . The > software lists, among other useful information, the sizes of contiguous > blocks of memory available to a process. You'll probably find that 64 > bit Python lets you use a much larger contiguous block than 32 bit Python. > > It could help to create large numpy arrays early in the program, e.g. > before importing packages or creating other arrays. > > Ah, thanks. Looks like there's some very loosely-packed "image" allocations at one end of the heap that are basically precluding allocations of large arrays in that area without actually using up all that much total memory. I wonder if maybe they're for imported Python modules...well, at least now I have a tool to help me figure out where memory's going. The right answer's probably still to just make a 64-bit version though. -Chris -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan at sun.ac.za Tue Mar 29 18:20:51 2011 From: stefan at sun.ac.za (=?ISO-8859-1?Q?St=E9fan_van_der_Walt?=) Date: Wed, 30 Mar 2011 00:20:51 +0200 Subject: [SciPy-User] Image processing class In-Reply-To: <4D8CF9CE.3060800@ece.tufts.edu> References: <4D8CF9CE.3060800@ece.tufts.edu> Message-ID: Dear Eric On Fri, Mar 25, 2011 at 10:23 PM, Eric Miller wrote: > I teach a class in digital image processing in the ECE department at > Tufts University. It has a pretty large programming component which has > been in Matlab. I am thinking about moving to Python and was wondering > if anyone else out there has done this? I've taught from Gonzales & Woods in Python: http://dip.sun.ac.za/~stefan/TW793/ This was one of the reasons I started work on the Image Processing scikit: http://stefanv.github.com/scikits.image Unfortunately, that library does not contain nearly all of MATLAB's image processing functionality yet, but we're working on it (and welcome contributions and advice!) Regards Stéfan From cweisiger at msg.ucsf.edu Tue Mar 29 19:08:38 2011 From: cweisiger at msg.ucsf.edu (Chris Weisiger) Date: Tue, 29 Mar 2011 16:08:38 -0700 Subject: [SciPy-User] 2D slice of transformed data In-Reply-To: References: Message-ID: I just wanted to let you all know that I have this working now, and while it's moderately slow when it has to calculate three slices (of sizes 2x512x512, 2x512x50, and 2x50x512 in my test case), it's not so bad as to be unusable -- and my users have more powerful computers than I do. ...hm, for that matter, I should be able to split off the three slice calculations into separate threads... Anyway, I put my solution online on the off-chance that someone finds it useful: http://paste.ubuntu.com/587110/ I can't guarantee that the comments are especially germane, as this took a lot of fiddling and I haven't gone through to clean everything up yet, but the basic flow should be intelligible: * Figure out which axes the slice cuts across, and generate an array of the appropriate shape to hold the results. * Create an array of similar size augmented with a length-4 dimension. This array holds XYZ coordinates for each pixel in the slice; the 4th index holds a 1 (so that we can use a 4x4 affine transformation matrix to do rotation and offsets in the same pass). For example, an XY slice at Z = 5 would look something like this: [[[0, 0, 5], [0, 1, 5], [0, 2, 5], ...], [[1, 0, 5], ...], [[2, 0, 5], ...], ...] * Subtract the XYZ center off of the coordinates so that when we apply the rotation transformation, it's done about the center of the dataset instead of the corner. * Multiply the inverse transformation matrix by the coordinates. * Add the center back on. * Chop off the dummy 1 coordinate, reorder to ZYX, and prepend the time dimension. * Pass the list of coordinates off to ndimage.map_coordinates so it can look up actual pixel values. Thanks again for your help, everyone! -Chris On Wed, Mar 23, 2011 at 3:00 PM, Chris Weisiger wrote: > In preface, I'm not remotely an expert at array manipulation here. I'm an > experienced programmer, but not an experienced *scientific* programmer. I'm > sure what I want to do is possible, and I'm pretty certain it's even > possible to do efficiently, but figuring out the actual implementation is > giving me fits. > > I have two four-dimensional arrays of data: time, Z, Y, X. These represent > microscopy data taken of the same sample with two different cameras.
Their > views don't quite match up if you overlay them, so we have a > three-dimensional transform to align one array with the other. That > transformation consists of X, Y, and Z translations (shifts), rotation about > the Z axis, and equal scaling in X and Y -- thus, the transformation has 5 > parameters. I can perform the transformation on the data without difficulty > with ndimage.affine_transform, but because we typically have hundreds of > millions of pixels in one array, it takes a moderately long time. A > representative array would be 30x50x512x512 or thereabouts. > > I'm writing a program to allow users to adjust the transformation and see > how well-aligned the data looks from several perspectives. In addition to > the traditional XY view, we also want to show XZ and YZ views, as well as > kymographs (e.g. TX, TY, TZ views). Thus, I need to be able to show 2D > slices of the transformed data in a timely fashion. These slices are always > perpendicular to two axes (e.g. an XY slice passing through T = 0, Z = 20, > or a TZ slice passing through X = 256, Y = 256), never diagonal. It seems > like the fast way to do this would be to take each pixel in the desired > slice, apply the reverse transform, and figure out where in the original > data it came from. But I'm having trouble figuring out how to efficiently do > this. > > I could construct a 3D array with shape (length of axis 1), (length of axis > 2), (4), such that each position in the array is a 4-tuple of the > coordinates of the pixel in the desired slice. For example, if doing a YX > slice at T = 10, Z = 20, the array would look like [[[10, 20, 0, 0], [10, > 20, 1, 0], [10, 20, 2, 0], ...], [[10, 20, 0, 1], 10, 20, 1, 1], ...]]. Then > perhaps there'd be some way to efficiently apply the inverse transform to > each coordinate tuple, then using ndimage.map_coordinates to turn those into > pixel data. But I haven't managed to figure that out yet. > > By any chance is this already solved? If not, any suggestions / assistance > would be wonderful. > > -Chris > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elmiller at ece.tufts.edu Tue Mar 29 20:52:33 2011 From: elmiller at ece.tufts.edu (Eric Miller) Date: Tue, 29 Mar 2011 20:52:33 -0400 Subject: [SciPy-User] Image processing class In-Reply-To: References: <4D8CF9CE.3060800@ece.tufts.edu> Message-ID: <4D927ED1.6060804@ece.tufts.edu> Thanks Stefan That is very helpful. Best Eric On 3/29/11 6:20 PM, St?fan van der Walt wrote: > Dear Eric > > On Fri, Mar 25, 2011 at 10:23 PM, Eric Miller wrote: >> I teach a class in digital image processing in the ECE department at >> Tufts University. It has a pretty large programming component which has >> been in Matlab. I am thinking about moving to Python and was wondering >> if anyone else out there has done this? > I've taught from Gonzales& Woods in Python: > > http://dip.sun.ac.za/~stefan/TW793/ > > This was one of the reasons I started work on the Image Processing scikit: > > http://stefanv.github.com/scikits.image > > Unfortunately, that library does not contain nearly all of MATLAB's > image processing functionality yet, but we're working on it (and > welcome contributions and advice!) > > Regards > St?fan > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -- ========================================================== Prof. Eric Miller Dept. 
of Electrical and Computer Engineering Associate Dean of Research, Tufts School of Engineering Email: elmiller at ece.tufts.edu Web: http://www.ece.tufts.edu/~elmiller/elmhome/ Phone: 617.627.0835 FAX: 617.627.3220 Ground: Halligan Hall, 161 College Ave., Medford Ma, 02155 ========================================================== From henrylindsaysmith at gmail.com Wed Mar 30 13:22:08 2011 From: henrylindsaysmith at gmail.com (henry lindsay smith) Date: Wed, 30 Mar 2011 18:22:08 +0100 Subject: [SciPy-User] yulewalk or filter design by desired frequency reponse Message-ID: I'm looking to designing an iir filter to have a given frequency response. so take fft of my signal. design a filter to approximate this response. it seems that in matlab yulewalk() will do this for me. in scipy yulewalk() appears as an empty function prototype in signal.filter_design can anyone point me to a way to do this? thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevan07 at gmail.com Wed Mar 30 19:17:23 2011 From: bevan07 at gmail.com (Bevan Jenkins) Date: Wed, 30 Mar 2011 23:17:23 +0000 (UTC) Subject: [SciPy-User] Fit function to multiple datasets Message-ID: Hello, I have a function that I would like to fit to multiple datasets. The function has 4 parameters. I would like to find the best fit across all datasets for 2 (scale and shape) and I would like to fit the remaining 2 (loc and arbit_multi) to each set. I have searched the scipy and numpy mailing lists but despite finding some information I haven't managed to piece it together. The code below shows what I am currently doing. There will also be the situation where the datasets are different lengths and I would like to weight the results by the length. import numpy as np from scipy import optimize def gen_pdf(time, loc, scale, shape, arbit_multi): '''define the 3-param Weibull distn f(x) with arbitary positive multipler ''' return arbit_multi*(shape/scale)*((time-loc)/scale)**(shape-1)*np.exp(- (((time-loc)/scale)**shape)) def solve(time, est_loc, est_scale, est_shape, est_arbit_multi): return (np.log(est_arbit_multi*(est_shape/est_scale))+ (est_shape-1)*np.log((time-est_loc)/est_scale)- ((time-est_loc)/est_scale)**est_shape) def objfunc(params,time,Q): '''error func ''' return (solve(time, params[0],params[1],params[2],params[3])- np.log(Q))**2 n=30 time = np.linspace(1,n,n) a = gen_pdf(time, loc=-15.0, scale=10.0, shape=0.5, arbit_multi=100.0) b = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.5, arbit_multi=10.0) c = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.5, arbit_multi=25.0) alldata= np.array((a,b,c)) est_loc = 0.0 est_scale = 1.0 est_shape = 1.0 est_arbit_multi = 1.0 p0 = [est_loc,est_scale, est_shape, est_arbit_multi] p1, success = optimize.leastsq(objfunc, p0, args=(time, a)) print 'a ests=',p1 p1, success = optimize.leastsq(objfunc, p0, args=(time, b)) print 'b ests=',p1 p1, success = optimize.leastsq(objfunc, p0, args=(time, c)) print 'c ests=',p1 Any help would be appreciated. 
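One small point about the objective functions in this thread: optimize.leastsq squares and sums the returned vector itself, so it normally expects the raw (signed) residuals; returning residuals**2 means the solver actually minimizes fourth powers. A sketch of a single-dataset objective with an optional per-point weight (this reuses 'solve', 'time', 'a' and the starting values from the message above, and the weighting scheme is only a suggestion):

import numpy as np
from scipy import optimize

def objfunc_raw(params, time, Q, weight=1.0):
    loc, scale, shape, arbit_multi = params
    # leastsq squares these internally; sqrt(weight) gives each point weight 'weight'
    return np.sqrt(weight) * (solve(time, loc, scale, shape, arbit_multi) - np.log(Q))

p0 = [0.0, 1.0, 1.0, 1.0]
p1, success = optimize.leastsq(objfunc_raw, p0, args=(time, a))

For the shared-parameter fit discussed in the replies below, the same idea applies: stack the weighted residuals from every dataset into one vector before handing it back to leastsq.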
From david_baddeley at yahoo.com.au Wed Mar 30 20:08:35 2011 From: david_baddeley at yahoo.com.au (David Baddeley) Date: Wed, 30 Mar 2011 17:08:35 -0700 (PDT) Subject: [SciPy-User] Fit function to multiple datasets In-Reply-To: References: Message-ID: <957314.40032.qm@web113409.mail.gq1.yahoo.com> how about something like: def objFunc(params, time, Q): scale =params[0] loc = params[1] res = [] extra_params = params[2:].reshape(-1, 2) for p_n, t_n, Q_n in zip(extra_params, time, Q): res.append(solve(t_n, loc, scale, p_n[0], p_n[1]) - np.log(Q_n)) return np.hstack(res)**2 p0 = [est_loc,est_scale, est_shape_1, est_arbit_multi_1, est_shape_2, est_arbit_multi_2, .....] optimize.leastsq(objfunc, p0, args=([time_1, time_2 ....], [a_1, a_2, ...])) cheers, David ----- Original Message ---- From: Bevan Jenkins To: scipy-user at scipy.org Sent: Thu, 31 March, 2011 12:17:23 PM Subject: [SciPy-User] Fit function to multiple datasets Hello, I have a function that I would like to fit to multiple datasets. The function has 4 parameters. I would like to find the best fit across all datasets for 2 (scale and shape) and I would like to fit the remaining 2 (loc and arbit_multi) to each set. I have searched the scipy and numpy mailing lists but despite finding some information I haven't managed to piece it together. The code below shows what I am currently doing. There will also be the situation where the datasets are different lengths and I would like to weight the results by the length. import numpy as np from scipy import optimize def gen_pdf(time, loc, scale, shape, arbit_multi): '''define the 3-param Weibull distn f(x) with arbitary positive multipler ''' return arbit_multi*(shape/scale)*((time-loc)/scale)**(shape-1)*np.exp(- (((time-loc)/scale)**shape)) def solve(time, est_loc, est_scale, est_shape, est_arbit_multi): return (np.log(est_arbit_multi*(est_shape/est_scale))+ (est_shape-1)*np.log((time-est_loc)/est_scale)- ((time-est_loc)/est_scale)**est_shape) def objfunc(params,time,Q): '''error func ''' return (solve(time, params[0],params[1],params[2],params[3])- np.log(Q))**2 n=30 time = np.linspace(1,n,n) a = gen_pdf(time, loc=-15.0, scale=10.0, shape=0.5, arbit_multi=100.0) b = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.5, arbit_multi=10.0) c = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.5, arbit_multi=25.0) alldata= np.array((a,b,c)) est_loc = 0.0 est_scale = 1.0 est_shape = 1.0 est_arbit_multi = 1.0 p0 = [est_loc,est_scale, est_shape, est_arbit_multi] p1, success = optimize.leastsq(objfunc, p0, args=(time, a)) print 'a ests=',p1 p1, success = optimize.leastsq(objfunc, p0, args=(time, b)) print 'b ests=',p1 p1, success = optimize.leastsq(objfunc, p0, args=(time, c)) print 'c ests=',p1 Any help would be appreciated. _______________________________________________ SciPy-User mailing list SciPy-User at scipy.org http://mail.scipy.org/mailman/listinfo/scipy-user From bevan07 at gmail.com Wed Mar 30 21:56:22 2011 From: bevan07 at gmail.com (bevan j) Date: Wed, 30 Mar 2011 18:56:22 -0700 (PDT) Subject: [SciPy-User] [SciPy-user] Fit function to multiple datasets In-Reply-To: <957314.40032.qm@web113409.mail.gq1.yahoo.com> References: <957314.40032.qm@web113409.mail.gq1.yahoo.com> Message-ID: <31282699.post@talk.nabble.com> Thanks, your solution works. However, there are still three things (at least...) that I need to get my head around: - a quick test in which I vary the shape parameter (in one dataset) does not give me sensible answers. 
I have been more concerned with getting this working than thinking about the actual answer. - also for any given site, I will have an unknown ( in advance) number of datasets but I should just be able to put them all into one array to pass to the function and iterate through the array. - how to weight the answer based on the length of each data set. Thanks again, Bevan code at present: import numpy as np from scipy import optimize def gen_pdf(time, loc, scale, shape, arbit_multi): '''define the 3-param Weibull distn f(x) with arbitary positive multipler ''' return arbit_multi*(shape/scale)*((time-loc)/scale)**(shape-1)*np.exp(-(((time-loc)/scale)**shape)) def solve(time, est_loc, est_scale, est_shape, est_arbit_multi): return (np.log(est_arbit_multi*(est_shape/est_scale))+ (est_shape-1)*np.log((time-est_loc)/est_scale)- ((time-est_loc)/est_scale)**est_shape) def objFunc(params, time, Q): scale =params[0] shape = params[1] res = [] extra_params = params[2:].reshape(-1, 2) for p_n, t_n, Q_n in zip(extra_params, time, Q): res.append(solve(t_n, p_n[0], scale, shape, p_n[1]) - np.log(Q_n)) return np.hstack(res)**2 n=30 time = np.linspace(1,n,n) time_3 = time_1 = time_2 = time a = gen_pdf(time, loc=-15.0, scale=10.0, shape=0.5, arbit_multi=100.0) b = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.8, arbit_multi=10.0) c = gen_pdf(time, loc=-10.0, scale=10.0, shape=0.5, arbit_multi=25.0) alldata= np.array((a,b,c)) est_loc = 0.0 est_scale = 1.0 est_shape = 1.0 est_arbit_multi = 1.0 est_loc_1 = est_loc_2 = est_loc_3 = est_loc est_arbit_multi_1 = est_arbit_multi_2 = est_arbit_multi_3 = est_arbit_multi p0 = [est_scale, est_shape, est_loc_1, est_arbit_multi_1, est_loc_2, est_arbit_multi_2, est_loc_3, est_arbit_multi_3] p1 = optimize.leastsq(objFunc, p0, args=([time_1, time_2,time_3], [a,b,c])) print p1 David Baddeley wrote: > > how about something like: > > def objFunc(params, time, Q): > scale =params[0] > loc = params[1] > > res = [] > > extra_params = params[2:].reshape(-1, 2) > for p_n, t_n, Q_n in zip(extra_params, time, Q): > res.append(solve(t_n, loc, scale, p_n[0], p_n[1]) - np.log(Q_n)) > > return np.hstack(res)**2 > > p0 = [est_loc,est_scale, est_shape_1, est_arbit_multi_1, est_shape_2, > est_arbit_multi_2, .....] > optimize.leastsq(objfunc, p0, args=([time_1, time_2 ....], [a_1, a_2, > ...])) > > > cheers, > David > > -- View this message in context: http://old.nabble.com/Fit-function-to-multiple-datasets-tp31282087p31282699.html Sent from the Scipy-User mailing list archive at Nabble.com. From robert.kern at gmail.com Wed Mar 30 23:00:18 2011 From: robert.kern at gmail.com (Robert Kern) Date: Wed, 30 Mar 2011 22:00:18 -0500 Subject: [SciPy-User] [SciPy-user] Fit function to multiple datasets In-Reply-To: <31282699.post@talk.nabble.com> References: <957314.40032.qm@web113409.mail.gq1.yahoo.com> <31282699.post@talk.nabble.com> Message-ID: On Wed, Mar 30, 2011 at 20:56, bevan j wrote: > > Thanks, your solution works. > > However, there are still three things (at least...) that I need to get my > head around: > > - a quick test in which I vary the shape parameter (in one dataset) does not > give me sensible answers. ?I have been more concerned with getting this > working than thinking about the actual answer. > > - also for any given site, I will have an unknown ( in advance) number of > datasets but I should just be able to put them all into one array to pass to > the function and iterate through the array. Yes. > - how to weight the answer based on the length of each data set. 
It's already weighted by the length of each data set. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." ? -- Umberto Eco From cournape at gmail.com Thu Mar 31 05:19:39 2011 From: cournape at gmail.com (David Cournapeau) Date: Thu, 31 Mar 2011 18:19:39 +0900 Subject: [SciPy-User] [Numpy-discussion] ANN: Numpy 1.6.0 beta 1 In-Reply-To: References: Message-ID: On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen wrote: > In article > , > ?Ralf Gommers wrote: > >> Hi, >> >> I am pleased to announce the availability of the first beta of NumPy >> 1.6.0. Due to the extensive changes in the Numpy core for this >> release, the beta testing phase will last at least one month. Please >> test this beta and report any problems on the Numpy mailing list. >> >> Sources and binaries can be found at: >> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/ >> For (preliminary) release notes see below. I see a segfault on Ubuntu 64 bits for the test TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py. Am I the only one seeing it ? David From pearu.peterson at gmail.com Thu Mar 31 05:37:24 2011 From: pearu.peterson at gmail.com (Pearu Peterson) Date: Thu, 31 Mar 2011 12:37:24 +0300 Subject: [SciPy-User] [SciPy-Dev] [Numpy-discussion] ANN: Numpy 1.6.0 beta 1 In-Reply-To: References: Message-ID: On Thu, Mar 31, 2011 at 12:19 PM, David Cournapeau wrote: > On Wed, Mar 30, 2011 at 7:22 AM, Russell E. Owen wrote: > > In article > > , > > Ralf Gommers wrote: > > > >> Hi, > >> > >> I am pleased to announce the availability of the first beta of NumPy > >> 1.6.0. Due to the extensive changes in the Numpy core for this > >> release, the beta testing phase will last at least one month. Please > >> test this beta and report any problems on the Numpy mailing list. > >> > >> Sources and binaries can be found at: > >> http://sourceforge.net/projects/numpy/files/NumPy/1.6.0b1/ > >> For (preliminary) release notes see below. > > I see a segfault on Ubuntu 64 bits for the test > TestAssumedShapeSumExample in numpy/f2py/tests/test_assumed_shape.py. > Am I the only one seeing it ? > > The test work here ok on Ubuntu 64 with numpy master. Could you try the maintenance/1.6.x branch where the related bugs are fixed. Pearu -------------- next part -------------- An HTML attachment was scrubbed... URL: From sturla at molden.no Thu Mar 31 06:45:40 2011 From: sturla at molden.no (Sturla Molden) Date: Thu, 31 Mar 2011 12:45:40 +0200 Subject: [SciPy-User] yulewalk or filter design by desired frequency reponse In-Reply-To: References: Message-ID: <4D945B54.20009@molden.no> If you filter a Dirac delta function, you get the impulse response, from which the frequency response can be computed by the rfft. That will give you the residuals between the desired and realized frequency response. You can then use scipy.optimize.leastsq (Levenberg-Marquardt) to fit the IIR coefficients. The same strategy can also be used in the time-domain to approximate a template impulse response. Sturla Den 30.03.2011 19:22, skrev henry lindsay smith: > I'm looking to designing an iir filter to have a given frequency > response. so take fft of my signal. design a filter to approximate > this response. > > it seems that in matlab yulewalk() will do this for me. in scipy > yulewalk() appears as an empty function prototype in signal.filter_design > > can anyone point me to a way to do this? 
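A bare-bones version of what Sturla outlines -- filter an impulse, take the rfft, and let leastsq adjust the coefficients -- might look like the following. The target response here is a made-up placeholder, the filter orders are arbitrary, and nothing constrains the fitted filter to be stable, so treat it as a sketch of the idea rather than a yulewalk replacement:

import numpy as np
from scipy.signal import lfilter
from scipy.optimize import leastsq

n_fft = 256
w = np.linspace(0.0, np.pi, n_fft // 2 + 1)      # frequencies of the rfft bins
desired = 1.0 / (1.0 + (w / 0.3) ** 4)           # toy target magnitude response
nb, na = 4, 4                                    # numerator / denominator orders

def residuals(params):
    b = params[:nb + 1]
    a = np.r_[1.0, params[nb + 1:]]
    impulse = np.zeros(n_fft)
    impulse[0] = 1.0
    h = lfilter(b, a, impulse)                   # impulse response of the candidate IIR
    H = np.fft.rfft(h)                           # realized frequency response
    return np.abs(H) - desired                   # magnitude error at each frequency

p0 = np.r_[0.1 * np.ones(nb + 1), np.zeros(na)]
p_opt, ier = leastsq(residuals, p0)
b_fit, a_fit = p_opt[:nb + 1], np.r_[1.0, p_opt[nb + 1:]]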
> > thanks > > > > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user -------------- next part -------------- An HTML attachment was scrubbed... URL: From yury at shurup.com Thu Mar 31 14:21:42 2011 From: yury at shurup.com (Yury V. Zaytsev) Date: Thu, 31 Mar 2011 20:21:42 +0200 Subject: [SciPy-User] Normalization of window functions (documentation suggestion) Message-ID: <1301595702.6965.71.camel@mypride> Hi! In the documentation of the return values of the window functions in NumPy, such as: bartlett(M) blackman(M) hamming(M) hanning(M) kaiser(M, beta) it always says something like "The window, normalized to one". I find this slightly confusing, because I would actually read that as if the values were normalized such that the area under the curve is one. I think it would make sense to change it to something like "The window, with the maximum value normalized to one" to state it more explicitly. Thanks! -- Sincerely yours, Yury V. Zaytsev From brockp at umich.edu Thu Mar 31 23:46:04 2011 From: brockp at umich.edu (Brock Palen) Date: Thu, 31 Mar 2011 23:46:04 -0400 Subject: [SciPy-User] scipy On HPC Podcast Message-ID: <2825DD30-8236-463F-BF9A-681C71CD5F47@umich.edu> We host an HPC focused podcast (www.rce-cast.com) We recently had NumPy on the show: http://www.rce-cast.com/Podcast/rce-48-numpy.html Now we would like to dig into the details of SciPy. It only takes about an hour over the phone and is friendly and informative. We would like a SciPy dev or two to be on the show, feel free to contact me off list. Brock Palen www.umich.edu/~brockp Center for Advanced Computing brockp at umich.edu (734)936-1985 From luis at luispedro.org Fri Mar 25 21:50:14 2011 From: luis at luispedro.org (Luis Pedro Coelho) Date: Sat, 26 Mar 2011 01:50:14 -0000 Subject: [SciPy-User] Image processing class In-Reply-To: <4D8CF9CE.3060800@ece.tufts.edu> References: <4D8CF9CE.3060800@ece.tufts.edu> Message-ID: <201103252150.29473.luis@luispedro.org> Hello, I don't know exactly what you are looking for, but I'll give you my $.02: A while back I tried to build up a website about computer vision and python: http://pythonvision.org I ran out of steam a bit, but I might pick it up again (it's an open project, you can fork it on github https://github.com/luispedro/pythonvision_org to add content). In general, Python is on par with matlab for image processing, but it is a bit more scattered, as there are a few relevant packages instead of having it all in one. I'd list mahotas and scikits.image as the two most relevant. We have a mailing list too: http://groups.google.com/group/pythonvision HTH, Luis On Friday, March 25, 2011 04:23:42 pm Eric Miller wrote: > Hello, > > I teach a class in digital image processing in the ECE department at > Tufts University. It has a pretty large programming component which has > been in Matlab. I am thinking about moving to Python and was wondering > if anyone else out there has done this? > > Thanks > > Eric -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. 
URL: From luis at luispedro.org Sun Mar 27 12:53:54 2011 From: luis at luispedro.org (Luis Pedro Coelho) Date: Sun, 27 Mar 2011 16:53:54 -0000 Subject: [SciPy-User] Image processing class Message-ID: <201103271254.14944.luis@luispedro.org> Hello, I don't know exactly what you are looking for, but I'll give you my $.02: A while back I tried to build up a website about computer vision and python: http://pythonvision.org I ran out of steam a bit, but I might pick it up again (it's an open project, you can fork it on github https://github.com/luispedro/pythonvision_org to add content). In general, Python is on par with matlab for image processing, but it is a bit more scattered, as there are a few relevant packages instead of having it all in one. I'd list mahotas and scikits.image as the two most relevant. We have a mailing list too: http://groups.google.com/group/pythonvision HTH, Luis On Friday, March 25, 2011 04:23:42 pm Eric Miller wrote: > Hello, > > I teach a class in digital image processing in the ECE department at > Tufts University. It has a pretty large programming component which has > been in Matlab. I am thinking about moving to Python and was wondering > if anyone else out there has done this? > > Thanks > > Eric From rafiki38 at hotmail.com Thu Mar 24 05:03:07 2011 From: rafiki38 at hotmail.com (rafiki38) Date: Thu, 24 Mar 2011 09:03:07 -0000 Subject: [SciPy-User] [SciPy-user] Using Pyinstaller or cx_freeze with scipy In-Reply-To: References: Message-ID: <31227146.post@talk.nabble.com> Hi Craig, I am wondering whether you were able to solve your problem, since I have exactly the same one! (under linux platform) ----------------------------------------------- Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/cx_Freeze/initscripts/Console.py", line 29, in exec code in m.__dict__ File "at.py", line 12, in from scipy.signal import butter File "/usr/lib/python2.6/dist-packages/scipy/signal/__init__.py", line 11, in from ltisys import * File "/usr/lib/python2.6/dist-packages/scipy/signal/ltisys.py", line 9, in import scipy.interpolate as interpolate File "/usr/lib/python2.6/dist-packages/scipy/interpolate/__init__.py", line 15, in from polyint import * File "/usr/lib/python2.6/dist-packages/scipy/interpolate/polyint.py", line 2, in from scipy import factorial ImportError: cannot import name factorial ----------------------------------------------------- Thanks a lot! Rafael Craig Howard-2 wrote: > > Hello: > > I was attempting to use cx_freeze to create a stand-alone application > that uses scipy, but ran into a problem. When I attempt to run the > frozen application, I get the error: > scipy\interpolate\polyint.py > Cannot import name factorial > > I looked at the scipy package and the factorial function is in > scipy.misc.common.py but the file scipy.interpolate.polyint.py has the > line: > from scipy import factorial > > So the polyint script is looking for the factorial function in some > other place than its actual physical location. I guess scipy somehow > inserts the factorial function in the correct namespace so that it > runs correctly. Is there a way to get scipy and cx_freeze to get > along? Thanks for your help in advance. 
> > Regards, > Craig Howard > _______________________________________________ > SciPy-User mailing list > SciPy-User at scipy.org > http://mail.scipy.org/mailman/listinfo/scipy-user > > -- View this message in context: http://old.nabble.com/Using-Pyinstaller-or-cx_freeze-with-scipy-tp27786679p31227146.html Sent from the Scipy-User mailing list archive at Nabble.com. From ghostility at gmail.com Tue Mar 29 20:31:15 2011 From: ghostility at gmail.com (Ghostly) Date: Wed, 30 Mar 2011 02:31:15 +0200 Subject: [SciPy-User] Fw: Connecting Windows PCs for parallel computing with iPython? Message-ID: <20110330023112.043A.5FE01CF9@gmail.com> Forwarded by Ghostly ----------------------- Original Message ----------------------- From: Ghostly To: ipython-user at scipy.org Date: Tue, 29 Mar 2011 06:36:24 +0200 Subject: Connecting Windows PCs for parallel computing with iPython? ---- Hi, I really like iPython - been using it for couple of weeks, and I feel great by it's interaction features. I'm thinking about my meteorology graduate thesis (which will be couple of months from now) and wanted to try with Python/Scipy instead FORTRAN or Matlab/Octave I was thinking about simple one-column model (perhaps importing already known FORTRAN programs), and I want to be able to demonstrate parallel computing with iPython. I've read some articles and watched some on-line courses, but I'm still not sure how (or if it's possible) to connect 4 Windows PCs for this task. I think I've read somewhere (not sure where) that maybe it's not supported for Windows but only Unix-like systems. Also I would like to know this away from the theses, but I wanted to provide brief introduction. Thanks in advance --------------------- Original Message Ends -------------------- Hi, I posted yesterday above forwarded message to IPython mailing list, but the message never showed there, so I thought to try this mailing list and possibly get some advice. Thanks From dirknbr at gmail.com Thu Mar 31 08:54:15 2011 From: dirknbr at gmail.com (Dirk Nachbar) Date: Thu, 31 Mar 2011 05:54:15 -0700 (PDT) Subject: [SciPy-User] fmin_bfgs Message-ID: I am using fmin_bfgs to minimise a function, I put a print in the function so I can see the steps. It does show me that the value goes down but then goes back to the starting value and shows that it is optimised. But from the print I know it's not the true min, how can I make the function stop at the right min. I am not specifying a fprime function. Dirk
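Regarding the fmin_bfgs question above: one way to see which point the optimizer is actually accepting, and to keep the best value it ever visits, is to pass a callback; tightening gtol (and, when no fprime is given, the finite-difference step epsilon) is also worth trying. The objective below is only a stand-in, since the real function isn't shown in the message:

import numpy as np
from scipy.optimize import fmin_bfgs

def f(x):
    # stand-in objective (Rosenbrock); replace with the real function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

best = {'x': None, 'fval': np.inf}

def track(xk):
    # called once per iteration; remember the best point seen so far
    fk = f(xk)
    if fk < best['fval']:
        best['x'], best['fval'] = xk.copy(), fk

xopt = fmin_bfgs(f, np.array([-1.2, 1.0]), callback=track, gtol=1e-8)
print(xopt, best['x'], best['fval'])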