> Ideas I've tossed around include creating an interface similar to the one
> surface objects have for audio buffers, adding adaptors for PIL and PST,
> creating an overly complex system for 'compiling' surface operations for speed,
> and some other things.
> I don't know. So I'm asking. What is it you want?
interfacing with PIL should be pretty easy with the "fromstring" and
"tostring" style functions that PIL uses. Numpy interfaces with PIL
in this manner. (in fact with my unofficial numpy extension, you can
already use numpy to get the images into PIL (or so i assume, have
yet to test something like that :] ))
i also like the idea of easier integration with "C" extensions,
but this will prove a bigger challenge than breaking up the
source, i'd imagine. (although it will likely require it, since
currently, all the global objects are created in the header file)
well, you asked for it...
here's a sampling of things i'd like to see for pysdl
first is throwing exceptions instead of returning negative error codes.
now that i'm into python, i love the flexibility of exceptions. instead
of checking every function call for its return code, you just call
all the functions and put the error handling at the end
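a minimal sketch of the pattern (the SDLError class and checked() helper here are hypothetical illustrations, not existing pysdl API):

```python
class SDLError(Exception):
    """hypothetical exception type a binding might raise"""

def checked(ret, what):
    # translate a C-style negative return code into an exception
    if ret < 0:
        raise SDLError("%s failed (code %d)" % (what, ret))
    return ret

# with exceptions, several calls can share one handler:
try:
    checked(0, "init")
    checked(0, "set_mode")
    checked(-1, "flip")
except SDLError as err:
    print("caught: %s" % err)
```

the point being that the happy path reads straight through, and the error handling sits in one place at the end.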
a wrapper for some line/polygon filling library. i assume SDL has
something like this already, but every now and then i keep thinking
i'll want this. i'd especially like this if it was done on a library
tuned for SDL, so i can get fast-as-possible-filled-polygons
break the sourcecode into smaller files. :]
actually i really do want this one. i would love to hear a discussion
of the best possible ways to get this done. i've done this in a
reasonably clean and efficient way. it isn't thoroughly planned out
but i was pleased with how it turned out. if it can be used as a
starting point for discussion it will have served its purpose!
numeric python implementation. i've got a crude sample of this
going. (the other) peter and i were able to attain some amazing
speed gains. it's not for everyone (and i think well discussed :] )
but it beats the pants off trying to drum up a C extension to do
basic image operations.
anyways, those speedups we saw were on pretty basic operations.
things like a 640x480 radial gradient went from about 20 seconds
to under 2 seconds (and that was still with python doing the color
conversions, and a generic non-optimized numpy-2-sdl transfer).
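for a feel of where that speedup comes from, here's a sketch of a vectorized radial gradient with no per-pixel python loop (written in modern numpy syntax; the Numeric-era calls were analogous):

```python
import numpy as np

def radial_gradient(w, h):
    # distance of every pixel from the image center, computed all at once
    y, x = np.mgrid[0:h, 0:w]
    d = np.sqrt((x - w / 2.0) ** 2 + (y - h / 2.0) ** 2)
    # scale to an 8-bit grayscale range
    return (255.0 * d / d.max()).astype(np.uint8)

g = radial_gradient(640, 480)   # shape (480, 640)
```

the per-pixel work happens inside the C loops of the array library instead of the python interpreter, which is where the order-of-magnitude gain lives.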
i also have my basic "fire" demo running under numpy; it would
be foolhardy to attempt without numpy
cleanup of the "event" handling routines. currently i think it's
too much "C" and too little "python". i've thought out a couple
simple changes that make it much smaller and more graceful than
event handling with C-SDL
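one possible shape for this, sketched with made-up names (nothing here is existing pysdl API): a dictionary dispatch instead of a C-style switch on the event type:

```python
class EventDispatcher:
    """hypothetical sketch: map event types to python callables"""
    def __init__(self):
        self.handlers = {}

    def bind(self, event_type, func):
        self.handlers[event_type] = func

    def dispatch(self, event):
        # unbound event types fall through to a no-op
        self.handlers.get(event.type, lambda e: None)(event)
```

subclassing or rebinding handlers at runtime comes for free, which is the "more python, less C" i mean.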
some higher-level classes written in python. simple base classes
for things like sprites, eventhandlers (i'm enjoying mine!).
the reason these should be in python to begin with is so they can
be easily inherited and extended.
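a tiny sketch of what such a base class might look like (all names illustrative):

```python
class Sprite:
    """minimal base class; subclasses override update/draw"""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def update(self, dt):
        pass

    def draw(self, screen):
        pass

class Mover(Sprite):
    """example subclass: constant horizontal velocity"""
    def __init__(self, vx):
        Sprite.__init__(self)
        self.vx = vx

    def update(self, dt):
        self.x += self.vx * dt
```

because the base class is plain python, a game can override exactly the pieces it cares about.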
i still haven't got to using the sound routines in pysdl, but
it seems like a much cleaner implementation could be made from
what there is in C-SDL (and we've currently duplicated for pysdl).
instead of the current "audio.play(mysound)" something more like
"mysound.play()". i realize there are some issues involved, and
i haven't really looked at the audio interfaces (yet), but from
what i currently see it seems a little backwards
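the change could be as thin as a wrapper class; here's a sketch where mixer_play stands in for whatever the real module-level call turns out to be:

```python
def mixer_play(samples):
    # placeholder for the real module-level audio.play(...) backend
    return len(samples)

class Sound:
    """wrap a sample buffer so playback hangs off the object"""
    def __init__(self, samples):
        self.samples = samples

    def play(self):
        return mixer_play(self.samples)
```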
heh, this one just occurred to me, but check with PERL and
other SDL language bindings to make sure we're not missing
out on any great ideas
at least give some thought to switching to distutils. i've
scanned through their docs and examples, and it seems powerful.
perhaps best to wait for the next python release which includes
distutils standard. but i would love to hear anyone's experience
if they've maintained any packages with this.
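for reference, a minimal distutils setup script for a package with one C extension looks roughly like this (the package and file names here are made up):

```python
# minimal distutils setup sketch; names are illustrative
from distutils.core import setup, Extension

setup(
    name="pysdl",
    version="0.1",
    py_modules=["pysdl_events"],
    ext_modules=[Extension("pysdl_core", ["pysdl_core.c"])],
)
```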
this will probably just have to wait until someone out there
needs something like this and just goes ahead and creates it.
but to help a pysdl game rely on 3rd-party extensions written
in python there could be some prebuilt "rexec" modes that
can run python code in a 'gaming-oriented' restricted environment.
i dunno, it's just something that pops into my head once in awhile
i looked into the SDL_net library to see if it was worth including
with pysdl. after checking it all out i saw nothing that wasn't
offered by the standard python "socket" libraries. i'd say this
library is worth ignoring, until someone can bring forth some
facts that i overlooked
finally, i'll end with a simple one. just a global pysdl function
to get the current screen. when writing my game between multiple
modules i'm always having to pass the screen (and other info)
around between all the modules. there may be other pysdl
"state" info like this that is useful to be able to access from
any of the modules in my game (i just haven't found em yet)
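a sketch of what that accessor could look like (set_screen/get_screen are made-up names, not existing pysdl calls):

```python
# hypothetical module-level "current screen" registry
_current_screen = None

def set_screen(screen):
    global _current_screen
    _current_screen = screen

def get_screen():
    return _current_screen
```

any module in the game could then import this one module instead of threading the screen through every function signature.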
i like peter nicolai's idea for the web plugin. but i fear that
SDL is not the base library of choice for this type of project,
since SDL has little control over the display window.
(ie, impossible to "embed" into other windows)
as for other platforms, i have done limited testing on IRIX
which has worked great for me. (IRIX being another platform
that SDL has recently supported)
> A side note to Pete: I haven't been ignoring you, I've just been
> trying to decide what I think of it before giving you a response
no trouble. i've decided to just start writing stuff and deal
with issues as they come up. i figure with this approach there are
two benefits. first, i'll have real-world experience to make my
"suggestions" a lot more worthwhile. second, there is a top
notch pysdl game out there!
I just wanted to let you all know that I have made a first hack at integrating
Numpy with GDAL (my geospatial raster access library), with the purpose of
integrating with OpenEV (a geospatial data viewing application). The work is
still relatively preliminary, but might be of interest to some.
OpenEV and GDAL are raster oriented, and will only be of interest to those
working with 2D matrices and an interest in treating them as rasters. A few
notes of interest.
o OpenEV and GDAL support a variety of data types. Unlike most raster
packages they can support UnsignedInt8, Int16, Int32, Float32, Float64,
ComplexInt16, ComplexInt32, ComplexFloat32 and ComplexFloat64 data types.
o OpenEV will scale for display, but will also show the real underlying data
  values. It also includes special display modes for complex rasters to show phase,
  magnitude, phase & magnitude, real and imaginary views of complex layers.
o GDAL can be used to save real, and complex data to TIFF (and a few other
less well known formats), as well as loading various raster formats common
in remote sensing and GIS.
The following is a minimal example of using GDAL and numpy together:
    from Numeric import *
    import gdalnumeric

    x = gdalnumeric.LoadFile( "/u/data/png/guy.png" )
    x = 255.0 - x                                   # invert pixel values
    gdalnumeric.SaveArray( x, "out.tif", "GTiff" )
More information is available at:
Finally, a thanks to those who have developed and maintained Numeric Python.
It is a great package, and I look forward to using it more.
I set the clouds in motion - turn up | Frank Warmerdam, warmerda(a)home.com
light and sound - activate the windows | http://members.home.com/warmerda
and watch the world go round - Rush | Geospatial Programmer for Rent
I accidentally deleted your original message, but found it again in the
Numerical list archives. I believe that there are free software packages
that would do what you need. For example, look at Maxima which is under
the GPL now (free'd version of Macsyma). If you are non-commercial, you
can get a free version of MuPad. Also, there are number theory packages
available that can probably do polynomial factorization. I suggest that
you look at the Scientific Applications for Linux page:
Note that a lot of the software there can also be run on other platforms
including both UNIX and Windows. You can even search for "polynomial
factorization" and get a number of hits.
Syrus C Nemat-Nasser, PhD | Center of Excellence for Advanced Materials
UCSD Department of Physics | UCSD Department of Mechanical
<syrus(a)ucsd.edu> | and Aerospace Engineering
Hi - I need to do prime factorization of some polynomials with integer
coefficients. It would seem the Berlekamp algorithm would be what I
want but I also admit to almost total ignorance on the subject. I'm
having trouble even understanding how the algorithm works so I'm
probably not the best person to be implementing it. But I may give it
a try.
Is anyone already working on a Python implementation of this? Or any
tips on how to proceed? Pointers to other freeware solutions? I
prefer something that can be re-written in or modularized into Python
but even if not, that's no big deal.
The alternative is to purchase one of the symbolic algebra packages
like Maple or Mathematica etc.
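Not Berlekamp (which works over finite fields and handles the higher-degree
factors), but for a feel of the easy part of the problem, here is a
pure-Python sketch that peels off linear integer factors via the rational
root test:

```python
def synth_div(coeffs, r):
    # synthetic division of a polynomial (highest degree first) by (x - r)
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]          # quotient, remainder

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def linear_factors(coeffs):
    # any integer root must divide the constant term (rational root theorem);
    # peel off a factor (x - r) for each root found
    roots = []
    found = True
    while len(coeffs) > 1 and found:
        found = False
        const = coeffs[-1]
        cands = [0] if const == 0 else \
                [s * d for d in divisors(const) for s in (1, -1)]
        for r in cands:
            q, rem = synth_div(coeffs, r)
            if rem == 0:
                roots.append(r)
                coeffs = q
                found = True
                break
    return roots, coeffs              # integer roots, leftover polynomial

# x**2 - 1 factors as (x - 1)(x + 1)
roots, rest = linear_factors([1, 0, -1])
```

The hard part Berlekamp addresses is factoring what is left over when no
linear factors remain.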
Judah Milgram milgram(a)cgpp.com
P.O. Box 8376, Langley Park, MD 20787
(301) 422-4626 (-3047 fax)
I tried updating my distutils and numpy from their CVS sites last night to
try building, and found the following problems. Distutils from
:pserver:firstname.lastname@example.org:/projects/cvsroot has a most recent
tag of Distutils-0_8_2. Where can I get CVS access to Distutils 0.9, or is
this a tar-only distribution?
On a related note, the cvs of Numerical Python from SourceForge has a most
recent tag of V15_2. If I get the most recent (untagged) version of NumPy,
there is no lapack_lite_library directory, so "setup.py install" fails.
Does this mean that Numerical Python 15.3 has not yet been checked into the
repository? It's a bit confusing when the CVS repositories are not up to date
with the most recent releases. Could someone clarify for me, please?
i'm developing with some Numeric stuff and don't have
a solid grasp on what the 'SAVEBIT' stuff is. it's not
mentioned in the docs at all.
what's also strange is that the numpy testing "test_all.py"
fails in the final sections when testing the savebit flag.
can someone give a quick description of what this flag
can be used for?
also... i assume someone already knows about the error in
the testing? i ran it on both a MIPS5000 IRIX system and
an x86 win98 system and it errored out consistently on
the test line 506.
I checked in a new version of setup.py for Numerical that corresponds to
Distutils-0.9. This is a modification of the setup_numpy.py that Greg has in
This version REQUIRES the new Distutils. If you install Distutils into
1.6a2, remember (which I didn't at first) to go delete the distutils
directory in the Python library directory. If you don't, you will get a
message while running setup informing you that your Distutils must be updated.
This CVS version also separates out LAPACK/BLAS so that using the "lite"
version of the libraries supplied with numpy is now optional. A small
attempt is made to find the library or the user can edit setup.py to set the
locations if that fails.
I have not tested this on Windows and I would bet it needs help; we probably
won't cut a new Numerical release until this is resolved and Python 2.0 is
out so that the needed version of Distutils is standard. Suggestions for
improvements would be most welcome.
i've been trying my hand at getting more speed out of
numpy, and my efforts have borne a little fruit. my area
of use is specifically 2D arrays with image info.
anyways, i've attached an 'arrayobject.c' file that is
based on the 15.3 release, with my optimizations.
in my test case the code ran at about twice the speed of
the original 15.3 release. i went and tested other
uses and found a pretty consistent 20% speedup.
(for example, i cranked the mandelbrot demo resolution
to 320 x 200, removed the 'print' command, and the
runtime went from 5.5 seconds to 4.5)
i'm not sure how people 'officially' make contributions
to the code, but i hope this is easy enough to merge. i
also hope this is accepted (or at least reviewed) for
inclusion in the next release.
i also plan on a few more optimizations. the simplest is going
to be a 'C' version of 'arrayrange' and probably 'ones'. the
current arrayrange is pretty slow (slower than the standard
python 'range' in all my tests).
the other optimization is a bit more drastic, and i'd like
to hear feedback from more 'numpy experts' before making the
change. in the file 'arraytypes.c', with all the arrays of conversion
functions, i've found that the conversion routines are a little
too 'elaborate'. these routines are only ever called from one line,
and the two "increment/skip" arguments are always hardcoded to one.
there are two possible roads to speeding up the conversion of arrays:
1-- optimize all the conversion routines so they aren't so generic.
this should be a pretty easy fix and should offer noticeable speed.
2-- do a better job of converting arrays. instead of creating a
whole new array of the new type and simply copying that, create
a conversion method that simply converts the data directly into
the destination array. this would mean using all those conversion
routines to their full power. this would offer more speed than
the first option, but is also a lot more work
well, what do people think? my initial thought is to make a
quick python script to take 'arraytypes.c' and rewrite all the
conversion functions as quicker versions.
numpy is amazing, and i'm glad to get a chance to better it!
I have made even more changes to Numeric this morning, separating off FFT and
MA as separate packages and adding the package RNG.
I found an error in the previous setup.py; it was installing headers in
include/python1.6/Numerical instead of Numeric. This apparently gets fixed if
you change the name of the package (which I otherwise thought didn't do
anything).
Hm. I found a bug in one of my programs that was due to the difference in
behavior between a 'f' shape () array and a true python scalar:
    for i in range(50):
        print nb.shape # Expect (50,) but get (50,1)
BTW: Why does "a=Numeric.zeros((50,),'f'); d=a[i]" return a python scalar, and
the above script a shape () array?
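The asymmetry is easy to reproduce (shown here in modern numpy syntax, where
single-element indexing yields a 0-d result and a length-one slice yields
shape (1,)):

```python
import numpy as np

a = np.zeros((50,), 'f')
x = a[3]       # single element: 0-d result, shape ()
s = a[3:4]     # slice of length one: shape (1,)
```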
===== rob(a)hooft.net http://www.hooft.net/people/rob/ =====
===== R&D, Nonius BV, Delft http://www.nonius.nl/ =====
===== PGPid 0xFA19277D ========================== Use Linux! =========