From dubois1@llnl.gov  Mon Feb  1 05:04:45 1999
From: dubois1@llnl.gov (Paul F. Dubois)
Date: Sun, 31 Jan 1999 21:04:45 -0800
Subject: [Matrix-SIG] Re:  Why is there fast_umathmodule.c and umathmodule.c ?
Message-ID: <001201be4da0$62d22d40$f4160218@c1004579-c.plstn1.sfba.home.com>

The intention was to have the difference that was mentioned, but it was never
implemented. You don't normally control the behavior of the floating-point
unit that way anyway, and unfortunately there is no standard for doing it.

I plan to simply remove fast_umathmodule at the next release unless I get
convinced otherwise.
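
For reference, the difference that was intended (but, as noted above, never
implemented) would have looked roughly like the sketch below.  Both modules
ship with Numeric; the contrasting overflow behaviour in the comments is the
stated intent only, not what the released code actually does.

    # A sketch of the intended contrast only -- not released behaviour.
    from Numeric import array
    import umath, fast_umath

    x = array([1.0e300])
    umath.multiply(x, x)       # intended: raise an exception on overflow
    fast_umath.multiply(x, x)  # intended: quietly return Inf/NaN instead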

-----Original Message-----
From: Travis E. Oliphant <Oliphant.Travis@mayo.edu>
To: Alan Grossfield <alan@groucho.med.jhmi.edu>; matrix-sig@python.org
<matrix-sig@python.org>
Date: Saturday, January 30, 1999 1:56 PM
Subject: [Matrix-SIG] Re: Why is there fast_umathmodule.c and umathmodule.c
?


>>
>> :I have been looking at and using the source from the umathmodules and
>> :noticed that fast_umathmodule and umathmodule are nearly identical.
>> :Perhaps somebody could tell me why there are two different versions of
>> :basically the same code?
>> IIRC, umath raises exceptions on overflows etc.; fast_umath just returns
>> NaN.  It's a speed vs. safety trade-off.
>
>Thanks for the quick response.  I see the intention now.  I'm a bit
>confused about how this is done, though, since the latest release of Numeric
>shows only the following differences.
>[olipt@amylou Src]$ diff fast_umathmodule.c umathmodule.c
>6d5
><
>1982c1981
>< void initfast_umath() {
>---
>> void initumath() {
>1986c1985
><   m = Py_InitModule("fast_umath", methods);
>---
>>   m = Py_InitModule("umath", methods);
>
>These seem like only superficial differences.  I don't see how the two
>modules can be functionally different here.
>
>_______________________________________________
>Matrix-SIG maillist  -  Matrix-SIG@python.org
>http://www.python.org/mailman/listinfo/matrix-sig
>
>



From Oliphant.Travis@mayo.edu  Mon Feb  1 04:36:28 1999
From: Oliphant.Travis@mayo.edu (Travis E. Oliphant)
Date: Sun, 31 Jan 1999 22:36:28 -0600
Subject: [Matrix-SIG] Typo in multiarraymodule.c
Message-ID: <36B52F4C.98C69049@mayo.edu>


I think I found a typo in multiarraymodule.c in the latest 1.9 release
that messes up the functionality of convolve.

Try convolve([1,2],[1,2,3],2) and convolve([1,2,3],[1,2],2).  These
should give the same result but they don't in my NumPy.
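
Once the fix is in, this is easy to check interactively; a small sketch
using the same example (mode 2 is the 'full' mode, and convolution is
commutative):

    from Numeric import convolve
    print(convolve([1, 2], [1, 2, 3], 2))   # [1 4 7 6]
    print(convolve([1, 2, 3], [1, 2], 2))   # should match once the typo is fixed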

The fix is simple (change a 1 to an i) and a one-line diff is attached.

-Travis


-- 

----------------------------------------------------
Travis Oliphant            200 First St SW          
    	                   Rochester MN 55905       
Ultrasound Research Lab	   (507) 286-5923           
Mayo Graduate School	   Oliphant.Travis@mayo.edu
[Attachment: multidiff]

692,693c692
< 	if (n1 < n2) { ret = ap1; ap1 = ap2; ap2 = ret; ret = NULL; i = n1;n1=n2;n2=i;}
< 
---
> 	if (n1 < n2) { ret = ap1; ap1 = ap2; ap2 = ret; ret = NULL; i = n1;n1=n2;n2=1;}
717a717
> 




From gathmann@scar.utoronto.ca  Tue Feb  2 14:32:11 1999
From: gathmann@scar.utoronto.ca (Oliver Gathmann)
Date: Tue, 2 Feb 1999 09:32:11 -0500 (EST)
Subject: [Matrix-SIG] NumPy arrays with object typecode
Message-ID: <Pine.GSO.4.05.9902020903010.10070-100000@banks.scar>

Hello,

I am working on something like an S-Plus frame that would allow efficient
access to data of different types (integers, floats, and strings)
organized in columns of an ordinary 2d-array. My idea is to use NumPy
arrays of typecode 'O'; I know this sacrifices a lot of the performance of
genuine numerical arrays, but they still offer their nifty indexing and
slicing capabilities.

Strangely, NumPy (LLNL v.9) only seems to allow assigning to slices
along the second axis, not along the first:

Python 1.5.2b1 (#2, Jan  5 1999, 14:21:49)  [GCC 2.7.2.3] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> from Numeric import *
>>> objArray = zeros((3,2),'O')
>>> objArray
array([[0 , 0 ],
       [0 , 0 ],
       [0 , 0 ]],'O')
>>> intArray = reshape(arange(6),(3,2))
>>> intArray
array([[0, 1],
       [2, 3],
       [4, 5]])
>>> objArray[1,:] = intArray[1,:]
>>> objArray
array([[0 , 0 ],
       [2 , 3 ],
       [0 , 0 ]],'O')
>>> objArray[:,1] = intArray[:,1]
Segmentation fault (core dumped)

Explicitly converting the integer slice into an object slice doesn't help,
either. What's going on here?

I also find this counter-intuitive: 

>>> objArrayFromInits = array(['1','12','123'],'O')
>>> objArrayFromInits
array([[1 , 1 , 1 ],
       [12 , 12 , 12 ],
       [123 , 123 , 123 ]],'O')

Is there any efficient way to have the array constructor treat the
individual strings as objects and not as 1d arrays?

Thanks,

Oliver

F. Oliver Gathmann (gathmann@scar.utoronto.ca)
Surface and Groundwater Ecology Research Group      
University of Toronto
phone: (416) - 287 7420 ; fax: (416) - 287 7423     
web: http://www.scar.utoronto.ca/~gathmann



From hinsen@cnrs-orleans.fr  Tue Feb  2 17:41:06 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Tue, 2 Feb 1999 18:41:06 +0100
Subject: [Matrix-SIG] NumPy arrays with object typecode
In-Reply-To: <Pine.GSO.4.05.9902020903010.10070-100000@banks.scar> (message
 from Oliver Gathmann on Tue, 2 Feb 1999 09:32:11 -0500 (EST))
References: <Pine.GSO.4.05.9902020903010.10070-100000@banks.scar>
Message-ID: <199902021741.SAA13168@dirac.cnrs-orleans.fr>

> Strangely, NumPy (LLNL v.9) only seems to allow assigning to slices
> along the second axis, not along the first:

That looks like a bug...

> I also find this counter-intuitive: 
> 
> >>> objArrayFromInits = array(['1','12','123'],'O')
> >>> objArrayFromInits
> array([[1 , 1 , 1 ],
>        [12 , 12 , 12 ],
>        [123 , 123 , 123 ]],'O')
> 
> Is there any efficient way to have the array constructor treat the
> individual strings as objects and not as 1d arrays?

Not in the current NumPy. It generates one dimension for each nesting
level of sequence types, and strings are of course sequence types.
The only way out is to create the array with the right shape
but some other content (e.g. None) and then assign the string values
to the elements individually.
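
A minimal sketch of that workaround (any placeholder content works; zeros()
is used here simply because it accepts the 'O' typecode directly):

    from Numeric import zeros
    strings = ['1', '12', '123']
    objArrayFromInits = zeros((len(strings),), 'O')
    for i in range(len(strings)):
        objArrayFromInits[i] = strings[i]   # assigned as whole strings, not sequences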

Of course the array constructor could be modified, but
1) that creates compatibility problems and
2) there must be a way to construct character arrays from strings.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From barrett@compass.gsfc.nasa.gov  Tue Feb  2 19:29:36 1999
From: barrett@compass.gsfc.nasa.gov (Paul Barrett)
Date: Tue,  2 Feb 1999 14:29:36 -0500 (EST)
Subject: [Matrix-SIG] NumPy arrays with object typecode
In-Reply-To: <Pine.GSO.4.05.9902020903010.10070-100000@banks.scar>
References: <Pine.GSO.4.05.9902020903010.10070-100000@banks.scar>
Message-ID: <14007.17399.515521.723117@compass.gsfc.nasa.gov>

Oliver Gathmann writes:
 > Hello,
 > 
 > I am working on something like an S-Plus frame that would allow efficient
 > access to data of different types (integers, floats, and strings)
 > organized in columns of an ordinary 2d-array. My idea is to use NumPy
 > arrays of typecode 'O'; I know this sacrifices a lot of the performance of
 > genuine numerical arrays, but they still offer their nifty indexing and
 > slicing capabilities.
 > 

I'm currently working on a C extension class to do just this.  I hope
to have a usable, though by no means complete, version by the end of
the week.  Given an array shape and format string (similar to that
used by pack and unpack), a recarray object is created which allows
access to the data by array syntax.  This module is really just a
generalization of the struct module, but allows much more efficient
access to large arrays of binary data.
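
The format-string idea can be illustrated with the standard struct module
itself; this is only an analogy for the kind of record description meant
here, not the recarray interface, which isn't released yet:

    import struct

    # one record: a native int, a double, and an 8-byte name field
    fmt = 'id8s'
    record = struct.pack(fmt, 42, 3.14, b'NGC 1234')
    print(struct.unpack(fmt, record))   # -> (42, 3.14, 'NGC 1234')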

A paper by Barrett and Bridgman (see
http://www.dpt.stsci.edu/dpt_papers/opus_bib.html) presented last
November at the Astronomical Data Analysis Software and Systems
Conference gives a synopsis of this extension module as it relates to
PyFITS, a Python module for reading and writing FITS format files.

 -- Paul


From Oliphant.Travis@mayo.edu  Tue Feb  2 23:14:24 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Tue, 2 Feb 1999 17:14:24 -0600 (EST)
Subject: [Matrix-SIG] N-D Convolution routine
Message-ID: <Pine.LNX.4.04.9902021711140.19941-100000@us2.mayo.edu>

One of the things that I think NumPy could really benefit from is a good
set of Signal/Image/Array Processing routines.

One fundamental part of that area is a general N-D convolution routine. 
I have a first version of an N-D convolution routine that I am proposing
for inclusion in the multiarraymodule.c source code.

I have tested it and it seems to work for all supported types and is
usable now.  At some time I would like it to be polished off with some 
appropriate broadcasting rules, so that one could, for
example, do 2-D convolution on each plane of a 3-D volume with one
statement.

If anyone is interested in this routine, which I have called
convolveND, send me email or post to this list.

What would be the method of getting it included in the multiarraymodule
proper?


Thanks,

Travis
Oliphant.Travis@mayo.edu


----------------------------------------------------
Travis Oliphant            200 First St SW          
    	                   Rochester MN 55905       
Ultrasound Research Lab	   (507) 286-5293           
Mayo Graduate School	   Oliphant.Travis@mayo.edu 



From Oliphant.Travis@mayo.edu  Tue Feb  2 23:19:19 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Tue, 2 Feb 1999 17:19:19 -0600 (EST)
Subject: [Matrix-SIG] Another buglet in multiarraymodule?
Message-ID: <Pine.LNX.4.04.9902021717020.19948-200000@us2.mayo.edu>


While perusing the source to see how to write the N-D convolution routine
I noticed what appears to be a mostly harmless typo in the
OBJECT_DotProduct routine that should perhaps be fixed anyway.

The diff file attached shows the change.


----------------------------------------------------
Travis Oliphant            200 First St SW          
    	                   Rochester MN 55905       
Ultrasound Research Lab	   (507) 286-5293           
Mayo Graduate School	   Oliphant.Travis@mayo.edu 

[Attachment: multidiffs -- diff file showing buglet and fix]


From Oliphant.Travis@mayo.edu  Tue Feb  2 23:24:23 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Tue, 2 Feb 1999 17:24:23 -0600 (EST)
Subject: [Matrix-SIG] Sorry about last attachment
Message-ID: <Pine.LNX.4.04.9902021722550.19978-100000@us2.mayo.edu>

Here is the attachment:

*** multiarraymodule.c  Tue Feb  2 16:42:28 1999
--- multiarraymodule.c.bak      Tue Feb  2 17:16:01 1999
***************
*** 566,572 ****
                                                          char *op, int n) {
        int i;
        PyObject *tmp1, *tmp2, *tmp;
!       for(i=0;i<n;i++,ip1+=is1,ip2+=is2) { 
                tmp1 = PyNumber_Multiply(*((PyObject **)ip1), *((PyObject **)ip2));
                if (i == 0) {
                        tmp = tmp1;
--- 565,571 ----
                                                          char *op, int n) {
        int i;
        PyObject *tmp1, *tmp2, *tmp;
!       for(i=0;i<n;i++,ip1+=is2,ip2+=is1) { 
                tmp1 = PyNumber_Multiply(*((PyObject **)ip1), *((PyObject **)ip2));
                if (i == 0) {
                        tmp = tmp1;
--- 1136,1141 ----

--Travis




From Oliphant.Travis@mayo.edu  Thu Feb  4 01:25:52 1999
From: Oliphant.Travis@mayo.edu (Travis E. Oliphant)
Date: Wed, 03 Feb 1999 19:25:52 -0600
Subject: [Matrix-SIG] New and improved N-D convolution in the "up and coming" sigtools toolbox.
Message-ID: <36B8F720.71255795@mayo.edu>

I'm making available a new and improved version of N-D convolution.  It
is now much faster (about 5 times faster on large datasets) and has all
of the mode options of 1-D convolve (0, 1, or 2, corresponding to 'valid',
'same', and 'full').
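
For anyone unfamiliar with the mode codes, the 1-D convolve in Numeric uses
the same convention, so a quick sketch of what they mean (output lengths
shown for a 4-point sequence against a 3-point kernel):

    from Numeric import convolve
    a = [1, 2, 3, 4]
    b = [1, 1, 1]
    convolve(a, b, 0)   # 'valid': full-overlap positions only -> 2 points
    convolve(a, b, 1)   # 'same':  output as long as the input -> 4 points
    convolve(a, b, 2)   # 'full':  every point of any overlap  -> 6 points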

The routine is defined as the only method in a new module that I would
like to see called sigtools.  I hope this will someday contain all (or
most of) the filter routines some of us commonly use in MATLAB, IDL, etc.

The files are available at
http://oliphant.netpedia.net/

Any contributions to this toolbox would be greatly appreciated.  Anyone
hiding any great signal, image, or array processing routines out there
that could use a nice home?

--Travis


From Oliphant.Travis@mayo.edu  Sun Feb  7 17:08:12 1999
From: Oliphant.Travis@mayo.edu (Travis E. Oliphant)
Date: Sun, 07 Feb 1999 11:08:12 -0600
Subject: [Matrix-SIG] N-D convolution and N-D order filtering with Numeric python
Message-ID: <36BDC87C.35770B37@mayo.edu>

This is to let interested people who use the Numeric Extension to
Python know about an update to my sigtools module, now at version 0.20.

In addition to the N-D convolution routine, the module now contains a
routine to perform N-D arbitrary order-statistic filtering.  A median
filter is an example of an order-statistic filter and can now be
implemented in a line of Python code, provided the data to be filtered is
in a Python sequence object.
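
Not the sigtools routine itself (its exact interface isn't spelled out
here), but a plain-Python sketch of what a 1-D order-statistic filter does;
a median filter is the case where the middle rank is chosen:

    def order_filter_1d(seq, width, rank):
        half = int(width / 2)
        out = []
        for i in range(half, len(seq) - half):
            window = list(seq[i - half:i + half + 1])
            window.sort()               # order statistic = sorted position
            out.append(window[rank])
        return out

    def median_filter_1d(seq, width):
        return order_filter_1d(seq, width, int(width / 2))

    print(median_filter_1d([1, 9, 2, 8, 3, 7, 4], 3))   # [2, 8, 3, 7, 4]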

It is available at 
http://oliphant.netpedia.net

--Travis


From jh@oobleck.tn.cornell.edu  Mon Feb  8 21:55:50 1999
From: jh@oobleck.tn.cornell.edu (Joe Harrington)
Date: Mon, 8 Feb 1999 16:55:50 -0500
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
References: <Pine.LNX.4.04.9902081118250.24-100000@us2.mayo.edu>
Message-ID: <199902082155.QAA08575@oobleck.tn.cornell.edu>

Thanks to Travis Oliphant for urging me on.  I've been dealing with
yet another (and another after that!) IDL core dump that had stopped
my work.  Reminds me why I'm doing this!  This is in response to the
posts of 19Jan99...

I appreciate the sentiments of Perry Greenfield and Paul DuBois, but I
think they misinterpret what I am saying, or are missing a crucial
point.  Of course we should write our science analysis code in the
interactive languages; that's why we're here in the first place!
Nobody is suggesting we go back to C for that.  That code is usually
either fire-and-forget, or is only run in a few places, or both.  It's
only when transportability is the key issue that the logic swings the
other way, and I'd cite IDL in astronomy as a prime piece of evidence
in favor of not repeating that mistake yet again.  IDL is a terrible
language.  The syntax is awful.  It costs a thousand dollars for their
cheapest license (try teaching a class on that!).  The bug-fix time is
months, and all the other arguments against non-free software apply:
too-slow response from the developers (see above); you can't fix it
yourself if it's broken; if you implement in it, only people with a
license to their product can run your code; etc.

My goal here is to get a core of standard routines that are used in
many fields of science into a coherent package with good docs, and to
integrate it initially with Python.  The point is that then *any*
language, now or in the future, can be a "real" science language, just
by interfacing to the package.  At ADASS IV, we heard from a Sun
presenter about how Java would soon rule the world and solve all our
problems.  When I asked about support for numerics, he said to an
audience of several hundred astronomy programmers, "uhhh..use
Fortran".  With this package and a little work, they could easily
extend their language to cover numerics, and do it well.  So could
Guile, Perl, and a host of others.  Without such a package, language
implementors look at the job of doing it themselves, and give up.

Fortunately, it's not a matter (as Perry and Paul say or imply) of
"expecting developers to confine themselves", it's a matter of finding
developers who are interested in this approach.  We don't have to
worry about a "middle layer" of interpreted code from the developers.
That's a winning strategy for science analysis projects, but a losing
one for anything that is to survive long-term (meaning decades).  I'm
in full agreement with points a and d of Paul DuBois's posting, and
the whole reason I'm trying to start this effort here is that I think
it will strengthen NumPy to the point where it can become a usable
language for the general scientist, and that it's the best available
in terms of language capability.  Marble is better than salt for
building monuments, though salt is easier to work and tastes better.

Such a project is significant.  However, gathering and integrating is
not more than twice as large a project as writing everything from
scratch in the current interactive language.  And it only needs to be
done once, ever.  I'm committed to creating a coherent package of
widely-used numerical methods implemented as compiled-language
routines and interfaced eventually to multiple languages.  If another
path is chosen, someone else will have to take on the task of
coordinating it.  Assuming that enough others are interested in this
approach, here's what I see as the next steps:

	-Determine how and where to organize ourselves, and set those
         resources up (mailing lists, web sites, etc.).

	-Figure out how best to lay out the umbrella package, and what
         a "leaf" package looks like.  The key item here is to make it
         flexible enough that when we have future interactive
         environments, we can just modify the install programs and
         have them create the environment around that new interpreter.

	-Identify the components we need in the first release
         (numerics: at least FFT, interpolation, simple fitting, a
          Romberg integrator; graphics: at least plotting, image
         display, color table manipulation, cursor readback, and some
         form of widget).

	-Distribute the work of implementing the previous two items
         and of integration and testing.

Looking at the first item, I'd be interested in a very quick list of
software efforts we think have been successful that have been
organized by volunteers working a few hours a week.  Gimp springs to
mind, and perhaps some of the window managers like Afterstep.  Has
anyone been involved in such an effort?  Did they do anything unique
that helped them out?

For the second item, I see the following needs:

documentation with images, formulae, and hyperlinks
	It seems reasonable to make whatever we use generate HTML,
	since all platforms support it.  Do people think TeXinfo will
	stand the test of time?  Its ability to handle formulae is
	attractive.

Windows and Mac support
	Who has any experience building software here?  Can we use
	make/autoconf with these systems and do they work well?  How
	far away from g++ is C++ on the Mac and PC's various
	compilers?  Is g++ a good thing to standardize upon?  Do we
	need to pick a compiler and implement for it, or has C++
	become standard enough not to worry?

"leaf" packages
	Each should have its own docs, code, examples, test scripts,
	and wrapper generation files.  Code and docs get built and
	installed into a location that might be shared with code,
	docs, and examples from other packages, for assembly into an
	intermediate collection of related packages (a "branch"?).
	This lets us distribute precompiled bite-sized package updates
	without recompiling or reinstalling the whole thing.
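
One possible on-disk layout for such a "leaf" package, offered only as a
sketch to make the idea concrete (all names invented):

    fftwrap/                 one "leaf" package
        README               summary and pointers into the docs
        doc/                 docs built to HTML (plus their sources)
        src/                 the compiled-language routines
        wrap/                wrapper-generation files (e.g. SWIG interfaces)
        examples/            small worked examples
        test/                test scripts run after installation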

There's lots more under this item, of course, but this should be
enough to get a discussion started.  

For the third item, what do people see as *crucial* for a first
release?  In my view, the first release should define the structure we
will be implementing into and provide several useful packages that can
get people started with some basic tasks.  It should have good docs,
and generally carry the aura of something professional, not slapdash.
I'd rather it be small and elegant than large and sloppy, as it's
easier to grow larger than to fix a large amount of sloppiness, given
the tendency of net programmers to jump on the successful bandwagons.

--jh--
Joe Harrington
326 Space Sciences Building
Cornell University
Ithaca, NY 14853-6801
(607) 255-5913 office
(607) 255-9002 fax
jh@oobleck.tn.cornell.edu
jh@alum.mit.edu (permanent)


From Oliphant.Travis@mayo.edu  Mon Feb  8 22:34:10 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Mon, 8 Feb 1999 16:34:10 -0600 (EST)
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902082155.QAA08575@oobleck.tn.cornell.edu>
Message-ID: <Pine.LNX.4.04.9902081614320.15625-100000@us2.mayo.edu>

I am now beginning to see Joe's point here better after having some
experience building different types of modules.

We definitely need to talk about what kind of things we expect from the
language.  For example, MATLAB defines only one datatype but NumPy defines
about 10.  For low-level operations coded in C this makes a
difference.

It seems that we will need to develop separate language-independent and
language-dependent code.  I think it is a good idea to establish the
expectation that code contributed to the project keeps its
interpreted-language-specific and language-independent parts separated.
I'm beginning to see my own released code in this light.
Most of what I've done is interpreted-language specific wrapping around
already non-specific code (cephes, fftw).  But it is often desirable to
use language-specific features in the interface (like treating cephes
functions as ufunc objects).  I'm not sure that this should be or even can
be generalized enough to eliminate the use (if not the need) for specific
code that really doesn't contribute to the goal Joe has in mind.  

If this all sounds confusing, it is because I am a little confused as to
how to actually make something like Joe suggests work.  Are we talking
about describing a generic interface to specialized code that will adapt
itself to every future language?  Can we guarantee that?  Some languages
will work with it and some won't, right?

I guess what I now understand to be doable is writing packages
with general usability in mind.  For example, I'm going to
try to change my N-D convolution and order-filtering routines so that the 
core routine assumes little about how the needed data is stored as an
object in the interpreted language.  This is not necessarily going to be
easy (for me) and may not even be possible without sacrificing speed or
usability in Python.  This is the sort of difficulty I'm talking about in
making a generic package that interfaces (well?) with any future scripting
language.

MATLAB defines its extensibility feature in terms of a GATEWAY procedure
plus the computational code.  The GATEWAY handles all the MATLAB-specific
stuff, and the procedure itself is portable.  I could see doing something
like that for this package.  We would need a procedure for every supported
datatype that way, but it could be done.   This may not be possible with
every desired addition, however (I'm not sure how this idea would change
the cephesmodule I've released --- I guess all I did was write the GATEWAY
for that one).
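
In Python terms, that split looks roughly like the sketch below: a core
routine that knows nothing about NumPy, and a thin NumPy-specific wrapper
playing the role of the gateway.  All names here are made up purely for
illustration.

    def core_moving_average(values, width):
        # generic part: plain sequences in, plain list out
        out = []
        for i in range(len(values) - width + 1):
            s = 0.0
            for v in values[i:i + width]:
                s = s + v
            out.append(s / width)
        return out

    def numeric_moving_average(arr, width):
        # "gateway" part: only the NumPy-specific packing and unpacking
        from Numeric import array
        return array(core_moving_average(arr.tolist(), width))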

I'd love to hear thoughts and ideas from people who have written
code and linked it with interpreted languages.  To move this forward we
definitely need some experienced people to help define generic
expectations from the interpreted language (with Python as a
model, of course :-)  ), publish them somewhere, and then start
writing code and documentation.

Best,

Travis Oliphant




From wtbridgman@radix.net  Tue Feb  9 04:31:51 1999
From: wtbridgman@radix.net (W.T. Bridgman)
Date: Mon, 8 Feb 1999 23:31:51 -0500
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis
 environment
In-Reply-To: <199902082155.QAA08575@oobleck.tn.cornell.edu>
References: <Pine.LNX.4.04.9902081118250.24-100000@us2.mayo.edu>
Message-ID: <l03130301b2e56690220b@[209.48.225.6]>

Hello,
>
>	-Determine how and where to organize ourselves, and set those
>         resources up (mailing lists, web sites, etc.).
>
I just wanted to mention that I've been working with Paul Barrett on his
'PyFITS' stuff.  Mostly I've been making sure that his binary table module
will compile and run on a Mac and doing some testing with FITS data.

He and I are in the process of bringing an Astronomical Python mailing
list, WWW and FTP site online at Goddard.  We've got some of the
preliminaries out of the way and hope to send out an announcement at the
end of this week or early next.

As to astronomical analysis with Python, I'm very concerned about the
plotting package issue.  I've played a lot with Konrad Hinsen's
TkPlotCanvas for doing simple plots.  I'm now trying to come up to speed on
Tk to really learn how to do a more flexible plotting package.  However, I
feel TkPlotCanvas is a good v0.1 plotting package for my current use.

I'm using my personal e-mail account since I do most of my Python stuff at
home (how much does an IDL license cost for a home system? An additional
$500).  However, I can also be reached at my lab account:
bridgman@lheapop.gsfc.nasa.gov

We'll let you know when something comes online.

Tom




From pfrazao@ualg.pt  Tue Feb  9 15:21:26 1999
From: pfrazao@ualg.pt (Pedro Miguel =?iso-8859-1?Q?Fraz=E3o?= F. Ferreira)
Date: Tue, 09 Feb 1999 15:21:26 +0000
Subject: [Matrix-SIG] Neural Networks
Message-ID: <36C05276.6299952@ualg.pt>

	Hi All,

	Just subscribed, and just started looking at Python. I am interested in
code for neural networks (especially RBF, GRBF). Is there any
implementation?

	Thanks
-- 
------------------------------------------------------------------------
    Pedro Miguel Frazao Fernandes Ferreira, Universidade do Algarve
          U.C.E.H., Campus de Gambelas, 8000 - Faro, Portugal
pfrazao@ualg.pt     Tel.:+351 89 800950 / 872959     Fax: +351 89 818560
                     http://w3.ualg.pt/~pfrazao


From hinsen@cnrs-orleans.fr  Tue Feb  9 17:46:48 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Tue, 9 Feb 1999 18:46:48 +0100
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902082155.QAA08575@oobleck.tn.cornell.edu> (message from Joe
 Harrington on Mon, 8 Feb 1999 16:55:50 -0500)
References: <Pine.LNX.4.04.9902081118250.24-100000@us2.mayo.edu> <199902082155.QAA08575@oobleck.tn.cornell.edu>
Message-ID: <199902091746.SAA02706@dirac.cnrs-orleans.fr>

> My goal here is to get a core of standard routines that are used in
> many fields of science into a coherent package with good docs, and to
> integrate it initially with Python.  The point is that then *any*

Sorry, but I still don't see what you are planning to do...
My impression is that generic low-level libraries already exist.
How exactly does your idea differ from combining LAPACK, FFTPACK, etc.
into one tar file and writing wrappers and documentation for all of it?

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From ryszard@moldyn.com  Tue Feb  9 18:47:11 1999
From: ryszard@moldyn.com (Ryszard Czerminski)
Date: Tue, 9 Feb 1999 13:47:11 -0500 (EST)
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902091746.SAA02706@dirac.cnrs-orleans.fr>
Message-ID: <Pine.LNX.3.95.990209132459.831D-100000@mpc3>

It would help a lot to start developing a document
which describes what we want from this new package
(i.e. requirements).

This can be a good basis for further discussion.

It should probably also include, at some general
level, an analysis and description of existing packages
in this field (MATLAB, IDL, ROOT (http://root.cern.ch),
Scilab - to name a few) and what makes them inadequate
enough to justify the development of
yet another data-analysis package.

One basic reason - at least for this group - is the lack of
a Python front-end (:-).

Ryszard

Ryszard Czerminski         phone : (617)354-3124 x 10
Moldyn, Inc.               fax   : (617)491-4522
955 Massachusetts Avenue   e-mail: ryszard@moldyn.com
Cambridge MA, 02139-3180   or      ryszard@photon.com

On Tue, 9 Feb 1999, Konrad Hinsen wrote:

> > My goal here is to get a core of standard routines that are used in
> > many fields of science into a coherent package with good docs, and to
> > integrate it initially with Python.  The point is that then *any*
> 
> Sorry, but I still don't see what you are planning to do...
> My impression is that generic low-level libraries already exist.
> How exactly does your idea differ from combining LAPACK, FFTPACK, etc.
> into one tar file and writing wrappers and documentation for all of it?
> 
> Konrad.
> -- 
> -------------------------------------------------------------------------------
> Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
> Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
> Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
> 45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
> France                                   | Nederlands/Francais
> -------------------------------------------------------------------------------
> 
> _______________________________________________
> Matrix-SIG maillist  -  Matrix-SIG@python.org
> http://www.python.org/mailman/listinfo/matrix-sig
> 



From david.buscher@durham.ac.uk  Tue Feb  9 19:40:57 1999
From: david.buscher@durham.ac.uk (david.buscher@durham.ac.uk)
Date: Tue, 9 Feb 1999 19:40:57 +0000 (GMT)
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <Pine.LNX.3.95.990209132459.831D-100000@mpc3>
Message-ID: <Pine.OSF.4.05.9902091930310.11027-100000@dugem1.dur.ac.uk>

On Tue, 9 Feb 1999, Ryszard Czerminski wrote:

> It would help a lot to start developing a document
> which describes what we want from this new package
> (i.e. requirements).
> 
> This can be a good basis for further discussion.
> 
> It should probably also include, at some general
> level, an analysis and description of existing packages
> in this field (MATLAB, IDL, ROOT (http://root.cern.ch),
> Scilab - to name a few) and what makes them inadequate
> enough to justify the development of
> yet another data-analysis package.
> 
> One basic reason - at least for this group - is the lack of
> a Python front-end (:-).

To some extent such documents exist and have been referred to in previous
posts (can't immediately lay my hands on the URLs). What these documents
do not resolve, though, is the scope of the current project being proposed,
and to some extent we need a way of gauging this major question before
proceeding.

The options I see are:

[1] A python-specific IDL-alike, well documented and integrated, having
the major advantages that have been described (basically all the
specific advantages of python and general interpreted language and open
source advantages).

[2] A non-language specific system which can be rapidly ported to the next
flavour-of-the-month language (not that I think python is
flavour-of-the-month!).

My feeling is that [2] is a much harder project, and we might only
have a feeling of how to do it by the end of doing [1]. Obviously it
would be a great help if while doing [1] we had [2] in mind. What do
others think?

cheers,
David


----------------------------------------+-------------------------------------
 David Buscher                          |  Phone  +44 191 374 7462
 Dept of Physics, University of Durham, |  Fax    +44 191 374 3709
 South Road, Durham DH1 3LE, UK         |  Email  david.buscher@durham.ac.uk
----------------------------------------+-------------------------------------



From johann@physics.berkeley.edu  Tue Feb  9 20:09:00 1999
From: johann@physics.berkeley.edu (Johann Hibschman)
Date: 09 Feb 1999 12:09:00 -0800
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: david.buscher@durham.ac.uk's message of "Tue, 9 Feb 1999 19:40:57 +0000 (GMT)"
References: <Pine.OSF.4.05.9902091930310.11027-100000@dugem1.dur.ac.uk>
Message-ID: <mthfsvcpb7.fsf@astron.berkeley.edu>

>>>>> "david" == david buscher <david.buscher@durham.ac.uk> writes:

    david> [1] A python-specific IDL-alike, well documented and
    david> integrated, having the major advantages that have been
    david> described (basically all the specific advantages of python
    david> and general interpreted language and open source
    david> advantages).

    david> [2] A non-language specific system which can be rapidly
    david> ported to the next flavour-of-the-month language (not that
    david> I think python is flavour-of-the-month!).

    david> My feeling is that [2] is a much harder project, and we
    david> might only have a feeling of how to do it by the end of
    david> doing [1]. Obviously it would be a great help if while
    david> doing [1] we had [2] in mind. What do others think?

I agree that [2] is much harder.  I think that if we are to get
anything done, we should just do [1] with [2] in mind, otherwise we'll
have a spurt of theory posts which eventually die down without having
accomplished anything.

[2] seems hard.  For a lot of interesting things (numeric integration,
ODEs, PDEs), we'd need a decent (and fast) way to pass functions
around, whether those functions are native-code, python, or
interpolations on discrete data points.  I've yet to see one which
works well.
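
As one small illustration of the function-passing pattern in question: a
toy trapezoid integrator only needs something callable, so a Python
function, a compiled extension function, or an interpolation object could
all be handed to it interchangeably (a sketch, not a proposal for an
actual API):

    import math

    def trapezoid(f, a, b, n=100):
        h = (b - a) / float(n)
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total = total + f(a + i * h)
        return total * h

    print(trapezoid(math.sin, 0.0, math.pi))   # ~2.0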

FFTPACK and friends certainly exist, but using those on many machines
requires running them through f2c, and that adds complexity (and extra
required libraries).  Ideally, in today's world (assuming we're
targeting typical workstation users), we'd just use straight C.
(C++ still seems dangerous, but that may just be the result of some
bad experiences on my part...)

Well, I'm rambling.  I'd like to see this work, even if all it does is
assemble a recommended set of netlib routines with extra
documentation.  As a numeric novice, I'm often swamped by the sheer
volume of choice and lack of information on how to choose when looking
at netlib.

-- 
Johann Hibschman                           johann@physics.berkeley.edu


From janne@avocado.pc.helsinki.fi  Tue Feb  9 21:59:18 1999
From: janne@avocado.pc.helsinki.fi (Janne Sinkkonen)
Date: 09 Feb 1999 23:59:18 +0200
Subject: [Matrix-SIG] Neural Networks
In-Reply-To: "Pedro Miguel Frazão F. Ferreira"'s message of "Tue, 09 Feb 1999 15:21:26 +0000"
References: <36C05276.6299952@ualg.pt>
Message-ID: <oa7ltr1bnt.fsf@avocado.pc.helsinki.fi>

"Pedro Miguel Frazão F. Ferreira" <pfrazao@ualg.pt> writes:

> 	Just subscribed. Just started looking to python. I am interested in
> code for Neural Networks (specialy RBF, GRBF). Is there any
> implementation ?

I have one in production use (i.e. tested). It has k-means, adaptive choice
of center widths, and regularization built in, and it includes the
beginnings of a regression model class hierarchy, but no
documentation. Send e-mail if you're interested.

-- 
Janne


From ffjhl@uaf.edu  Tue Feb  9 22:05:10 1999
From: ffjhl@uaf.edu (Jonah Lee)
Date: Tue, 9 Feb 1999 13:05:10 -0900 (AKST)
Subject: [Matrix-SIG] Numpy: memory leak in tolist() fixed
Message-ID: <Pine.OSF.3.95.990209125943.16458D-100000@mealpha.engr.uaf.edu>

Hi,

I finally got some time to fix this known memory leak in arrayobject.c.
The fix is based upon a suggestion from David Ascher. The diff is:


1214,1219c1214
<                 PyObject *v=array_item((PyArrayObject *)self, i);
<                 PyList_SetItem(lp, i, PyArray_ToList(v));
<                 if (((PyArrayObject *)self)->nd>1){
<                         Py_DECREF(v);
<                 }
< 
---
> 		PyList_SetItem(lp, i, PyArray_ToList(array_item((PyArrayObject *)self, i)));


Regards,
Jonah




From Oliphant.Travis@mayo.edu  Tue Feb  9 23:05:31 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Tue, 9 Feb 1999 17:05:31 -0600 (EST)
Subject: [Matrix-SIG] Interactive Data Analysis
Message-ID: <Pine.LNX.4.04.9902091643210.17874-100000@us2.mayo.edu>

To clarify some of my previous ramblings.

It is true that there are far too many integrated data analysis packages
all linking basically the same code (especially the free ones) for the
core engines.  Just look at this page...

http://stommel.tamu.edu/~baum/linuxlist/linuxlist/node33.html#numericalenvironments

All of these packages are useful to some people but I was not satisfied
with either the underlying data object or the extensibility of any
of them.

It would be great to have a generic package that includes all of the
"important" free numerical routines that could be easily plugged-in to any
suitable extensible scripting language.   That could definitely be a long
term vision.

But, I agree with David and Johann that this is a hard problem, especially
without a model to work from. I also tend to agree that a good way to work
towards a generic package is to implement (translate: collect, package,
and document the interface) a python-specific collection of good routines
while keeping an eye towards reuse in other environments.  I think this
means either using SWIG as much as possible (without sacrificing real
usability with the NumPy defined objects), or making sure that any code
written is cleanly separated into Python-specific and generic subroutines.

Concentrating on getting lots of useful routines cleanly working with
NumPy early will attract more users of NumPy, which will improve the
quality of any generic design that we might come up with later.   Right
now there are not enough "toolkit" routines to persuade many to use NumPy.
That is unfortunate since it is really quite a simple thing to add these
facilities to the Python-NumPy environment --- it just takes a little
personpower.

Thanks,

Travis



From jh@oobleck.tn.cornell.edu  Wed Feb 10 01:34:09 1999
From: jh@oobleck.tn.cornell.edu (Joe Harrington)
Date: Tue, 9 Feb 1999 20:34:09 -0500
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902091746.SAA02706@dirac.cnrs-orleans.fr> (message from
 Konrad Hinsen on Tue, 9 Feb 1999 18:46:48 +0100)
References: <Pine.LNX.4.04.9902081118250.24-100000@us2.mayo.edu> <199902082155.QAA08575@oobleck.tn.cornell.edu> <199902091746.SAA02706@dirac.cnrs-orleans.fr>
Message-ID: <199902100134.UAA16646@oobleck.tn.cornell.edu>

> Sorry, but I still don't see what you are planning to do...
> My impression is that generic low-level libraries already exist.
> How exactly does your idea differ from combining LAPACK, FFTPACK, etc.
> into one tar file and writing wrappers and documentation for all of it?

That is precisely what I want to do.  The shocking thing is that
nobody has done this yet, at least not in a usable way.  Doing it well
is not trivial.

The difference from what has already been done is to do it coherently,
package it well, and provide a place for users and developers to get
it, contribute to it, discuss it, etc., following the model of other
successful free software efforts.  So far, scientific computing has
not taken part in the explosion of capability (and drop in price) that
other kinds of computing have as a result of the open-source software
development model.  The rank and file scientists in most disciplines
use commercial software or hire programmers to write in C.  There are
several decent low-level libraries (and many indecent ones), but there
is no integration.

This project will play the same role as Red Hat or Debian do for
Linux.  Linux didn't take off until there were coherent distributions
that were easy to install and easy for hackers to add to.  Prior to
then, you had to go out and find and compile everything in /usr/bin.
Few did.  Now anyone can go grab a CD and install Linux in 15 minutes.
When you're done, there's a coherent system whose components interact
well.  Now, even if you went through the huge effort of collecting
lots of numerical stuff, getting it to compile on your platform, and
wrapping it, you'd have huge gaps where you couldn't find a package,
the packages wouldn't interact cleanly, and you'd be faced with a
million pages of really awful docs, with no examples of how to use the
stuff together.  You could spend several years full-time getting to
this point, and still not be able to work as easily as you can in a
primitive environment like IDL or even IRAF.

The way I see changing the situation is to decide first what we want
(rather than listing the packages that are available), then going out
and looking for all the free implementations of each component,
weighing each, and selecting packages based on how well they fit into
the overall scheme.  This would include their ease of use, quality of
implementation, level of support, documentation, use of data types
that most match our standard ones, etc.  We'd then pick one to use and
write what we needed to get it to interface well.  That might be a
SWIG wrapper or a piece of C/C++ code that translated the data to our
interchange formats.  We'd write tutorial docs that described using
the routines together, with examples.  Where needed, we'd write docs
that described packages whose documentation was deficient.  Finally,
we'd package the whole thing up so that it's easy for a novice to
install it.

Of course, for novices to install and begin using it, the main package
has to include at least one interactive environment.  I chose Python
because it's the clear winner among the few languages that handle
arrays.  Such a package would make Python usable for science by anyone
who can use IDL, AIPS, IRAF, etc., without spending months struggling
against the lack of good numerical routines and graphical display.
The assembled package would be free and it would quickly become better
than its competition.  People in various disciplines (including some
on this list, I'm sure) would soon contribute stuff to support work in
specific fields.

I hope this better explains what I'm getting at.  If you have more
questions and haven't done so already, please take the time to read
our article and visit the Interactive Data Analysis Environments web
site:

http://oobleck.tn.cornell.edu/jh/ast/papers/idae96.ps
http://lheawww.gsfc.nasa.gov/users/barrett/IDAE/table.1.html

Thanks,

--jh--
Joe Harrington
326 Space Sciences Building
Cornell University
Ithaca, NY 14853-6801
(607) 255-5913 office
(607) 255-9002 fax
jh@oobleck.tn.cornell.edu
jh@alum.mit.edu (permanent)


From jh@oobleck.tn.cornell.edu  Wed Feb 10 01:38:00 1999
From: jh@oobleck.tn.cornell.edu (Joe Harrington)
Date: Tue, 9 Feb 1999 20:38:00 -0500
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <Pine.LNX.3.95.990209132459.831D-100000@mpc3> (message from
 Ryszard Czerminski on Tue, 9 Feb 1999 13:47:11 -0500 (EST))
References: <Pine.LNX.3.95.990209132459.831D-100000@mpc3>
Message-ID: <199902100138.UAA16661@oobleck.tn.cornell.edu>

> It would help a lot to start to develop some document
> which would describe what do we want from this new package
> (i.e. requirements).

See the 2 URLs at the end of my previous message...

--jh--
Joe Harrington
326 Space Sciences Building
Cornell University
Ithaca, NY 14853-6801
(607) 255-5913 office
(607) 255-9002 fax
jh@oobleck.tn.cornell.edu
jh@alum.mit.edu (permanent)


From Oliphant.Travis@mayo.edu  Tue Feb  9 16:49:08 1999
From: Oliphant.Travis@mayo.edu (Travis E. Oliphant)
Date: Tue, 09 Feb 1999 10:49:08 -0600
Subject: [Matrix-SIG] A full blown interactive data analysis environment. (resent)
Message-ID: <36C06704.B50AD87F@mayo.edu>

> We'd then pick one to use and
> write what we needed to get it to interface well.  That might be a
> SWIG wrapper or a piece of C/C++ code that translated the data to our
> interchange formats. 

To try and be precise, so that we understand each other, explain what
more we would do than what I did in producing the cephesmodule and the
fftw module, besides putting them together into one package and writing
better documentation?

If that is the basic idea, just at a bigger scale and with more packages, then
let's go do it; I'm in.

Perhaps a good way to get this going is to take FFTW or cephes, which
I've already got into a usable form, and discuss what changes would make
them fit better into the desired scheme.  For that matter we could
start with LAPACK and see what could be done there.  It would help me
if someone could clarify what is wrong (if anything) with the way all of
these libraries are currently used with Python.  (Is it just that they
don't all come in one big tar file and use individual-style make files
rather than one big one?)

As for packages, we need at least these as far as I can see (let's get the
list rolling):

FFT  (my vote is for FFTW)
LinearAlgebra
Random Numbers
Ordinary Differential Equations (ODEPACK?)
Integration (QUADPACK?)
Optimization (I know there's something on netlib)
Root finding (Equation solving)
Filter package (remez exchange algorithm, low level linear iir or fir
filter function, etc.) (I've got something like this in development).

Graphics and plotting:  I fear this will be the problem for
cross-platform support.  How about we get going and talk about this one once
we have the numerics happening... maybe something will surface elsewhere in
the meantime?

I vote for help written in LaTeX and translated to TeXinfo and HTML.

But again, it is also critical that we have some sort of interactive
help system implemented for the package to be useful to the novice.

Looking for feedback and someone to blow the whistle :-)

--Travis

P.S.  We really need to set up a CVS server somewhere and get going on
the code development.  That way we could actually have code to comment
on to each other instead of just ideas.  I've never set up CVS before.
Could we get some space at the Starship and set it up there?  Alas, I
don't have a PSA membership at the moment...


From r.hooft@euromail.net  Wed Feb 10 06:41:36 1999
From: r.hooft@euromail.net (Rob Hooft)
Date: Wed, 10 Feb 1999 07:41:36 +0100 (MET)
Subject: [Matrix-SIG] A full-blown interactive data analysis environment
In-Reply-To: <36BE9063.48140255@mayo.edu>
References: <36BE9063.48140255@mayo.edu>
Message-ID: <14017.10784.433843.450025@octopus.chem.uu.nl>

>>>>> "TEO" == Travis E Oliphant <Oliphant.Travis@mayo.edu> writes:

 TEO> FFT (my vote is for FFTW)

If I remember correctly from some old discussion on linking tcl with
readline from a number of years ago: any software that is clearly
written to be linked to a GPL library is automatically GPL itself.

FFTW is GPL (not LGPL), which means that its inclusion in any kind of
package leaves me out (I'm also writing commercial software in
Python).


Rob

-- 
=====   R.Hooft@EuroMail.net   http://www.xs4all.nl/~hooft/rob/  =====
=====   R&D, Nonius BV, Delft  http://www.nonius.nl/             =====
===== PGPid 0xFA19277D ========================== Use Linux! =========



From ryszard@moldyn.com  Wed Feb 10 12:24:05 1999
From: ryszard@moldyn.com (Ryszard Czerminski)
Date: Wed, 10 Feb 1999 07:24:05 -0500 (EST)
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902100134.UAA16646@oobleck.tn.cornell.edu>
Message-ID: <Pine.LNX.3.95.990210070956.10293C-100000@mpc3>

On Tue, 9 Feb 1999, Joe Harrington wrote:

> I hope this better explains what I'm getting at.  If you have more
> questions and haven't done so already, please take the time to read
> our article and visit the Interactive Data Analysis Environments web
> site, if you haven't:
> 
> http://oobleck.tn.cornell.edu/jh/ast/papers/idae96.ps
> http://lheawww.gsfc.nasa.gov/users/barrett/IDAE/table.1.html

Yes, these two pages directly address my question - thanks.

In your comparison table you do not mention ROOT.
I learned about the existence of this package a few months ago
from an article in Linux Journal. I have downloaded it and
played with it somewhat, and I found it to be a very impressive
system. It is an open-source project and it might provide
a lot of the functionality we are looking for.
It uses a C++ interpreter as the command language.

I would be very much interested in opinions of others on this list.

Ryszard

From: http://root.cern.ch/root/Mission.html

Currently the emphasis of ROOT is on the data analysis domain but thanks
to the approach of loosely coupled object-oriented frameworks the system
can easily be extended to other domains.

We believe that ROOT is an ideal environment to introduce physicists
quickly to the new world of Objects and C++.

ROOT: Executive Summary

The ROOT system provides a set of OO frameworks with all the functionality
needed to handle and analyse large amounts of data in a very efficient
way. Having the data defined as a set of objects, specialised storage
methods are used to get direct access to the separate attributes of the
selected objects, without having to touch the bulk of the data. Included
are histograming methods in 1, 2 and 3 dimensions, curve fitting, function
evaluation, minimisation, graphics and visualization classes to allow the
easy setup of an analysis system that can query and process the data
interactively or in batch mode.

Thanks to the builtin CINT C++ interpreter the command language, the
scripting, or macro, language and the programming language are all C++.
The interpreter allows for fast prototyping of the macros since it removes
the time consuming compile/link cycle. It also provides a good environment
to learn C++. If more performance is needed the interactively developed
macros can be compiled using a C++ compiler.

The system has been designed in such a way that it can query its databases
in parallel on MPP machines or on clusters of workstations or high-end
PC's. ROOT is an open system that can be dynamically extended by linking
external libraries. This makes ROOT a premier platform on which to build
data acquisition, simulation and data analysis systems.

Ryszard Czerminski         phone : (617)354-3124 x 10
Moldyn, Inc.               fax   : (617)491-4522
955 Massachusetts Avenue   e-mail: ryszard@moldyn.com
Cambridge MA, 02139-3180   or      ryszard@photon.com



From hinsen@cnrs-orleans.fr  Wed Feb 10 17:19:59 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Wed, 10 Feb 1999 18:19:59 +0100
Subject: [Matrix-SIG] Re: a full-blown interactive data analysis environment
In-Reply-To: <199902100134.UAA16646@oobleck.tn.cornell.edu> (message from Joe
 Harrington on Tue, 9 Feb 1999 20:34:09 -0500)
References: <Pine.LNX.4.04.9902081118250.24-100000@us2.mayo.edu> <199902082155.QAA08575@oobleck.tn.cornell.edu> <199902091746.SAA02706@dirac.cnrs-orleans.fr> <199902100134.UAA16646@oobleck.tn.cornell.edu>
Message-ID: <199902101719.SAA02598@dirac.cnrs-orleans.fr>

> > How exactly does your idea differ from combining LAPACK, FFTPACK, etc.
> > into one tar file and writing wrappers and documentation for all of it?
> 
> That is precisely what I want to do.  The shocking thing is that
> nobody has done this yet, at least not in a usable way.  Doing it well
> is not trivial.

OK, now I see what you are aiming at - and I agree it's a worthwhile
project; the average potential user just can't do these things.

> successful free software efforts.  So far, scientific computing has
> not taken part in the explosion of capability (and drop in price) that
> other kinds of computing have as a result of the open-source software
> development model.  The rank and file scientists in most disciplines
> use commercial software or hire programmers to write in C.  There are

Partly this situation has been caused by the scientists themselves.
Much of the commercial software has been written by scientists for
their own work, but instead of following the original scientific
spirit of sharing the result of scientific work freely with other
scientists, they have chosen to go commercial. Tight budgets certainly
encourage this, as does the unfortunate fact that providing scientific
software is not considered "science".

> The way I see changing the situation is to decide first what we want
> (rather than listing the packages that are available), then going out
> and looking for all the free implementations of each component,
> weighing each, and selecting packages based on how well they fit into
> the overall scheme.  This would include their ease of use, quality of

Fine. But we'll have to create separate "task forces"; I suppose
none of us knows all relevant fields sufficiently well!
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From hinsen@cnrs-orleans.fr  Wed Feb 10 17:25:52 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Wed, 10 Feb 1999 18:25:52 +0100
Subject: [Matrix-SIG] A full-blown interactive data analysis environment
In-Reply-To: <14017.10784.433843.450025@octopus.chem.uu.nl>
 (r.hooft@euromail.net)
References: <36BE9063.48140255@mayo.edu> <14017.10784.433843.450025@octopus.chem.uu.nl>
Message-ID: <199902101725.SAA21444@dirac.cnrs-orleans.fr>

> FFTW is GPL (not LGPL), which means that inclusion in any kind of
> package leaves me out (I'm also writing commercial software in
> python).

I think licensing could become an important problem in this project.
Many scientific packages come with licences that do not allow
commercial use (at least not for free), and in some cases the
conditions are even unclear. We might have to use a two-level system,
one for generally free code and one for non-commercial use only (which
would still be useful to many of us).
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From david.buscher@durham.ac.uk  Wed Feb 10 18:02:00 1999
From: david.buscher@durham.ac.uk (david.buscher@durham.ac.uk)
Date: Wed, 10 Feb 1999 18:02:00 +0000 (GMT)
Subject: [Matrix-SIG] FFTW
In-Reply-To: <14017.10784.433843.450025@octopus.chem.uu.nl>
Message-ID: <Pine.OSF.4.05.9902101753420.11930-100000@dugem1.dur.ac.uk>

On Wed, 10 Feb 1999, Rob Hooft wrote:

> FFTW is GPL (not LGPL), which means that inclusion in any kind of
> package leaves me out (I'm also writing commercial software in
> python).

I know this is a side issue, but has anyone asked the authors of FFTW
about whether they would consider LGPL'ing it? I have been `in the market'
for a fast, flexible, open-source C implementation of multi-dimensional
FFT's for many years, and FFTW has been the first package that has come
along that has hit all of the requirements dead-on. The authors state
their intention of it becoming a de-facto standard, so maybe the change of
licence would be something they would be amenable to...

David

----------------------------------------+-------------------------------------
 David Buscher                          |  Phone  +44 191 374 7462
 Dept of Physics, University of Durham, |  Fax    +44 191 374 3709
 South Road, Durham DH1 3LE, UK         |  Email  david.buscher@durham.ac.uk
----------------------------------------+-------------------------------------



From asterian@eecs.umich.edu  Wed Feb 10 18:20:38 1999
From: asterian@eecs.umich.edu (Andrew Sterian)
Date: Wed, 10 Feb 1999 13:20:38 -0500 (EST)
Subject: [Matrix-SIG] FFTW
In-Reply-To: <Pine.OSF.4.05.9902101753420.11930-100000@dugem1.dur.ac.uk> from "david.buscher@durham.ac.uk" at Feb 10, 1999 06:02:00 PM
Message-ID: <199902101820.NAA12221@dip.eecs.umich.edu>

david.buscher@durham.ac.uk writes...
> > FFTW is GPL (not LGPL), which means that inclusion in any kind of
> > package leaves me out (I'm also writing commercial software in
> > python).
> 
> I know this is a side issue, but has anyone asked the authors of FFTW
> about whether they would consider LGPL'ing it? I have been `in the market'
> for a fast, flexible, open-source C implementation of multi-dimensional
> FFT's for many years, and FFTW has been the first package that has come
> along that has hit all of the requirements dead-on. The authors state
> their intention of it becoming a de-facto standard, so maybe the change of
> licence would be something they would be amenable to...

I quote from the FFTW FAQ:

  Question 1.3. Is FFTW free software?

  Starting with version 1.3, FFTW is Free Software in the technical sense
  defined by the Free Software Foundation (see Categories of Free and
  Non-Free Software), and is distributed under the terms of the GNU General
  Public License. Previous versions of FFTW were distributed without fee
  for noncommercial use, but were not technically ``free.''

  Non-free licenses for FFTW are also available that permit different
  terms of use than the GPL.

  Question 1.4. What is this about non-free licenses?

  The non-free licenses are for companies that wish to use FFTW in
  their products but are unwilling to release their software under the
  GPL (which would require them to release source code and allow free
  redistribution). Such users can purchase an unlimited-use license from
  MIT. Contact us for more details.

  We could instead have released FFTW under the LGPL, or even disallowed
  non-Free usage. Suffice it to say, however, that MIT owns the copyright to
  FFTW and they only let us GPL it because we convinced them that it would
  neither affect their licensing revenue nor irritate existing licensees.

Andrew.  asterian@umich.edu | Me: http://www-personal.umich.edu/~asterian
----------------------------+----------------------------------------------
Faith is the willingness to get out of bed in the morning and just show up...
                                                            -- Richard Thieme



From bridgman@lheapop.gsfc.nasa.gov  Fri Feb 12 21:48:08 1999
From: bridgman@lheapop.gsfc.nasa.gov (Dr. William T. Bridgman)
Date: Fri, 12 Feb 1999 16:48:08 -0500
Subject: [Matrix-SIG] Announcing the Astronomical Python mailing list
Message-ID: <v03102806b2ea32726907@[128.183.19.179]>

Why use Python in astronomy?

Many astronomers use scripting and scripting-type languages in astronomical
data analysis.  Some are very expensive.  Others lack portability between
different operating systems and processor architectures.  Still others lack
powerful methods for manipulating numerical data.  Python is free, runs on
a multitude of system architectures, and can handle numerical processing.
What Python currently lacks is a large library of fundamental routines
needed by the astronomical community.

That is the goal of this mailing list.

We want to establish a nexus for development and distribution of
Python-based  routines and other software to address the needs of the
astronomical community.  You can find out more about what's been done and
what is planned at our website (which is still under construction)

http://lheawww.gsfc.nasa.gov/~bridgman/AstroPy/

If you are interested in participating in this project, send mail to
majordomo@athena.gsfc.nasa.gov with

subscribe astropy

as the message text. You will be sent a confirmation request to which you
must respond before being added to the list.

Thanks for your attention.

--
Dr. William T."Tom" Bridgman          Raytheon STX
NASA/Goddard Space Flight Center
Code 664                              bridgman@lheapop.gsfc.nasa.gov
Greenbelt, MD 20771                   (301) 286-1346




From ransom@cfa.harvard.edu  Sun Feb 14 17:27:47 1999
From: ransom@cfa.harvard.edu (Scott M. Ransom)
Date: Sun, 14 Feb 1999 17:27:47 +0000
Subject: [Matrix-SIG] Re: AstroPy ... Re: Data Analysis
References: <l03130300b2ebb5af07f1@[209.48.225.17]>
Message-ID: <36C70793.D00A91E0@cfa.harvard.edu>

For the Matrix-SIG people out there who have not subscribed to the AstroPy
mailing list, this is a reply to a post about creating an astronomy-related
data-analysis toolkit with Python.  I have cross-posted it because I think it
is relevant to the "new" data analysis effort....

W.T. Bridgman wrote:

> 1) One of the first tasks for this project is to find out what has already
> been done by list members.

I have made a couple of small packages which might be of use to the Astro/Data
Analysis community (and I am more than willing to help out in any way I
can...)

1.  A Romberg Integrator and a "Safe" Newton-Raphson (both in pure Python)
that I wrote are available as part of Konrad Hinsen's Scientific Python
package (a rough sketch of the Romberg idea follows this list):
http://starship.skyport.net/crew/hinsen/scientific.html

2.  I have written a Python wrapper for Nick Patavalis' very nice set of
PGPLOT wrappers.  The Python wrapper lets you make very nice plots (including
2-D images) with a single interactive call (a la IDL or MATLAB).  Both the
wrapper and Nick's package are available at:
ftp://cfa0.harvard.edu/pub/ransom/

3.  Not by me, but very useful nonetheless, Gary Strangman has made a very
complete (although I do not know how well it has been tested) set of
statistics functions available at:
http://www.nmr.mgh.harvard.edu/Neural_Systems_Group/gary/python.html

4.  I have interfaced a large pulsar data analysis package that I wrote with
Python using SWIG.  I therefore have a fairly workable NumPy typemap file
(that will probably need to be edited quite extensively for general use) that
allows NumPy to use C arrays and vice versa.  If anyone wants to use this as a
starting point for a more robust typemap, I have placed it in the above FTP
site.
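
Since item 1 names the algorithms only, here is a minimal pure-Python sketch
of what a Romberg integrator does (trapezoid rule plus Richardson
extrapolation).  It is only an illustration of the idea, not the code in the
Scientific Python package, and the name and defaults are made up:

def romberg(f, a, b, tol=1.0e-8, maxiter=20):
    # Romberg integration of f over [a, b]: build trapezoid estimates
    # with 1, 2, 4, ... panels and extrapolate them.
    h = b - a
    R = [[0.5 * h * (f(a) + f(b))]]      # R[k][0]: trapezoid, 2**k panels
    for k in range(1, maxiter):
        h = 0.5 * h
        # Refine the trapezoid sum using only the new midpoints.
        total = 0.0
        for i in range(1, 2 ** (k - 1) + 1):
            total = total + f(a + (2 * i - 1) * h)
        row = [0.5 * R[k - 1][0] + h * total]
        # Richardson extrapolation across the new row of the tableau.
        for j in range(1, k + 1):
            row.append(row[j - 1] +
                       (row[j - 1] - R[k - 1][j - 1]) / (4.0 ** j - 1.0))
        R.append(row)
        if abs(R[k][k] - R[k - 1][k - 1]) < tol:
            return R[k][k]
    return R[-1][-1]

For example, romberg(lambda x: x * x, 0.0, 1.0) converges to 1/3 after a
few refinements.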

A couple other points:

I have been considering wrapping the 'C' version of SLALIB for Python.
Unfortunately, the author, P.T.Wallace (ptw@star.rl.ac.uk), only makes the 'C'
version available by email.  If we truly want this wrapped, we need to get his
permission and hopefully a method of making the most current 'C' version
available to anyone.  Does anyone know Mr. Wallace?  If not, I might try to
email him myself about this.

I believe the most important thing that needs to be done (and this applies to
the Matrix group as well) is to develop a publication-quality (fonts
especially), portable, 2-D and 3-D surface/mesh plotting routine or library.
PGPLOT is great for 2-D work in astronomy, and the Gist stuff is nice as
well, but the font support (I believe) is not there (correct me if I'm wrong);
either way, I think we have to go beyond those.  The PerlDL people have a good start
on a workable OpenGL 3-D plotting package if some Perl guru out there has a
mind to start a porting effort.  Other options could be a much expanded
TkPlotCanvas, or the new Java APIs (including Java3D).

We should probably try to keep this tied in as much as possible with the
current effort, started by Joe Harrington and others on Matrix-Sig, to create
a full-blown data-analysis environment.

Cheers,

Scott

--
Scott M. Ransom
Phone:  (580) 536-7215             Address:  703 SW Chaucer Cir.
email:  ransom@cfa.harvard.edu               Lawton, OK  73505
PGP Fingerprint: D2 0E D0 10 CD 95 06 DA  EF 78 FE 2B CB 3A D3 53





From Oliphant.Travis@mayo.edu  Mon Feb 15 06:07:39 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Mon, 15 Feb 1999 00:07:39 -0600 (EST)
Subject: [Matrix-SIG] PIL and NumPy Relationship
Message-ID: <Pine.LNX.4.04.9902150000050.3878-100000@us2.mayo.edu>

PIL and/or NumPy users:

Thanks to F Lundh and everyone else who has contributed to the PIL.  It is
a very nice piece of work and a very useful tool for those of us who
use Python to look at and display images.  

Since I have only been a Python regular for under a year, I wasn't
around when the PIL was just getting off the ground, and I have a
couple of questions about the relationship between the PIL and
Numerical Python.

I come from a background of using other data analysis environments,
like MATLAB and IDL, which treat images as arrays of numbers.  As a
result I'm a little surprised to see two different objects being
developed in Python that both serve as images: the Image object in
PIL and the multiarray object in NumPy.  I wondered if someone could
enlighten me as to any advantage of having two separate objects for
the same kind of data.

Personally, I think this creates an unnecessary dichotomy and division
of scarce resources.  I know, for example, that people have written
and will write "image processing" routines that work on the Image
object; I would like to apply those same routines to my Array
object, but will first have to get the data into an Image object and
then back into an Array object for further processing.  While not
difficult, this is not really acceptable if one is trying to use Python
as an interactive analysis environment.

In many cases one deals with an image which is simply a slice through
or projection of a larger array data set.  With a separate Image
object, all of the methods people have developed or may develop for image
processing (including loading and saving of image types) are
unavailable unless the translation is made first.

What is the possibility of unifying the PIL and NumPy a bit more?  I
know you can translate between the two objects with the fromstring and
tostring methods, but this is wasteful both in mental effort (keeping
track of the different objects) and in copying the data back and
forth.  It would be better, I think, if the PIL were built around the
multiarray object already.  I guess I'm not sure what kind of work this
would involve.  Another possibility is to have the underlying image
data be both an Image object and a Multiarray object at the same time
(same data area, different headers in the C struct?).
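
For concreteness, the round trip in question looks roughly like this for an
8-bit greyscale image.  This is only a sketch: it assumes the classic
Image.fromstring/tostring calls, Numeric's fromstring function, and that
PIL's 'L' mode corresponds to Numeric's unsigned-byte typecode 'b'.

import Image                            # the PIL Image module
from Numeric import fromstring, reshape

def image_to_array(pil_image):
    # Copy an 8-bit greyscale ('L' mode) Image into a 2-D Numeric
    # array of unsigned bytes.
    w, h = pil_image.size
    a = fromstring(pil_image.tostring(), 'b')
    return reshape(a, (h, w))

def array_to_image(a):
    # Copy a 2-D unsigned-byte Numeric array back into an 'L' mode Image.
    h, w = a.shape
    return Image.fromstring('L', (w, h), a.tostring())

Every call to tostring() makes a fresh copy of the pixel data, which is
exactly the overhead complained about above.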

So, I'm basically asking users of the PIL to tell me if and why they
feel it is necessary to have two different objects to represent images.

I don't mean to sound critical at all.  I'm just offering my
thoughts.  I basically wouldn't care at all except I find the PIL to
be such a useful addition to Python and am also a heavy user of
Numerical Python.  Thanks again to all involved in
its development.

Thanks,

Travis Oliphant



----------------------------------------------------
Travis Oliphant            200 First St SW          
    	                   Rochester MN 55905       
Ultrasound Research Lab	   (507) 286-5293           
Mayo Graduate School	   Oliphant.Travis@mayo.edu 



From Oliphant.Travis@mayo.edu  Mon Feb 15 15:48:35 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Mon, 15 Feb 1999 09:48:35 -0600 (EST)
Subject: [Matrix-SIG] Plotting in Python
Message-ID: <Pine.LNX.4.04.9902150946220.4429-100000@us2.mayo.edu>

>PGPLOT is great for 2-D work for astronomy, and the Gist stuff is nice as
>well, but the font support (I believe) is not there (correct me if I'm
>wrong),

I've used GIST enough to know that it uses PostScript fonts for its
output.  I think GIST is quite nice for plotting (the style sheets could
use better documentation, though...).  I don't know about PGPLOT yet.




From frank@ned.dem.csiro.au  Thu Feb 18 04:23:35 1999
From: frank@ned.dem.csiro.au (Frank Horowitz)
Date: Thu, 18 Feb 1999 12:23:35 +0800
Subject: [Matrix-SIG] Python Khoros Interface: deceased?
Message-ID: <36CB95C6.A35D6088@ned.dem.csiro.au>

G'Day David,

    The graphics page of the PyScience web site still lists a link
to a Khoros Interface.  I've been sporadically trying to follow that link
for at least 6 months, to no avail.  I've tried e-mailing Suorsa to ask
if there is a newer link, but have never received a reply.

    I believe that the Khoros package referred to was a SWIG job on an
early version of Khoros.  Since Khoros itself is no longer freely
available on the net (I'd love to be proven wrong on that, BTW), and
since the last freely available version of Khoros postdates the version
the Python wrapper targets (as deduced from some old messages found in
deja-news), I'm wondering whether there's any point in continuing to claim an
existing link between Python and Khoros.

    Is it time to officially pronounce the Python-Khoros interface dead?

    Cheers,
        Frank Horowitz




From Oliphant.Travis@mayo.edu  Wed Feb 24 00:14:55 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Tue, 23 Feb 1999 18:14:55 -0600 (EST)
Subject: [Matrix-SIG] Sigtools 0.40 released
In-Reply-To: <199902232251.RAA04613@eric.cnri.reston.va.us>
References: <joe-230219991406136237@chinacat.salk.edu>  <199902232251.RAA04613@eric.cnri.reston.va.us>
Message-ID: <Pine.LNX.4.04.9902231804030.193-100000@us2.mayo.edu>

I'm announcing the release of Sigtools 0.40 for Python with Numerical
extensions.

This module provides several general-purpose signal processing functions
that operate on Numerical Python arrays, supplying core functionality for
signal/image/array processing with Numerical Python.

The module currently provides four functions:

(1) an N-D convolution filter (blurring, edge-detection, etc.)

(2) an N-D order filter (a median filter is an example of this type)

(3) a linear_filter function that operates on N-D (possibly
non-contiguous) arrays along an arbitrary axis.  The linear_filter
implements any linear system with a rational/polynomial transfer function.
(It is just like the filter function in MATLAB; a sketch of the underlying
recurrence follows this list.)

(4) a Parks-McClellan algorithm for designing optimal FIR filters:  This
is just a wrapper around GPL'd code written by Jake Janovetz.
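
The recurrence behind a MATLAB-style filter is the standard difference
equation; the pure-Python sketch below is only an illustration of that
computation, not the sigtools implementation or its calling convention:

def linear_filter_1d(b, a, x):
    # Evaluate   a[0]*y[n] = b[0]*x[n] + ... + b[M]*x[n-M]
    #                        - a[1]*y[n-1] - ... - a[N]*y[n-N]
    # over the 1-D sequence x, assuming zero initial conditions.
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):
            if n - k >= 0:
                acc = acc + b[k] * x[n - k]
        for k in range(1, len(a)):
            if n - k >= 0:
                acc = acc - a[k] * y[n - k]
        y.append(acc / a[0])
    return y

For instance, linear_filter_1d([0.5, 0.5], [1.0], data) is a two-point
moving average of data.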

Tarfiles (with a Setup file) and RPMs are available at

http://oliphant.netpedia.net


--Travis Oliphant




From Oliphant.Travis@mayo.edu  Thu Feb 25 23:15:40 1999
From: Oliphant.Travis@mayo.edu (Travis Oliphant)
Date: Thu, 25 Feb 1999 17:15:40 -0600 (EST)
Subject: [Matrix-SIG] Wiener filter using sigtools and Python
Message-ID: <Pine.LNX.4.04.9902251708590.11755-100000@us2.mayo.edu>

While this routine will go into the next edition of sigtools, I thought I
would show interested people how easy it is to do image (volume)
processing with Python.

Below is a function to do Wiener filtering on an array of data.

from sigtools import *
from MLab import *

def wiener(im, mysize=None, noise=None):

    # Default to a 3-point window along every axis of im.
    if mysize is None:
        mysize = 3*ones(len(im.shape))
    mysize = array(mysize)

    # Estimate the local mean
    lMean = convolveND(im, ones(mysize), 1) / prod(mysize)

    # Estimate the local variance
    lVar = convolveND(im**2, ones(mysize), 1) / prod(mysize) - lMean**2

    # Estimate the noise power if needed.
    if noise is None:
        noise = mean(ravel(lVar))

    # Compute result
    # out = lMean + (maximum(0, lVar - noise) ./
    #               maximum(lVar, noise)) * (im - lMean)
    #
    out = im - lMean
    im = lVar - noise
    im = maximum(im, 0)
    lVar = maximum(lVar, noise)
    out = out / lVar
    out = out * im
    out = out + lMean

    return out
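
A hypothetical call, assuming `image' already holds a 2-D Numeric array:

filtered  = wiener(image)                  # 3x3 window, noise power estimated
filtered5 = wiener(image, mysize=(5, 5))   # explicit 5x5 window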




Enjoy,

--Travis




From dubois1@llnl.gov  Thu Feb 25 23:29:35 1999
From: dubois1@llnl.gov (Paul F. Dubois)
Date: Thu, 25 Feb 1999 15:29:35 -0800
Subject: [Matrix-SIG] Numeric Test on 1.5.2b2 o.k.
Message-ID: <000301be6116$b47fc600$f4160218@c1004579-c.plstn1.sfba.home.com>

FYI: I built 1.5.2b2 out of the box, then built the current Numerical by
running python makethis.py, and the Numerical test executes correctly. This
was on Linux (Red Hat 5.2).

Paul