From hinsen@dirac.cnrs-orleans.fr  Fri Oct  1 14:21:08 1999
From: hinsen@dirac.cnrs-orleans.fr (hinsen@dirac.cnrs-orleans.fr)
Date: Fri, 1 Oct 1999 15:21:08 +0200
Subject: [Matrix-SIG] performance hit in c extensions
In-Reply-To: <199909302224.RAA04282@v1.wustl.edu> (message from Heather Drury
 on Thu, 30 Sep 1999 17:24:52 -0500 (CDT))
References: <199909302224.RAA04282@v1.wustl.edu>
Message-ID: <199910011321.PAA07833@chinon.cnrs-orleans.fr>

> I have successfully written wrappers for most of my C code so
> I can call the modules directly from python. Unfortunately,
> it *appears* that the code executes about 50% slower (I haven't
> actually timed it) when called from python versus being 
> called from the command line. I would not have thought
> there would be any performance impact. 

I could imagine two reasons:

- Different compiler options. Did you use full optimization to compile
  the extension modules? The Makefile.pre.in mechanism will give you
  the same options as were used to compile Python itself, which are
  not always optimal.

- Calling overhead. If your C routines run only for a short time,
  the Python initialization and calling overhead might be significant.

In case of doubt, a profiler will tell you all you want to know
(and probably quite a bit more).
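
For example, something along these lines separates per-call overhead
from compute time (a rough sketch; "myext.compute" stands for
whatever your wrapped routine is called):

  import time, profile
  from Numeric import zeros
  import myext                      # the wrapped C extension

  a = zeros((1000,), 'd')
  t = time.clock()
  for i in range(100):
      myext.compute(a)
  print (time.clock() - t)/100, "seconds per call"

  profile.run('myext.compute(a)')   # per-function breakdown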

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From heather@v1.wustl.edu  Fri Oct  1 19:16:27 1999
From: heather@v1.wustl.edu (Heather Drury)
Date: Fri, 1 Oct 1999 13:16:27 -0500 (CDT)
Subject: [Matrix-SIG] performance hit in c extensions
In-Reply-To: <199910011321.PAA07833@chinon.cnrs-orleans.fr> from "hinsen@dirac.cnrs-orleans.fr" at Oct 1, 99 03:21:08 pm
Message-ID: <199910011816.NAA11216@v1.wustl.edu>

Hi,

Thanks for everyone's suggestions re: the performance slowdown in Python.
I've done some more experiments and here is some additional information:

I wrote a very small python test script that calls my C wrapped code (which 
is computationally expensive) with a small array and it took 2.5 minutes. 

I then called the same function from my relatively large python program
(lots of graphics & vtk stuff) and it took 13 MINUTES!!

Ouch. 

The disk is not thrashing. Over 200 meg of memory is free when
I run the program. When I call the function from my "big" program it starts
off pretty quickly and then gradually slows to a crawl. 

So, it's not python. It seems to be memory related (I guess). Is
there some limitation to the size of memory that the python
interpreter can use?

Heather
-- 

Heather Drury                               	heather@v1.wustl.edu 
Washington University School of Medicine    	http://v1.wustl.edu	
Department of Anatomy & Neurobiology         	Phone: 314-362-4325
660 S. Euclid, MS 8108                       	FAX: 314-747-4370
St. Louis, MO 63110-1093


From fyen@grossprofit.com  Sun Oct  3 20:24:06 1999
From: fyen@grossprofit.com (Felix Yen)
Date: Sun, 3 Oct 1999 15:24:06 -0400
Subject: [Matrix-SIG] Numpy Win32 Debug settings
Message-ID: <419A3E73A18BD211A17000105A1C37E90EEB19@mail.grossprofit.com>

I am having trouble building a debug configuration of Numpy with
Visual C++ 6.0.  (My application crashes when it tries to call a
dll initialization function.)  Could someone give me a working set
of .dsp files, or a brief description of the correct project settings?
VC5 files would also work.


Felix



From hinsen@cnrs-orleans.fr  Mon Oct  4 17:08:35 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Mon, 4 Oct 1999 18:08:35 +0200
Subject: [Matrix-SIG] performance hit in c extensions
In-Reply-To: <199910011816.NAA11216@v1.wustl.edu> (message from Heather Drury
 on Fri, 1 Oct 1999 13:16:27 -0500 (CDT))
References: <199910011816.NAA11216@v1.wustl.edu>
Message-ID: <199910041608.SAA11571@chinon.cnrs-orleans.fr>

> The disk is not thrashing. Over 200 meg of memory is free when
> I run the program. When I call the function from my "big" program it starts
> off pretty quickly and then gradually slows to a crawl. 
> 
> So, it's not python. It seems to be memory related (I guess). Is

Just run "top" to check the memory size of the process (assuming of
course that you are using Unix, otherwise you should look for some
equivalent tool). At the same time you get the CPU time percentage
that the job gets, which is valuable information as well.
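
If you prefer to check from within the program, something like this
prints the process size at interesting points (a sketch; the exact ps
options vary a bit between Unix variants):

  import os
  os.system('ps -o vsz= -p %d' % os.getpid())   # virtual size in KB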

> there some limitation to the size of memory that the python
> interpreter can use?

No, unless imposed by the C compiler or the C runtime library or the
OS, or whatever else. But not by Python.

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From hinsen@dirac.cnrs-orleans.fr  Tue Oct  5 10:59:08 1999
From: hinsen@dirac.cnrs-orleans.fr (hinsen@dirac.cnrs-orleans.fr)
Date: Tue, 5 Oct 1999 11:59:08 +0200
Subject: [Matrix-SIG] Bug in NumPy
Message-ID: <199910050959.LAA13850@chinon.cnrs-orleans.fr>

I just hit what I consider a bug in NumPy:

  from Numeric import *

  a = zeros((0,))
  print a
  print a[::-1]

prints:

  zeros((0,), 'l')
  [0]

I'd expect the reverse of nothing to be nothing! I also wonder where the
added element comes from - it is always zero, integer or float depending
on the array type.
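
Until this is fixed, a simple guard avoids the stray element (a
sketch):

  if len(a) > 0:
      b = a[::-1]
  else:
      b = a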

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From pfrazao@ualg.pt  Thu Oct  7 10:50:32 1999
From: pfrazao@ualg.pt (Pedro Miguel Frazao Fernandes Ferreira)
Date: Thu, 07 Oct 1999 10:50:32 +0100
Subject: [Matrix-SIG] array assignments
Message-ID: <37FC6CE8.425E9571@ualg.pt>

Hi All,

	Can anyone tell me if this is the correct behaviour:

Python 1.5.1 (#1, Dec 17 1998, 20:58:15)  [GCC 2.7.2.3] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Executing startup script
End
>>> a=arange(10)
>>> b=arange(10)
>>> a=b
>>> b[5]=10
>>> print a
[ 0  1  2  3  4 10  6  7  8  9]
>>>

	Should a and b be two different objects ?
	Should the = operator copy ?

	Thanks
-- 
------------------------------------------------------------------------
    Pedro Miguel Frazao Fernandes Ferreira, Universidade do Algarve
          U.C.E.H., Campus de Gambelas, 8000 - Faro, Portugal
pfrazao@ualg.pt     Tel.:+351 89 800950 / 872959     Fax: +351 89 818560
                     http://w3.ualg.pt/~pfrazao


From mwh21@cam.ac.uk  Thu Oct  7 11:37:45 1999
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Thu, 7 Oct 1999 11:37:45 +0100 (BST)
Subject: [Matrix-SIG] array assignments
In-Reply-To: <37FC6CE8.425E9571@ualg.pt>
Message-ID: <Pine.LNX.4.10.9910071134340.10176-100000@localhost.localdomain>

Welcome to Python!

On Thu, 7 Oct 1999, Pedro Miguel Frazao Fernandes Ferreira wrote:

> Hi All,
> 
> 	Can anyone tell me if this is the correct behaviour:
> 
> Python 1.5.1 (#1, Dec 17 1998, 20:58:15)  [GCC 2.7.2.3] on linux2
> Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
> Executing startup script
> End
> >>> a=arange(10)
> >>> b=arange(10)
> >>> a=b
> >>> b[5]=10
> >>> print a
> [ 0  1  2  3  4 10  6  7  8  9]
> >>>
> 
> 	Should a and b be two different objects ?
> 	= operator should copy ?

Absolutely not. (Well, that's a bit strong, but it is inherent in Python
that = doesn't copy).

There isn't really a "= operator" in Python. "a=b" is syntax for "bind the
name a to the object currently bound to the name b".

HTH
Michael



From jhauser@ifm.uni-kiel.de  Thu Oct  7 12:04:23 1999
From: jhauser@ifm.uni-kiel.de (Janko Hauser)
Date: Thu, 7 Oct 1999 13:04:23 +0200 (CEST)
Subject: [Matrix-SIG] array assignments
In-Reply-To: <37FC6CE8.425E9571@ualg.pt>
References: <37FC6CE8.425E9571@ualg.pt>
Message-ID: <14332.32311.770413.626729@ifm.uni-kiel.de>

Pedro Miguel Frazao Fernandes Ferreira writes:
 > Hi All,
 > 
 > 	Can anyone tell me if this is the correct behaviour:
 > 
 > Python 1.5.1 (#1, Dec 17 1998, 20:58:15)  [GCC 2.7.2.3] on linux2
 > Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
 > Executing startup script
 > End
 > >>> a=arange(10)
 > >>> b=arange(10)
 > >>> a=b
 > >>> b[5]=10
 > >>> print a
 > [ 0  1  2  3  4 10  6  7  8  9]
 > >>>
 > 
 > 	Should a and b be two different objects ?
 > 	= operator should copy ?
 > 

No and no.
>>> a=arange(10)
>>> b=a
>>> b[5]=10
>>> a
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])
>>> b
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])
>>> b=a*1. # or copy.copy(a)
>>> b[5]=5
>>> a
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])
>>> b
array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.])
>>> b=array(a,copy=1)
>>> b[5]=5
>>> a
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])
>>> b
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

The following behaviour is specific to NumPy arrays, unlike Python
lists, where b=a[:] would make a copy:

>>> b=a[:]
>>> b[5]=10
>>> a
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])
>>> b
array([ 0,  1,  2,  3,  4, 10,  6,  7,  8,  9])

HTH

__Janko


From mbolling@fysik.dtu.dk  Fri Oct 15 09:40:49 1999
From: mbolling@fysik.dtu.dk (Mikkel Bollinger)
Date: Fri, 15 Oct 1999 10:40:49 +0200 (CEST)
Subject: [Matrix-SIG] Bug in matrixmultiply
Message-ID: <Pine.LNX.4.10.9910151038320.4104-100000@laplace>

I have been using the NumPy-package and I believe I found a bug using
the function "matrixmultiply":

>>> c=asarray([[0,1,0],[0,0,0],[0,0,0]])  
>>> matrixmultiply(5,c)
array([[0, 0, 0],
       [5, 0, 0],
       [0, 0, 0]])     

and

>>> matrixmultiply(c,5)
array([[0, 5, 0],
       [0, 0, 0],
       [0, 0, 0]])

When multiplying by a constant, matrixmultiply should either raise an
error or interpret "5" as 5*unitmatrix. However, it should not
transpose the output matrix, as happens for matrixmultiply(5,c).
Regards, 
Mikkel Bollinger 

--
Mikkel Bollinger
Center for Atomic-scale Materials Physics (CAMP)
Department of Physics, Building 307,
Technical University of Denmark
DK-2800 Lyngby, Denmark

E-Mail: mbolling@fysik.dtu.dk
Phone +45 45253204



From robin@jessikat.demon.co.uk  Fri Oct 15 11:27:02 1999
From: robin@jessikat.demon.co.uk (Robin Becker)
Date: Fri, 15 Oct 1999 11:27:02 +0100
Subject: [Matrix-SIG] Bug in matrixmultiply
In-Reply-To: <Pine.LNX.4.10.9910151038320.4104-100000@laplace>
References: <Pine.LNX.4.10.9910151038320.4104-100000@laplace>
Message-ID: <0drcEDA2FwB4Ew23@jessikat.demon.co.uk>

In article <Pine.LNX.4.10.9910151038320.4104-100000@laplace>, Mikkel
Bollinger <mbolling@fysik.dtu.dk> writes
>I have been using the NumPy-package and I believe I found a bug using
>the function "matrixmultiply":
>
>>>> c=asarray([[0,1,0],[0,0,0],[0,0,0]])  
>>>> matrixmultiply(5,c)
>array([[0, 0, 0],
>       [5, 0, 0],
>       [0, 0, 0]])     
>
>and
>
>>>> matrixmultiply(c,5)
>array([[0, 5, 0],
>       [0, 0, 0],
>       [0, 0, 0]])
>
>When multiplying with a constant, matrixmultiply should either raise an
>error message or interpret "5" as 5*unitmatrix. However it should not
>transpose the output matrix as is the case for matrixmultiply(5,c).
>Regards, 
>Mikkel Bollinger 
>
Yes, I agree. Interestingly:
>>> c*5
array([[0, 5, 0],
       [0, 0, 0],
       [0, 0, 0]])
>>> 5*c
array([[0, 5, 0],
       [0, 0, 0],
       [0, 0, 0]])
>>> 
-- 
Robin Becker


From robin@jessikat.demon.co.uk  Fri Oct 15 12:20:32 1999
From: robin@jessikat.demon.co.uk (Robin Becker)
Date: Fri, 15 Oct 1999 12:20:32 +0100
Subject: [Matrix-SIG] Bug in matrixmultiply
In-Reply-To: <Pine.LNX.4.10.9910151038320.4104-100000@laplace>
References: <Pine.LNX.4.10.9910151038320.4104-100000@laplace>
Message-ID: <pEsrWHAA4wB4EwUP@jessikat.demon.co.uk>

In article <Pine.LNX.4.10.9910151038320.4104-100000@laplace>, Mikkel
Bollinger <mbolling@fysik.dtu.dk> writes
>I have been using the NumPy-package and I believe I found a bug using
>the function "matrixmultiply":
>
>>>> c=asarray([[0,1,0],[0,0,0],[0,0,0]])  
>>>> matrixmultiply(5,c)
>array([[0, 0, 0],
>       [5, 0, 0],
>       [0, 0, 0]])     
>
>and
>
>>>> matrixmultiply(c,5)
>array([[0, 5, 0],
>       [0, 0, 0],
>       [0, 0, 0]])
>
>When multiplying with a constant, matrixmultiply should either raise an
>error message or interpret "5" as 5*unitmatrix. However it should not
>transpose the output matrix as is the case for matrixmultiply(5,c).
>Regards, 
>Mikkel Bollinger 
>
Closer checking reveals that in Numeric.py:

#This is obsolete, don't use in new code
matrixmultiply = dot

so it's correct for a dot product.
-- 
Robin Becker


From jody@sccsi.com  Fri Oct 15 17:40:26 1999
From: jody@sccsi.com (Jody Winston - Computer)
Date: Fri, 15 Oct 1999 11:40:26 -0500 (CDT)
Subject: [Matrix-SIG] Support for long longs
Message-ID: <199910151640.LAA05921@gateway.infohwy.com>

I'm in the middle of wrapping HDF5 using SWIG and numeric and I've run
into a problem.  HDF5 makes extensive use of signed and unsigned long
long data types.  After a quick look at numeric, it appears that
numeric only supports signed longs (PyArray_LONG).

Should I add signed and unsigned long longs to numeric?  If so, what
version of numeric should I start from and are there any surprises
that I need to know about?
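
For what it's worth, a double holds integers exactly only up to 2**53,
so reading the values into PyArray_DOUBLE arrays as a stopgap is not
safe in general; a quick check:

>>> x = 2L**53
>>> long(float(x)) == x
1
>>> long(float(x + 1)) == x + 1
0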

Jody



From dubois1@llnl.gov  Fri Oct 15 19:02:21 1999
From: dubois1@llnl.gov (Paul F. Dubois)
Date: Fri, 15 Oct 1999 11:02:21 -0700
Subject: [Matrix-SIG] Bug in matrixmultiply
References: <Pine.LNX.4.10.9910151038320.4104-100000@laplace> <0drcEDA2FwB4Ew23@jessikat.demon.co.uk>
Message-ID: <001601bf1737$6de927e0$3c810b18@plstn1.sfba.home.com>

My opinion is that the behavior of matrixmultiply is a bug; it should raise
an exception.
The behavior Robin notes for plain * is correct and intentional.

----- Original Message -----
From: Robin Becker <robin@jessikat.demon.co.uk>
To: <matrix-sig@python.org>
Sent: Friday, October 15, 1999 3:27 AM
Subject: Re: [Matrix-SIG] Bug in matrixmultiply


> In article <Pine.LNX.4.10.9910151038320.4104-100000@laplace>, Mikkel
> Bollinger <mbolling@fysik.dtu.dk> writes
> >I have been using the NumPy-package and I believe I found a bug using
> >the function "matrixmultiply":
> >
> >>>> c=asarray([[0,1,0],[0,0,0],[0,0,0]])
> >>>> matrixmultiply(5,c)
> >array([[0, 0, 0],
> >       [5, 0, 0],
> >       [0, 0, 0]])
> >
> >and
> >
> >>>> matrixmultiply(c,5)
> >array([[0, 5, 0],
> >       [0, 0, 0],
> >       [0, 0, 0]])
> >
> >When multiplying with a constant, matrixmultiply should either raise an
> >error message or interpret "5" as 5*unitmatrix. However it should not
> >transpose the output matrix as is the case for matrixmultiply(5,c).
> >Regards,
> >Mikkel Bollinger
> >
> yes I agree. Interestingly
> >>> c*5
> array([[0, 5, 0],
>        [0, 0, 0],
>        [0, 0, 0]])
> >>> 5*c
> array([[0, 5, 0],
>        [0, 0, 0],
>        [0, 0, 0]])
> >>>
> --
> Robin Becker
>



From dubois1@llnl.gov  Fri Oct 15 19:07:35 1999
From: dubois1@llnl.gov (Paul F. Dubois)
Date: Fri, 15 Oct 1999 11:07:35 -0700
Subject: [Matrix-SIG] Support for long longs
References: <199910151640.LAA05921@gateway.infohwy.com>
Message-ID: <001c01bf1738$289dc960$3c810b18@plstn1.sfba.home.com>

David is trying to find the time to make release 13. It would be nice if you
could wait and work from that.
My reservation is that people keep writing in wanting support for one
type or another.  I'm worried about the consequences for both the complexity of
the package and the job clients face.

----- Original Message -----
From: Jody Winston - Computer <jody@gateway.infohwy.com>
To: <matrix-sig@python.org>
Sent: Friday, October 15, 1999 9:40 AM
Subject: [Matrix-SIG] Support for long longs


> I'm in the middle of wrapping HDF5 using SWIG and numeric and I've run
> into a problem.  HDF5 makes extensive use of signed and unsigned long
> long data types.  After a quick look at numeric, it appears that
> numeric only supports signed longs (PyArray_LONG).
>
> Should I add signed and unsigned long longs to numeric?  If so, what
> version of numeric should I start from and are there any surprises
> that I need to know about.
>
> Jody
>
>



From skaller@maxtal.com.au  Thu Oct 21 18:19:18 1999
From: skaller@maxtal.com.au (skaller)
Date: Fri, 22 Oct 1999 03:19:18 +1000
Subject: [Matrix-SIG] Numpy in Viper
Message-ID: <380F4B16.8DC151C2@maxtal.com.au>

Hi. I have reached the stage of development of Viper,
my Python interpreter/compiler, where I'm considering
a built-in array type. I've been looking at NumPy,
which seems pretty good, and now have some questions.

First, my idea was to add FISh 1 style arrays to Viper.
FISh is the most advanced array processing language available;
it uses shape analysis to generate optimal code, and is
suited to parallelization. I happened to write the FORTRAN
back end for FISh 1, and the experience of using ocaml,
the implementation language, is partly responsible for
my trying my hand at an ocaml-based Python compiler.

FISh arrays are declared like:

	{2,3 : 4, 7 : float_shape }

which means a 2 x 3 array of 4 x 7 arrays of
float-shaped slots.  This is similar to a NumPy
shaped array _except_ that it is also possible to
have the type of an array element be another array.
[This is vital]

FISh uses combinators to manipulate
the shape, based on category theory, so that
a new, more powerful kind of polymorphism is made
available. To illustrate, consider the 'map'
function. When you map a 2 x 3 array of
floats, the argument function is a function
from floats to something, the result is a 2 x 3
array of somethings.

Suppose you want to map the 2 x 3 array to an
array of 2 elements, each being the sum of the
three row elements. (or is that columns? Anyhow ..)

We need to 'reshape' the array so we can use
the accumulation function. We can do this by
changing the shape from

	{ 2,3 : float_shape }

to

	{ 2 : 3 : float_shape }

and now we can map accumulate over the array,
which is a ONE dimensional array with 2 elements,
to get a ONE dimensional array with 2 elements.

This mechanism applies to all array functions
like reduce (fold_left), etc. It allows one to
write functions (like map) which work on any
kind of array at 'any level'.
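
[If I read the NumPy specs right, Numeric spells the example above
with an axis argument rather than a shape change; a sketch:

	from Numeric import *
	a = reshape(arange(6), (2, 3))
	sums = add.reduce(a, 1)	# a ONE dimensional array of 2 sums
]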

I hope I have explained the idea well enough.
The fundamental difference to NumPy is that

	a) array elements are arrays
	b) shapes require a tuple of tuples to represent them


Now, the reason I am posting is that compatibility
and utility are, as usual, in conflict. My plan for Viper,
for example, is that arrays be built-in, and standard.
No module: arrays become fundamental types. The reason is
that the compiler can then do optimisations.

I do not think I can achieve FISh quality optimisation
because of the non-statically typed nature of Python;
however, some good optimisations should be possible.

Any comments will be appreciated. You can check out
FISh 1 at:

	http://www-staff.mcs.uts.edu.au/~cbj/FISh/

-- 
John Skaller, mailto:skaller@maxtal.com.au
1/10 Toxteth Rd Glebe NSW 2037 Australia
homepage: http://www.maxtal.com.au/~skaller
downloads: http://www.triode.net.au/~skaller


From alawhead@vcn.bc.ca  Thu Oct 21 23:09:40 1999
From: alawhead@vcn.bc.ca (Alexander Lawhead)
Date: Thu, 21 Oct 1999 15:09:40 -0700 (PDT)
Subject: [Matrix-SIG] Gordon MacMillan's installer and NumPy?
Message-ID: <Pine.GSO.4.10.9910211501560.11660-100000@vcn.bc.ca>

Has anyone out there used Gordon MacMillan's installer to create
executables under Win9X (ones that use NumPy)? If so, could you provide an
example of a .cfg file? I've tried some simple scripts, but there seems
to be a conflict with multiarray.pyd.

Thanks,

Alexander



From hinsen@cnrs-orleans.fr  Fri Oct 22 18:31:35 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Fri, 22 Oct 1999 19:31:35 +0200
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <380F4B16.8DC151C2@maxtal.com.au> (message from skaller on Fri,
 22 Oct 1999 03:19:18 +1000)
References: <380F4B16.8DC151C2@maxtal.com.au>
Message-ID: <199910221731.TAA05727@chinon.cnrs-orleans.fr>

> FISh arrays are declared like:
> 
> 	{2,3 : 4, 7 : float_shape }
> 
> which means 2 x 3 array of 4 x 7 arrays of
> float shaped slots.  This is similar to a NumPy
> shaped array _except_ that it is also possible to
> have the type of an array element be another array.
> [This is vital]

I don't see the essential difference; it looks like a matter of
interpretation to me. In NumPy you would create an array of shape (2,
3, 4, 7), and you could then interpret it as an array of shape (2, 3)
with elements that are arrays of shape (4, 7). NumPy indexing allows
you not to specify the last indices, so if a is an array with the
shape shown above, then a[1, 0] is legal and returns an array of shape
(4, 7).
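
In a transcript (a sketch):

  >>> from Numeric import zeros
  >>> a = zeros((2, 3, 4, 7))
  >>> a[1, 0].shape
  (4, 7)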

The interpretation becomes important only in some of the more
complicated structural operations. In the early stages of NumPy
design, I proposed to adopt the convention of J (an array language
designed by Kenneth Iverson, who is also the inventor of APL), in
which each operation has a (modifiable) rank that indicates how it
interprets its arguments' dimensions. For example, an operation of
rank 2 applied to the array a would treat it as a (2,3) array of (4,
7) arrays, whereas an operation of rank 1 would treat it as a (2, 3,
4) array of (7,) arrays. If I understand your description of FISh
correctly, it uses a similar approach but makes the interpretation
part of the arrays instead of the operations. Anyway, there was no
majority for such a general scheme, and NumPy uses axis indices for
structural operations. And I have to admit that I could not
present any practical example for which the more general J-style
approach would have been better. So I'd like to ask the same
question about FISh: where does the difference matter?

> Now, the reason I am posting is that compatibility
> and utility are, as usual, in conflict. My plan for Viper,
> for example, is that arrays be built-in, and standard
> No module: arrays become fundamental types. The reason is
> that the compiler can then do optimisations.

That is definitely a good idea.

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From skaller@maxtal.com.au  Mon Oct 25 19:04:34 1999
From: skaller@maxtal.com.au (skaller)
Date: Tue, 26 Oct 1999 04:04:34 +1000
Subject: [Matrix-SIG] Numpy in Viper
References: <380F4B16.8DC151C2@maxtal.com.au> <199910221731.TAA05727@chinon.cnrs-orleans.fr>
Message-ID: <38149BB2.28B8AD90@maxtal.com.au>

Konrad Hinsen wrote:
> 
> > FISh arrays are declared like:
> >
> >       {2,3 : 4, 7 : float_shape }
> >
> > which means 2 x 3 array of 4 x 7 arrays of
> > float shaped slots.  This is similar to a NumPy
> > shaped array _except_ that it is also possible to
> > have the type of an array element be another array.
> > [This is vital]
> 
> I don't see the essential difference; it looks like a matter of
> interpretation to me. 

	Sure. That's probably correct. But 'interpretation'
is the key.

>In NumPy you would create an array of shape (2,
> 3, 4, 7), and you could then interpret it as an array of shape (2, 3)
> with elements that are arrays of shape (4, 7). NumPy indexing allows
> you not to specify the last indices, so if a is an array with the
> shape shown above, then a[1, 0] is legal and returns an array of shape
> (4, 7).

	Functional operations, such as 'map' and 'reduce'
do not use indexing; that is, they represent loops, but
the loops are not written explicitly. Therefore, the 'indexing'
scheme must be embodied in the data structure they are applied to.
 
> The interpretation becomes important only in some of the more
> complicated structural operations. 

	I believe you have this backwards (technically):
it is important in the LEAST complicated, and most fundamental
operations.

	These 'operations' tend to be 'trivial' structural isomorphisms,
such as shape reinterpretation.

>In the early stages of NumPy
> design, I proposed to adopt the convention of J (an array language
> designed by Kenneth Iverson, who is also the inventor of APL), 

	.. and who wants to use the FISh engine to 
implement J better.

>in
> which each operation has a (modifiable) rank that indicates how it
> interprets its arguments' dimensions. For example, an operation of
> rank 2 applied to the array a would treat it as a (2,3) array of (4,
> 7) arrays, whereas an operation of rank 1 would treat it as a (2, 3,
> 4) array of (7,) arrays. If I understand your description of FISh
> correctly, it uses a similar approach but makes the interpretation
> part of the arrays instead of the operations. 

	Not quite: things that reinterpret the shape are also 'operations'.
In fact, the key to the design is that things like:

	map undim array # 'undim' removes a dimension

can be interpreted in two ways:

	(map undim) array

and

	map (undim array)

That the 'map' and 'undim' operations commute with array like this
is the essential point: (map undim) is a new operation, and
(undim array) is a new data structure.

> Anyway, there was no
> majority for such a general scheme, and NumPy uses axis indices for
> structural operations. And I have to admit that I could not
> present any practical example for which the more general J-style
> approach would have been better. So I'd like to ask the same
> question about FISh: where does the difference matter?

	Polymorphism. In 'classic' functional programming
languages, 'map' is type polymorphic in the sense that in

	map f list

f is a function 

	f: 'a -> 'b

where list is a list  of 'a, and the result is a list of 'b,
the point being that 'map' will accept a pair of arguments
agreeing on 'a and 'b as indicated, for _any_ 'a and 'b.
So 'map' is 'polymorphic'.  But it is ONLY very weakly
polymorphic: it is type polymorphic. That 'map' function
on lists will ONLY work with lists. So in a typical
functional language, there is another function
'array_map' for mapping arrays, and 'btree_map' for
binary trees. And for arrays, there need to be multiple
versions of map -- hundreds of them. One for every 
pair of integers (rank, depth) where depth <= rank,
so that you can map the first 'depth' dimensions
of a matrix of some greater or equal rank.

	This is necessary, because map
needs to know 'how deep' to go into the data structure.

	The point of FISh is that there is exactly
one version of map -- in FISh 1, it works on arrays
of arbitrary rank, for the whole array, and this is
enough, because of the rank transforming combinators.
[The same applies to all such functions like fold/reduce, etc.]


	What does this mean? It means you can write a 
program which works on matrices of ANY rank: that is,
a rank polymorphic program. Just as
you can write programs for 2D matrices of any dimensions.
This CANNOT be done using indexing, because 
each extra rank requires another loop written in
the program.

	I hope my rather poor explanation is clear.
Have a look at FISh for the designer's explanation.

-- 
John Skaller, mailto:skaller@maxtal.com.au
1/10 Toxteth Rd Glebe NSW 2037 Australia
homepage: http://www.maxtal.com.au/~skaller
downloads: http://www.triode.net.au/~skaller


From hinsen@cnrs-orleans.fr  Wed Oct 27 16:35:05 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Wed, 27 Oct 1999 17:35:05 +0200
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <38149BB2.28B8AD90@maxtal.com.au> (message from skaller on Tue,
 26 Oct 1999 04:04:34 +1000)
References: <380F4B16.8DC151C2@maxtal.com.au> <199910221731.TAA05727@chinon.cnrs-orleans.fr> <38149BB2.28B8AD90@maxtal.com.au>
Message-ID: <199910271535.RAA26773@chinon.cnrs-orleans.fr>

> 	Functional operations, such as 'map' and 'reduce'
> do not use indexing; that is, they represent loops, but
> the loops are not written explicitly. Therefore, the 'indexing'
> scheme must be embodied in the data structure they are applied to.

Not necessarily; in J, you would add a modifier to the map operation
to give it some specific rank. On the other hand, it often makes sense
to attach this information to the data, because it rarely changes in
the course of some computation.

> 	Not quite: things that reinterpret the shape are also 'operations'.
> In fact, the key to the design is that things like:
> 
> 	map undim array # 'undim' removes a dimension
> 
> can be interpreted in two ways:
> 
> 	(map undim) array
> 
> and
> 
> 	map (undim array)
> 
> That the 'map' and 'undim' operations commute with array like this
> is the essential point: (map undim) is a new operation, and
> (undim array) is a new data structure.

OK, then the (map undim) interpretation is the J way of doing it. In
NumPy, you can only modify the array shape. For mapping a
two-dimensional object, you have to reshape the array, rolling the
mapped dimensions into one, then apply map(), and then reshape again.
A more general map() could be implemented without breaking any
compatibility, but I don't see an easy way to provide a general
mechanism for deriving structural operations.
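
For instance, summing over the last axis of a (2, 3, 4) array that
way (a sketch):

  >>> from Numeric import *
  >>> a = reshape(arange(24), (2, 3, 4))
  >>> flat = reshape(a, (6, 4))             # roll (2, 3) into one axis
  >>> sums = map(lambda v: add.reduce(v), flat)
  >>> reshape(sums, (2, 3))
  array([[ 6, 22, 38],
         [54, 70, 86]])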


> > approach would have been better. So I'd like to ask the same
> > question about FISh: where does the difference matter?
> 
> 	Polymorphism. In 'classic' functional programming
> languages, 'map' is type polymorphic in the sense that in

Fine, but the question was about a *practical* application, not
one of principle or elegance (which doesn't mean I don't care,
but it's a good idea to keep in mind that programming languages
are made to solve real-life problems).

> 	What does this mean? It means you can write a program which
> works on matrices of ANY rank: that is, a rank polymorphic program.
> Just as you can write programs for 2D matrices of any dimensions.
> This CANNOT be done using indexing, because each extra rank requires
> another loop written in the program.

Still you can write rank polymorphic code in NumPy, using the approach
I mentioned above. I even did it. I wish it were easier, sure, but
being possible is already quite good, compared to (shudder) Fortran
programming...

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From skaller@maxtal.com.au  Thu Oct 28 07:03:07 1999
From: skaller@maxtal.com.au (skaller)
Date: Thu, 28 Oct 1999 16:03:07 +1000
Subject: [Matrix-SIG] Numpy in Viper
References: <380F4B16.8DC151C2@maxtal.com.au> <199910221731.TAA05727@chinon.cnrs-orleans.fr> <38149BB2.28B8AD90@maxtal.com.au> <199910271535.RAA26773@chinon.cnrs-orleans.fr>
Message-ID: <3817E71B.5800CDEF@maxtal.com.au>

Konrad Hinsen wrote:
> 
> >       Functional operations, such as 'map' and 'reduce'
> > do not use indexing; that is, they represent loops, but
> > the loops are not written explicitly. Therefore, the 'indexing'
> > scheme must be embodied in the data structure they are applied to.
> 
> Not necessarily; in J, you would add a modifier to the map operation
> to give it some specific rank. 

	You are right. In fact, the very point of FISh is that
the _same_ modifier can equally well be applied to either the
data _or_ to the operator. Indeed, it is the 'commutativity'
which is the heart of the strengthened polymorphism FISh makes
available.

> OK, then the (map undim) interpretation is the J way of doing it. In
> NumPy, you can only modify the array shape. For mapping a
> two-dimensional object, you have to reshape the array, rolling the
> mapped dimensions into one, then apply map(), and then reshape again.
> A more general map() could be implemented without breaking any
> compatibility, but I don't see an easy way to provide a general
> mechanism for deriving structural operations.

	I already know how to do this; the issue here
is compatibility. There is no doubt the FISh functionality
is superior _technically_. However, compatibility is always
very important.

	The questions are:

	a) is it worth the effort for Python to provide native arrays?

If YES: (my belief)

	b) should NumPy be ignored in favour of the FISh mechanism?

If NO: (my belief)

	c) should NumPy be supported but deprecated?

If NO: (I am not sure: hence questions on this forum)

	d) Should we attempt to make the two mechanisms compatible?

It is my belief that (d) would be the best option IF it is, in fact
possible. (c) is the next best option.

> >       Polymorphism. In 'classic' functional programming
> > languages, 'map' is type polymorphic in the sense that in
> 
> Fine, but the question was about a *practical* application, not
> one of principle or elegance (which doesn't mean I don't care,
> but it's good idea to keep in mind that programing languages
> are made to solve real-life problems).

	Sigh. Would you go back to writing machine code?
What is the 'practical' application of a high level language?

	You are asking the same question. I'm sure you already
know the answer. It is why you write Python, not C or Fortran.
Elegance is vital. It is the very heart of computing.
It is, indeed, just another word for abstraction :-)

-- 
John Skaller, mailto:skaller@maxtal.com.au
1/10 Toxteth Rd Glebe NSW 2037 Australia
homepage: http://www.maxtal.com.au/~skaller
downloads: http://www.triode.net.au/~skaller


From pearu@ioc.ee  Thu Oct 28 08:32:38 1999
From: pearu@ioc.ee (Pearu Peterson)
Date: Thu, 28 Oct 1999 10:32:38 +0300 (EETDST)
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <3817E71B.5800CDEF@maxtal.com.au>
Message-ID: <Pine.HPX.4.05.9910281010260.3435-100000@egoist.ioc.ee>


On Thu, 28 Oct 1999, skaller wrote:

> 	The questions are:
> 
> 	a) is it worth the effort for Python to provide native arrays
> 
> If YES: (my belief)
> 
> 	b) should NumPy be ignored in favour of the FISh mechanism?
> 
> If NO: (my belief)
> 
> 	c) should NumPy be supported but deprecated?
> 
> If NO: (I am not sure: hence questions on this forum)
> 
> 	d) Should we attempt to make the two mechanisms compatible
> 
> It is my belief that (d) would be the best option IF it is, in fact
> possible. (c) is the next best option.

Hi!

I am not familiar with this J stuff, but I do have some questions:
First, can you give some (rough) estimate of how it would affect
performance?
Second, how would it affect writing Py/C APIs?
I presume that the notion of contiguousness is there, right?

BTW, I do like elegance too ;)

Regards,
	Pearu



From hinsen@cnrs-orleans.fr  Thu Oct 28 17:50:02 1999
From: hinsen@cnrs-orleans.fr (Konrad Hinsen)
Date: Thu, 28 Oct 1999 18:50:02 +0200
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <3817E71B.5800CDEF@maxtal.com.au> (message from skaller on Thu,
 28 Oct 1999 16:03:07 +1000)
References: <380F4B16.8DC151C2@maxtal.com.au> <199910221731.TAA05727@chinon.cnrs-orleans.fr> <38149BB2.28B8AD90@maxtal.com.au> <199910271535.RAA26773@chinon.cnrs-orleans.fr> <3817E71B.5800CDEF@maxtal.com.au>
Message-ID: <199910281650.SAA27298@chinon.cnrs-orleans.fr>

> 	You are right. In fact, the very point of FISh is that
> the _same_ modifier can equally well be applied to either the
> data _or_ to the operator. Indeed, it is the 'commutativity'
> which is the heart of the strengthened polymorphism FISh makes
> available.

OK, that definitely sounds useful.

> 	a) is it worth the effort for Python to provide native arrays

Yes!

> 	b) should NumPy be ignored in favour of the FISh mechanism?

No. There is just too much NumPy code around.

> 	c) should NumPy be supported but deprecated?

That's a difficult question. It depends on how much better and
how incompatible an alternative scheme would be (and I don't
even know how to measure these quantities).

> 	d) Should we attempt to make the two mechanisms compatible
> 
> It is my belief that (d) would be the best option IF it is, in fact
> possible. (c) is the next best option.

Of course (d) is ideal. And why shouldn't it be possible? It shouldn't
be difficult to implement the current NumPy functionality in terms
of some more advanced approach.

Konrad.
-- 
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hinsen@cnrs-orleans.fr
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
-------------------------------------------------------------------------------


From skaller@maxtal.com.au  Fri Oct 29 05:48:01 1999
From: skaller@maxtal.com.au (skaller)
Date: Fri, 29 Oct 1999 14:48:01 +1000
Subject: [Matrix-SIG] Numpy in Viper
References: <Pine.HPX.4.05.9910281010260.3435-100000@egoist.ioc.ee>
Message-ID: <38192701.FEF905F2@maxtal.com.au>

Pearu Peterson wrote:

[Viper/NumPy]

> >       d) Should we attempt to make the two mechanisms compatible
> >
> > It is my belief that (d) would be the best option IF it is, in fact
> > possible. (c) is the next best option.
> 
> Hi!
> 
> I am not familiar with this J stuff but I do have some questions:
> First, can you give some (rough) estimate how it would affect the
> performance?

	No. I can tell you about FISh 1 itself: quicksort,
generated as C, is faster than qsort by a significant percentage
(some figures are available from the FISh home page).

	When it comes to Viper's built-in array handling,
as opposed to NumPy, it will depend a lot on the implementation,
in two ways: first, on the low-level data structures: NumPy
is probably optimal there, and a native ocaml version can
only be slower.

	On the other hand, when it comes to higher level
constructions, NumPy is grossly inefficient. This is not
so much a fault of NumPy, but rather due to the limited ability
of the Python compiler to optimise it.

	The typical problem for matrix operations is
the creation of spurious temporaries; for example,
consider:

	a = b + c + d + e - f

This can be done without any temporary storage at all,
assuming the assignment to 'a' is destructive.

In addition, there is a fairly large overhead in adding
successive pairs, compared to adding all 5 corresponding
components simultaneously.
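
[In Numeric, as I understand it, the ufuncs accept an optional output
array, which lets one remove such temporaries by hand; a sketch,
assuming the arrays already exist with the right shape and type:

	from Numeric import zeros, ones, add, subtract
	b = c = d = e = f = ones((1000,), 'd')
	a = zeros((1000,), 'd')
	add(b, c, a)		# a = b + c, no temporary
	add(a, d, a)		# accumulate in place
	add(a, e, a)
	subtract(a, f, a)	# now a holds b + c + d + e - f
]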

[Viper currently does binary operations here, as for
other types, but it recognizes the special case
already, and it won't be too hard to optimise.]

> Second, how it would affect writing Py/C API's?
> I'll presume that the notion of contiguousness is there, right?

	At present, Viper does not support the CPython API at all.
It may, perhaps, do so, if a Python C API compatible back end
is fitted to the compiler. [At present, there is no compiler,
nor any decision on the first back end to write]

	The question is: why would a _numerical_ programmer
want a C API at all, if they had a high performance compiler?
[The answer 'compatibility' is obvious. Any other?]

-- 
John Skaller, mailto:skaller@maxtal.com.au
1/10 Toxteth Rd Glebe NSW 2037 Australia
homepage: http://www.maxtal.com.au/~skaller
downloads: http://www.triode.net.au/~skaller


From skaller@maxtal.com.au  Fri Oct 29 06:11:10 1999
From: skaller@maxtal.com.au (skaller)
Date: Fri, 29 Oct 1999 15:11:10 +1000
Subject: [Matrix-SIG] Numpy in Viper
References: <380F4B16.8DC151C2@maxtal.com.au> <199910221731.TAA05727@chinon.cnrs-orleans.fr> <38149BB2.28B8AD90@maxtal.com.au> <199910271535.RAA26773@chinon.cnrs-orleans.fr> <3817E71B.5800CDEF@maxtal.com.au> <199910281650.SAA27298@chinon.cnrs-orleans.fr>
Message-ID: <38192C6E.622D20C1@maxtal.com.au>

Konrad Hinsen wrote:

> >       d) Should we attempt to make the two mechanisms compatible
> >
> > It is my belief that (d) would be the best option IF it is, in fact
> > possible. (c) is the next best option.
> 
> Of course (d) is ideal. And why shouldn't it be possible? 

	I don't know: I am not familiar with NumPy; I have a copy
of the specs, but have never used it. I have not attempted to implement
native arrays for Viper yet, so I just don't know.

>It shouldn't
> be difficult to implement the current NumPy functionality in terms
> of some more advanced approach.

	Ostensibly that is true, but in practice, it may
turn out to present difficulties I'm unaware of, and it may
just be a whole lot of work.

	I really don't know. I am hoping to gain some idea
before I start on the implementation. I also do not know how
far I can carry the FISh arrays through: the 'concrete
language' FISh uses is ML-like, with static typing,
which is not the same as the Python concrete language.

	The sort of thing I'm hoping for is that
an expression like a + b * c / d * h(q) ^ d,
all variables being arrays, will be evaluated by
the FISh engine, with C code generated, dynamically
linked, and executed -- resulting in optimal performance.
FISh can do more, since, by using lambda calculus,
all functions are inlined, resulting in a single
expression which is evaluated as above. It is not
clear how far I can go with Python without static
typing (this depends on how successful the type
inference is).

-- 
John Skaller, mailto:skaller@maxtal.com.au
1/10 Toxteth Rd Glebe NSW 2037 Australia
homepage: http://www.maxtal.com.au/~skaller
downloads: http://www.triode.net.au/~skaller


From tim_one@email.msn.com  Fri Oct 29 07:56:20 1999
From: tim_one@email.msn.com (Tim Peters)
Date: Fri, 29 Oct 1999 02:56:20 -0400
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <38192701.FEF905F2@maxtal.com.au>
Message-ID: <000501bf21da$b45f6100$af2d153f@tim>

[John Skaller asks ...]
> ...
> 	The question is: why would a _numerical_ programmer
> want a C API at all, if they had a high performance compiler?
> [The answer 'compatibility' is obvious. Any other?]

In my experience, largely because proper numerical analysis (i.e., surviving
life with floating point) is very difficult in large-scale work, and if you
can leverage off a widely thought-to-be robust numeric library, life is
correspondingly that much easier and safer.

In another vein, I've written many pieces of hyper-optimized numeric
libraries for HW vendors over the years, and these days I wouldn't even
consider it without full access to HW-level 754 knobs, and even a bit of
inline assembler.  Almost all vendors supply hooks for both in C (although
different hooks from different vendors).

IOW, to avoid reinventing more-than-likely poorer wheels, but also to make
it possible to create great new wheels when necessary.

half-the-joy-of-a-c-api-is-that-it-bridges-to-a-fortran-one-ly y'rs  - tim




From jsaenz@lcdx00.wm.lc.ehu.es  Fri Oct 29 08:46:00 1999
From: jsaenz@lcdx00.wm.lc.ehu.es (Jon Saenz)
Date: Fri, 29 Oct 1999 09:46:00 +0200 (MET DST)
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <38192701.FEF905F2@maxtal.com.au>
Message-ID: <Pine.OSF.3.95.991029094130.24069B-100000@lcdx00.wm.lc.ehu.es>

On Fri, 29 Oct 1999, skaller wrote:

> Pearu Peterson wrote:
[....]
> 	The question is: why would a _numerical_ programmer
> want a C API at all, if they had a high performance compiler?
> [The answer 'compatibility' is obvious. Any other?]
> 
Probably, neither Viper nor any other high-level language is able to
provide a routine or set of routines to handle ALL the tasks that ALL the
programmers (even numerical ones) in the world intend to use.
Nobody is able to foresee all the uses of the language. Otherwise,
MS Excel would be nice for all of us ;-DDD

Access to individual elements by means of a C API seems important to me.

Jon Saenz.				| Tfno: +34 946012470
Depto. Fisica Aplicada II               | Fax:  +34 944648500
Facultad de Ciencias.   \\ Universidad del Pais Vasco \\
Apdo. 644   \\ 48080 - Bilbao  \\ SPAIN





From robin@jessikat.demon.co.uk  Fri Oct 29 11:42:25 1999
From: robin@jessikat.demon.co.uk (Robin Becker)
Date: Fri, 29 Oct 1999 11:42:25 +0100
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <38192C6E.622D20C1@maxtal.com.au>
References: <380F4B16.8DC151C2@maxtal.com.au>
 <199910221731.TAA05727@chinon.cnrs-orleans.fr>
 <38149BB2.28B8AD90@maxtal.com.au>
 <199910271535.RAA26773@chinon.cnrs-orleans.fr>
 <3817E71B.5800CDEF@maxtal.com.au>
 <199910281650.SAA27298@chinon.cnrs-orleans.fr>
 <38192C6E.622D20C1@maxtal.com.au>
Message-ID: <YKlnhEARoXG4EwaL@jessikat.demon.co.uk>

I'm seeing a lot about viper both here and in clp. Is viper an open
source, research or commercial project? Will it be seeing the light of
day in some testable form soon?
-- 
Robin Becker


From phil@geog.ubc.ca  Fri Oct 29 17:28:32 1999
From: phil@geog.ubc.ca (Phil Austin)
Date: Fri, 29 Oct 1999 09:28:32 -0700 (PDT)
Subject: [Matrix-SIG] Numpy in Viper
In-Reply-To: <000501bf21da$b45f6100$af2d153f@tim>
References: <38192701.FEF905F2@maxtal.com.au>
 <000501bf21da$b45f6100$af2d153f@tim>
Message-ID: <14361.52016.947318.641049@brant.geog.ubc.ca>

 > [John Skaller asks ...]
 > > ...
 > > 	The question is: why would a _numerical_ programmer
 > > want a C API at all, if they had a high performance compiler?
 > > [The answer 'compatibility' is obvious. Any other?]
 > 

Note that any successful numerical language is also going to have to be
able to generate data in some portable format (of which there are
dozens), and interface cleanly to graphics APIs like OpenGL.

It's also important to remember how collaborative modern science is.
I'm currently working with a dozen colleagues (plus their students) at
five institutions, all actively developing and sharing thousands of
lines of (mainly Fortran) code.  The likelihood that any of them will
learn Python, not to mention Viper, in the next 2-3 years is near
zero.


_______________
Phil Austin		INTERNET: paustin@eos.ubc.ca
(604) 822-2175		FAX:	  (604) 822-6150

Associate Professor
Atmospheric Sciences Programme
Department of Earth and Ocean Sciences
University of British Columbia






From patrick@dante.alexa.com  Fri Oct 29 20:06:16 1999
From: patrick@dante.alexa.com (Patrick Tufts)
Date: Fri, 29 Oct 1999 12:06:16 -0700
Subject: [Matrix-SIG] pickle dump and MemoryError
Message-ID: <l03130302b43f9d4caa18@[216.101.185.125]>

Using the Numeric and LinearAlgebra modules, I create a large matrix (approx 2k
x 26k) and then do an SVD (singular value decomposition) on it without an
error.  The SVD decomposes an array into three arrays, and I can pickle and
write the first two resulting arrays, but the third causes a MemoryError.

If I don't pickle the third result from svd, the code runs without error.

According to top, the maximum memory use of the process is well under my
swap size. Use is 1.5G, machine has 1G of RAM and 3G of swap.  The disk I'm
trying to write to also has plenty of free space.

Any suggestions for how I can save this array to disk without error?  Error
traceback and code fragment at end of message.

--Pat

>Traceback (innermost last):
>  File "filter.py", line 98, in ?
>    p.dump(d0p)
>  File "/usr/local/lib/python1.5/pickle.py", line 97, in dump
>    self.save(object)
>  File "/usr/local/lib/python1.5/pickle.py", line 192, in save
>    self.save_reduce(callable, arg_tup, state)
>  File "/usr/local/lib/python1.5/pickle.py", line 218, in save_reduce
>    save(arg_tup)
>  File "/usr/local/lib/python1.5/pickle.py", line 198, in save
>    f(self, object)
>  File "/usr/local/lib/python1.5/pickle.py", line 288, in save_tuple
>    save(element)
>  File "/usr/local/lib/python1.5/pickle.py", line 198, in save
>    f(self, object)
>  File "/usr/local/lib/python1.5/pickle.py", line 270, in save_string
>    self.write(STRING + `object` + '\n')
>MemoryError

The code that generates this error is:

>import sys,string,regex
>import pickle
>from Numeric import *
>from LinearAlgebra import *

>[t0,s0,d0p]=singular_value_decomposition(d)

>d0pfile=open('d0ppickle','w')
>p=pickle.Pickler(d0pfile)
>p.dump(d0p)
>d0pfile.close()
>
>print "pickled d0p - done"


>Python 1.5.2 (#1, Oct 11 1999, 15:01:23)  [GCC 2.8.1] on sunos5
>Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam

>uname -a
>SunOS fred 5.6 Generic_105182-05 i86pc i386 i86pc