Please help me install NumPy.
When I run:
python setup.py install
I get an error that says:
ImportError: No module named distutils
I'm using Mandrake 9.
Release Notes for numarray-0.9
Numarray is an array processing package designed to efficiently
manipulate large multi-dimensional arrays. Numarray is modelled after
Numeric and features c-code generated from python template scripts,
the capacity to operate directly on arrays in files, and improved type
promotions.

I. ENHANCEMENTS
1. Support for "from __future__ import division"
True division has been implemented for numarray. This means that
modules that wish to use true division can also use numarray and
numarray division will work as follows:
a. dividing any two integer arrays using "/" results in a Float32
b. dividing two floating point arrays using "//" results in truncation
of the result as in: a // b == floor(a/b).
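Rule (b) matches Python's own floor-division semantics; a quick check with plain floats (standard math module, no numarray needed):

```python
import math

# Floor division on floats agrees with math.floor of true division:
a, b = 7.0, 2.0
assert a // b == math.floor(a / b) == 3.0
# ...including for negative operands, which round toward -infinity:
assert (-7.0) // 2.0 == math.floor(-7.0 / 2.0) == -4.0
```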
2. C-coded array slicing
Array slicing has been re-implemented in C-code as part of the
_ndarray module. This means faster slicing. Thanks go to Warren
Hack, Chris Hanley, and Ivo Busko for helping debug a huge refcount
leak.
3. Decreased Ufunc overhead
Ufunc execution speed has clawed and scratched its way back to where
it was around numarray-0.5. Improvements here included optimization
of the ufunc caching, smarter thread handling, and smarter support for
subclasses. The ufunc caching is based on a simple 20 element table
for each ufunc.
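The caching idea can be sketched in plain Python; this is a hypothetical illustration of a small fixed-size lookup table, not numarray's actual C implementation:

```python
class SmallCache:
    """A bounded cache holding at most `size` entries; evicts the oldest."""
    def __init__(self, size=20):
        self.size = size
        self.entries = {}   # key -> cached value
        self.order = []     # insertion order, used for eviction

    def get(self, key):
        return self.entries.get(key)

    def put(self, key, value):
        if key not in self.entries:
            if len(self.order) >= self.size:
                oldest = self.order.pop(0)   # evict the oldest key
                del self.entries[oldest]
            self.order.append(key)
        self.entries[key] = value

cache = SmallCache(size=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # evicts "a", the oldest entry
```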
4. Faster array creation from C
Code which creates NumArrays from C (including numarray itself) can
now do so faster because the API functions have been modified to do
the array __init__ inline rather than through an expensive Python call.
II. BUGS FIXED / CLOSED
See the Source Forge bug tracker for more details.
913781 Another memory leak in example in Chapter 12
908399 Numarray 0.7: "del a" dumps core
899259 astype Int16 copy4bytes: access beyond buffer
895801 Buffer overflow in sum w/ 0-sized array
894810 MemoryError When Creating Large Arrays
890703 getnan() and getinf() failure
883124 "and" and operator.and respond differently
865410 Usage of __dict__
854480 Slice assignment of float to integer
839367 Overlapping slice assign fails
828941 Numarray: determinant returns scalar or array
820122 Linearalgebra2.determinant problem
817343 Sub-classing of NumArray inhibited by complex values
793336 crash in _sort.pyd
772548 Reference counting errors
683957 Adding certain arrays fails in Numarray
1. numarray extension writers should note that the documented use of
PyArray_INCREF and PyArray_XDECREF (in numarray) has been found to be
incompatible with Numeric and has therefore been deprecated. numarray
wrapper functions using PyArray_INCREF and PyArray_XDECREF should
switch to ordinary Py_INCREF and Py_XDECREF.
Numarray-0.9 Windows executable installers, source code, and the manual
are available on Source Forge. Numarray is hosted by Source Forge in the
same project which hosts Numeric.
The web page for Numarray information is at:
Trackers for Numarray Bugs, Feature Requests, Support, and Patches are
at the Source Forge project for NumPy at:
numarray-0.9 requires Python 2.2.2 or greater.
Numarray was written by Perry Greenfield, Rick White, Todd Miller, JC
Hsu, Paul Barrett, Phil Hodge at the Space Telescope Science
Institute. We'd like to acknowledge the assistance of Francesc Alted,
Paul Dubois, Sebastian Haase, Tim Hochberg, Nadav Horesh, Edward
C. Jones, Eric Jones, Jochen Küpper, Travis Oliphant, Pearu Peterson,
Peter Verveer, Colin Williams, and everyone else who has contributed
with comments and feedback.
Numarray is made available under a BSD-style License. See
LICENSE.txt in the source distribution for details.
Todd Miller jmiller(a)stsci.edu
It is surely a silly question, but I'm quite new to Python and Numpy.
So I hope you can help me; I've searched Numpy's manual without success.
Suppose I want to test whether a parameter inside a function is an array
or not. How can I test this?
type(my_array) is array
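For what it's worth, the usual idiom is isinstance() rather than a type() comparison, since isinstance() also accepts subclasses. A minimal sketch using modern NumPy (numarray offered an analogous array class to test against):

```python
import numpy as np

def is_array(obj):
    # isinstance() matches ndarray and any of its subclasses
    return isinstance(obj, np.ndarray)

assert is_array(np.zeros(3))
assert not is_array([1, 2, 3])   # a plain list is not an array
```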
Thanks in advance,
"Science is like sex: sometimes something useful comes out,
but that is not the reason we are doing it" -- (Richard Feynman)
Is this intended:
Why is na.array().shape not equal to na.array(1).shape?
Did I miss the respective section in the manual?
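For comparison, modern NumPy treats array(1) as a zero-dimensional array whose shape is the empty tuple, which is likely the distinction at play here (sketch in NumPy; numarray behaved similarly for rank-0 arrays):

```python
import numpy as np

a = np.array(1)                       # rank-0 ("scalar") array
assert a.shape == () and a.ndim == 0  # shape is the empty tuple
b = np.array([1])                     # rank-1 array of length one
assert b.shape == (1,) and b.ndim == 1
```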
I'm happy to announce the availability of PyTables 0.8.
PyTables is a hierarchical database package designed to efficiently
manage very large amounts of data. PyTables is built on top of the
HDF5 library and the numarray package. It features an object-oriented
interface that, combined with natural naming and C-code generated from
Pyrex sources, makes it a fast, yet extremely easy-to-use tool for
interactively saving and retrieving very large amounts of data. It also
provides flexible indexed access on disk to anywhere in the data.
PyTables is not designed to work as a relational database competitor,
but rather as a teammate. If you want to work with large datasets of
multidimensional data (for example, for multidimensional analysis), or
just provide a categorized structure for some portions of your cluttered
RDBS, then give PyTables a try. It works well for storing data from data
acquisition systems (DAS), simulation software, network data monitoring
systems (for example, traffic measurements of IP packets on routers),
working with very large XML files or as a centralized repository for
system logs, to name only a few possible uses.
In this release you will find:
- Variable Length Arrays (VLA's), for saving arrays whose rows
  each contain a variable number of elements.
- Extensible Arrays (EA's) for extending homogeneous
datasets on disk.
- Powerful replication capabilities, ranging from single leaves
up to complete hierarchies.
- With the introduction of the UnImplemented class, greatly
improved HDF5 native file import capabilities.
- Two new useful utilities: ptdump & ptrepack.
- Improved documentation (with the help of Scott Prater).
- New record on data size achieved: 5.6 TB (!) in one single file.
- Enhanced platform support. New platforms: MacOSX, FreeBSD,
  Linux64 and IRIX64 (yes, a clean 64-bit port is there).
- More test units (now exceeding 800).
- Many other minor improvements.
More in detail:
- The new VLArray class enables you to store large lists of rows
containing variable numbers of elements. The elements can
be scalars or fully multidimensional objects, in the PyTables
tradition. This class supports two special objects as rows:
Unicode strings (UTF-8 codification is used internally) and
generic Python objects (through the use of cPickle).
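The pickling mechanism mentioned for generic Python objects can be illustrated on its own; this is plain Python (pickle is the Python 3 name for cPickle), not PyTables code:

```python
import pickle  # cPickle in the Python 2 of the PyTables 0.8 era

row = {"name": "spectrum", "data": [1.5, 2.5, 3.5]}
blob = pickle.dumps(row)        # arbitrary object -> byte string
restored = pickle.loads(blob)   # byte string -> an equal object
assert restored == row
```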
- The new EArray class allows you to enlarge already existing
multidimensional homogeneous data objects. Consider it
an extension of the already existing Array class, but
with more functionality. Online compression or other filters
can be applied to EArray instances, for example.
Another nice feature of EA's is their support for fully
multidimensional data selection with extended slices. You
can write "earray[1,2:3,...,4:200]", for example, to get the
desired dataset slice from the disk. This is implemented
using the powerful selection capabilities of the HDF5
library, which results in highly efficient I/O
operations. The same functionality has been added to Array
objects as well.
- New UnImplemented class. If a dataset contains unsupported
datatypes, it will be associated with an UnImplemented
instance, then inserted into the object tree as usual.
This allows you to continue to work with supported objects
while retaining access to attributes of unsupported
datasets. This has changed from previous versions, where a
RuntimeError occurred when an unsupported object was encountered.
The combination of the new UnImplemented class with the
support for new datatypes will enable PyTables to greatly
increase the number of types of native HDF5 files that can
be read and modified.
- Boolean support has been added for all the Leaf objects.
- The Table class now has an append() method that allows you
  to save large buffers of data in one go (i.e. bypassing the
  Row accessor). This can greatly improve data gathering speed.
- The standard HDF5 shuffle filter (to further enhance the
compression level) is supported.
- The standard HDF5 fletcher32 checksum filter is supported.
- As the supported number of filters is growing (and may be
further increased in the future), a Filters() class has been
introduced to handle filters more easily. In order to add
support for this class, it was necessary to make a change in
the createTable() method that is not backwards compatible:
the "compress" and "complib" parameters are deprecated now
and the "filters" parameter should be used in their
place. You will be able to continue using the old parameters
(only a Deprecation warning will be issued) for the next few
releases, but you should migrate to the new version as soon
as possible. In general, you can easily migrate old code: a call like

table = fileh.createTable(group, 'table', Test, '',
                          compress=6, complib="zlib")

should be replaced by

table = fileh.createTable(group, 'table', Test, '',
                          filters=Filters(complevel=6, complib="zlib"))
- A copy() method that supports slicing and modification of
filtering capabilities has been added for all the Leaf
objects. See the User's Manual for more information.
- A couple of new methods, namely copyFile() and copyChilds(),
have been added to the File class to permit easy replication
of complete hierarchies or sub-hierarchies, even to
other files. You can change filters during the copy
process as well.
- Two new utilities have been added: ptdump and
ptrepack. The utility ptdump allows the user to examine
the contents of PyTables files (both metadata and actual
data). The powerful ptrepack utility lets you
selectively copy (portions of) hierarchies to specific
locations in other files. It can also be used as an
importer for generic HDF5 files.
- The meaning of the stop parameter in read() methods has
changed. Now a value of 'None' means the last row, and a
value of 0 (zero) means the first row. This is more
consistent with the range() function in python and the
__getitem__() special method in numarray.
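The convention mirrors ordinary Python slicing, where a stop of None runs through the last element and a stop of 0 stops before the first:

```python
rows = list(range(10))
assert rows[2:None] == [2, 3, 4, 5, 6, 7, 8, 9]   # None -> through the end
assert rows[0:0] == []                            # 0 -> nothing before row 0
```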
- The method Table.removeRows() is no longer limited by table
  size. You can now delete rows regardless of the size of the table.
- The "numarray" value has been added to the flavor parameter
in the Table.read() method for completeness.
- The attributes (.attr instance variable) are Python
properties now. Access to their values is no longer
lazy, i.e. you will be able to see both system and user
attributes from the command line using the tab-completion
capability of your python console (if enabled).
- Documentation has been greatly improved to explain all the
new functionality. In particular, the internal format of
PyTables is now fully described. You can now build
"native" PyTables files using any generic HDF5 software
by just duplicating their format.
- Many new tests have been added, not only to check new
functionality but also to more stringently check
existing functionality. There are more than 800 different
tests now (and the number is increasing :).
- PyTables has set a new record for the data size that fits in a
  single file: a file holding more than 5 TB of data (yeah, more
  than 5000 GB), which amounts to just 11 GB compressed, has been
  created on an AMD Opteron machine running Linux-64 (the 64-bit
  version of the Linux kernel). See the gory details in:
- New platforms supported: PyTables has been compiled and tested
under Linux32 (Intel), Linux64 (AMD Opteron and Alpha), Win32
(Intel), MacOSX (PowerPC), FreeBSD (Intel), Solaris (6, 7, 8
and 9 with UltraSparc), IRIX64 (IRIX 6.5 with R12000) and it
probably works on many more architectures. In particular,
release 0.8 is the first one that provides a relatively clean
port to 64-bit platforms.
- As always, some bugs have been solved (especially bugs that
occur when deleting and/or overwriting attributes).
- And last, but definitely not least, a new donations section
has been added to the PyTables web site
(http://sourceforge.net/projects/pytables, then follow the
"Donations" tag). If you like PyTables and want this effort
to continue, please, donate!
What is a table?
A table is defined as a collection of records whose values are stored
in fixed-length fields. All records have the same structure and all
values in each field have the same data type. The terms
"fixed-length" and "strict data types" seem to be quite a strange
requirement for a language like Python that supports dynamic data
types, but they serve a useful function if the goal is to save very
large quantities of data (such as is generated by many scientific
applications, for example) in an efficient manner that reduces demand
on CPU time and I/O resources.
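As a standalone illustration of fixed-length, strictly-typed records (plain Python struct module, not PyTables code): every packed record occupies exactly the same number of bytes, which is what makes fast seeking and I/O possible.

```python
import struct

# A record layout: 16-byte name, 32-bit int id, 64-bit float value.
record = struct.Struct("<16sid")
packed = record.pack(b"particle", 42, 3.14)   # name is padded to 16 bytes
assert len(packed) == record.size             # every record: same length
name, ident, value = record.unpack(packed)
assert name.rstrip(b"\x00") == b"particle" and ident == 42
```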
What is HDF5?
For those people who know nothing about HDF5, it is a general
purpose library and file format for storing scientific data, developed at
NCSA. HDF5 can store two primary objects: datasets and groups. A
dataset is essentially a multidimensional array of data elements, and
a group is a structure for organizing objects in an HDF5 file. Using
these two basic constructs, one can create and store almost any kind of
scientific data structure, such as images, arrays of vectors, and
structured and unstructured grids. You can also mix and match them in
HDF5 files according to your needs.
I'm using Linux (Intel 32-bit) as the main development platform, but
PyTables should be easy to compile/install on many other UNIX
machines. This package has also passed all the tests on an UltraSparc
platform with Solaris 7 and Solaris 8. It also compiles and passes all
the tests on a SGI Origin2000 with MIPS R12000 processors, with the
MIPSPro compiler and running IRIX 6.5. It also runs fine on Linux 64-bit
platforms, like an AMD Opteron running SuSE Linux Enterprise Server. It
has also been tested on MacOSX platforms (10.2, but it should also work on
later versions).
Regarding Windows platforms, PyTables has been tested with Windows
2000 and Windows XP (using the Microsoft Visual C compiler), but it
should also work with other flavors as well.
For online code examples, have a look at
and, for newly introduced Variable Length Arrays:
Go to the PyTables web site for more details:
Share your experience
Let me know of any bugs, suggestions, gripes, kudos, etc. you may have.
-- Francesc Alted
I've just written a small tutorial on discretizing time series with numarray.
Just some basic stuff I've used when working with time series in my
research, such as mapping integer values to symbols, mapping real
values to integers using bins, and extracting features using
non-overlapping or overlapping windows (sliding window).
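The operations described can be sketched with modern NumPy (names here are illustrative, not taken from the tutorial itself):

```python
import numpy as np

series = np.array([0.1, 0.7, 0.3, 0.9, 0.5])

# Map real values to integer symbols using equal-width bins over [0, 1]:
edges = np.linspace(0.0, 1.0, 5)[1:-1]   # 3 interior edges -> 4 symbols
symbols = np.digitize(series, edges)     # e.g. 0.1 -> 0, 0.9 -> 3

# Extract overlapping (sliding) windows of length 3:
windows = np.array([series[i:i + 3] for i in range(len(series) - 2)])
```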
It was written in a flash, and I'm no numarray expert, so
feedback/corrections would be welcome.
Magnus Lie Hetland "The mind is not a vessel to be filled,
http://hetland.org but a fire to be lighted." [Plutarch]