Python for embedded systems with memory constraints
gkkvishnu at gmail.com
Mon Jun 11 20:59:19 CEST 2007
Using best fit for Python should not be a problem, because Python makes a
lot of small allocations, so the small blocks left over from splitting do
get reused by Python eventually. What I observed from my investigation of
the memory manager (MM) for Python is that no MM can eliminate
fragmentation entirely, and even though Python is memory hungry I cannot
set aside some 50MB (or more) just for the Python application, because
that would add to the embedded system's memory cost.
So the only solution I now see is to clear my memory pool and restart
Python without restarting the system (i.e. no power cycle to the
hardware). I tried to do this when my memory pool was 60% used, in these
steps:
1) Py_Finalize( )
2) Reset my Memory pool (i.e. free list links)
3) Then Restart Python by calling Py_Initialize().
But this resulted in a Python crash during Py_Initialize(), where I found
that static variables within the embedded Python source code were still
holding references into my memory pool. So now my question is: how do I
restart Python (i.e. reinitialize Python) without restarting the whole
system? Is there a way to reset/re-initialize those static variables so
that it becomes possible to re-initialize Python?
On 6/10/07, MRAB <google at mrabarnett.plus.com> wrote:
> On Jun 9, 1:33 pm, vishnu <gkkvis... at gmail.com> wrote:
> > Hi,
> > Thanks Cameron for your suggestions.
> > In fact I am using a custom memory sub-allocator where I preallocate a
> > pool of memory during initialization of my application and ensure that
> > Python doesn't make any system mallocs later. With this arrangement,
> > Python seems to run out of preallocated memory (of 10MB) after running a
> > few simple scripts, due to huge external fragmentation. My memory
> > sub-allocator has a good design which uses the best-fit algorithm and
> > coalesces adjacent blocks during each free call.
> > If anybody out there has used their own memory manager and run Python
> > without fragmentation, could you provide some inputs on this?
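As a rough illustration of the coalescing step mentioned there (a Python
sketch of the idea, not the real C allocator; offsets and sizes are made
up):

```python
# Merge adjacent free blocks into maximal spans, as a sub-allocator
# would do during each free call to fight fragmentation.

def coalesce(free_list):
    """Merge adjacent (offset, size) free blocks; returns sorted spans."""
    merged = []
    for off, size in sorted(free_list):
        if merged and merged[-1][0] + merged[-1][1] == off:
            merged[-1][1] += size           # adjacent: extend the last span
        else:
            merged.append([off, size])      # gap: start a new span
    return [tuple(b) for b in merged]

# Freeing 0..32 and 32..64 yields one 64-byte span; 128..160 stays apart
# because the bytes in between are still allocated.
print(coalesce([(32, 32), (0, 32), (128, 32)]))   # [(0, 64), (128, 32)]
```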
> From what I remember, the best-fit algorithm isn't a good idea because
> unless the free block was exactly the right size you'd tend to get
> left with lots of small fragments. (Suppose that the best fit was a
> free block only 4 bytes bigger than what you want; what can you do
> with a free block of 4 bytes?)
> A worst-fit algorithm would leave larger free blocks which are more
> useful subsequently, but I think that the recommendation was next-fit
> (ie use the first free block that's big enough, starting from where
> you found the last one).
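The two placement policies being compared can be sketched in Python like
this (a toy model over a list of (offset, size) free blocks; the block
layout is invented for illustration):

```python
# Best fit: smallest free block that satisfies the request.
# Next fit: first block that fits, scanning from where the previous
# search left off and wrapping around.

def best_fit(free_blocks, request):
    """Smallest fitting block; tends to leave tiny unusable remainders."""
    fits = [b for b in free_blocks if b[1] >= request]
    return min(fits, key=lambda b: b[1]) if fits else None

def next_fit(free_blocks, request, start_index):
    """First fitting block scanning from start_index, wrapping around;
    returns (block, index_of_block) or (None, start_index)."""
    n = len(free_blocks)
    for step in range(n):
        i = (start_index + step) % n
        if free_blocks[i][1] >= request:
            return free_blocks[i], i
    return None, start_index

free_blocks = [(0, 100), (200, 36), (400, 64)]
# Best fit picks the 36-byte block for a 32-byte request, leaving an
# awkward 4-byte remainder -- exactly the problem described above.
print(best_fit(free_blocks, 32))              # (200, 36)
# Next fit resuming from the last hit takes the 64-byte block instead.
blk, idx = next_fit(free_blocks, 32, start_index=2)
print(blk, idx)                               # (400, 64) 2
```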