[C++-sig] Problem: boost_python 1.31 (release) causes huge memory allocation
Wirawan Purwanto
wirawan at camelot.physics.wm.edu
Thu Jul 1 17:59:56 CEST 2004
Hi,
Just a compliment first: I am very new to the boost::python library, and I'm really
impressed by its ease of use. I have an existing code that I want to "manipulate"
via python, and it only took me a few tries to get the simple interface up and
running!
Now for the meat. :) To highlight where the questions are, I put "QUESTION:" at
the beginning of the relevant paragraphs below. Sorry if the detail is too long.
I am using Mandrake Linux 10 (kernel = 2.6.3-7mdk, gcc = 3.3.2-6mdk) and boost
library version 1.31.0 (official release). I compiled the *whole* boost library
using bjam, as directed on the website. The first attempt to "wrap" my code
with boost::python went fine. Here's the wrapper initialization code:
BOOST_PYTHON_MODULE(HubbardGP)
{
    using namespace boost::python;
    TBH_PY_DEBUG(("Initializing HubbardGP module interface\n"));
    class_<HubbardGP>("HubbardGP")
        .def("OpenFiles", &HubbardGP::OpenFiles)
        .def("Solve", &HubbardGP::Solve)
        .def("ReportResults", &HubbardGP::ReportResults)
        // .add_property("ndim", &HubbardGP::ndim_pyget)
        ;
    TBH_PY_DEBUG(("Done initializing HubbardGP module interface\n"));
}
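For context on what .add_property() normally expects, here is a minimal, self-contained sketch (the getter body and the ndim_ member are my assumptions for illustration, not the poster's actual code): the second argument must be something Boost.Python can call with no arguments, typically a const member function returning a convertible type. If the real ndim_pyget has a different signature, that is one plausible place for trouble.

```cpp
// Hypothetical sketch (not the actual HubbardGP internals): the kind of
// getter that class_<>::add_property expects -- a no-argument member
// function returning a type with a registered Python converter.
#include <boost/python.hpp>

class HubbardGP {
public:
    HubbardGP() : ndim_(1) {}
    int ndim_pyget() const { return ndim_; }  // read-only property getter
private:
    int ndim_;
};

BOOST_PYTHON_MODULE(HubbardGP)
{
    using namespace boost::python;
    class_<HubbardGP>("HubbardGP")
        .add_property("ndim", &HubbardGP::ndim_pyget)  // Python sees H.ndim
        ;
}
```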
Please disregard "TBH_PY_DEBUG" there--it just calls C's printf function to
print the string on the screen.
I built my shared object (HubbardGP.so) using make(1) for a stupid reason: I
still don't completely understand bjam's quirks. I linked against the "release"
version of libboost_python-gcc.so, in this way:
$ g++ -L/usr/local/lib -fPIC -Wall -ftemplate-depth-100 \
-g -O0 -DDEBUG \
-DBOOST_PYTHON_DYNAMIC_LIB \
-DTBHQMC_MAKE_BOOST_PYTHON_MODULE \
-DTBH_NDIM=1 -DTBH_USE_FFTW3 -D__section__=1 \
-Isrc -Isrc/cp.inc \
-I/usr/local/boost-1.31.0/include/boost-1_31 \
-I/usr/include/python2.3 \
-c src/test-lattice-params.cpp -o objs/HubbardGP.o
$ g++ -L/usr/local/lib -shared -Wall -static-libgcc \
-llapack -lcblas -lf77blas -latlas -lfftw3 \
-lg2c \
-lboost_python-gcc \
-L/usr/local/boost-1.31.0/lib \
objs/HubbardGP.o -o HubbardGP.so
Do you see the difference? My code was compiled with the DEBUG switch (and -O0),
but then linked against the RELEASE variant of the boost::python lib.
QUESTION: is this an acceptable practice? I'm wondering whether DEBUG code must
always be linked against the DEBUG boost::python lib, and vice versa.
QUESTION: Without the .add_property() line (commented out in the snippet
above), the code runs fine. But when I added the .add_property() line, a
calamity happened: the moment I tried to instantiate a HubbardGP object, the
computer froze. When I checked with top(1), the python process was trying to
allocate a huge chunk of memory, which did NOT happen before .add_property()
was added to my wrapper. I then ran it under strace(1) and found that the
program (either python, or the boost_python lib, or remotely possibly my own
code?) attempted to allocate ~590 MB of memory. Here's the strace(1) output,
with the amount of vmem available to the program forcibly LIMITed to only 250 MB:
[del]
write(1, "Initializing HubbardGP module in"..., 40Initializing HubbardGP module interface) = 40
write(1, "Done initializing HubbardGP modu"..., 45Done initializing HubbardGP module interface) = 45
close(3) = 0
futex(0x804a998, FUTEX_WAKE, 1) = 0
write(1, "Creating an instance of HubbardG"..., 34Creating an instance of HubbardGP) = 34
mmap2(NULL, 595271680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
brk(0) = 0x80d6000
brk(0x2b888000) = 0x80d6000
mmap2(NULL, 595406848, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap2(NULL, 2097152, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x87cb5000
munmap(0x87cb5000, 307200) = 0
munmap(0x87e00000, 741376) = 0
mprotect(0x87d00000, 135168, PROT_READ|PROT_WRITE) = 0
mmap2(NULL, 595271680, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
futex(0x401edaf4, FUTEX_WAKE, 2147483647) = 0
futex(0x402207d4, FUTEX_WAKE, 2147483647) = 0
write(2, "Traceback (most recent call last"..., 35Traceback (most recent call last):) = 35
open("test-lattice-params.py", O_RDONLY|O_LARGEFILE) = 3
write(2, " File \"test-lattice-params.py\","..., 46 File "test-lattice-params.py", line 4, in ?) = 46
fstat64(3, {st_mode=S_IFREG|0644, st_size=267, ...}) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xab5b2000
read(3, "import HubbardGP\n\nprint \"Creatin"..., 4096) = 267
write(2, " ", 4 ) = 4
write(2, "H = HubbardGP.HubbardGP()\n", 26H = HubbardGP.HubbardGP()) = 26
close(3) = 0
munmap(0xab5b2000, 4096) = 0
write(2, "MemoryError", 11MemoryError) = 11
write(2, "\n", 1) = 1
[del]
There are three points at which it tries to allocate a huge amount of memory.
Now, I'm still too new to both python and boost::python. Could you help me with
this? I don't believe my own code is the one making the mess; I tend to think
the compiled boost_python code is somehow acting up here. But I can't debug it
easily, as that involves running python itself in the debugger. How do you
debug such a problem? Using the Python debugger is also not an option, since I
don't think it can step down into the C++-level code.
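For what it's worth, one common way to get C++-level visibility into an extension module is to run the Python interpreter itself under gdb; a session might look roughly like this (the breakpoint name is an assumption about where the crash originates, and gdb will offer to make it pending until HubbardGP.so is loaded):

```shell
$ gdb python
(gdb) break HubbardGP::HubbardGP      # break in the wrapped class's constructor
(gdb) run test-lattice-params.py      # python loads HubbardGP.so and runs the script
(gdb) bt                              # when it stops, the backtrace shows whether
                                      # python, libboost_python, or the wrapped
                                      # code requested the giant allocation
```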
As a workaround, the only thing that helped was linking my HubbardGP.so against
the DEBUG variant (libboost_python-gcc-d.so). Then the code ran fine, even with
.add_property() in place.
I tried once to reproduce the problem with a much smaller testcase, but the
problem (the huge mmap2) didn't show up.
As a reference, here's my python test code:
import HubbardGP
print "Creating an instance of HubbardGP"
H = HubbardGP.HubbardGP()
print "Done, now opening files"
x = H.OpenFiles("/tmp/file1.txt", "/tmp/file2.txt", "/tmp/file3.txt")
print "x is %d" % x
H.Solve()
print "NOW REPORTING RESULTS:"
H.ReportResults()
I would appreciate it if someone could help me out with this.
The full source code of the wrapped C++ object is available if you need to look
into it, but it's way too large to post here.
Thanks,
Wirawan