using mmap on large (> 2 Gig) files
"Martin v. Löwis"
martin at v.loewis.de
Wed Oct 25 13:23:40 CEST 2006
> 2. The OS may be stupid. Mapping a large file may be a major slowdown
> simply because the memory mapping is implemented suboptimally inside
> the OS. For example it may try to load and synchronise huge portions of
> the file that you don't need.
Can you give an example of an operating system that behaves that way?
To my knowledge, all current systems integrate memory mapping somehow
with the page/buffer caches, using various strategies to write back
(or just discard, in the case of no writes) pages that haven't been
used for a while.
> The missing offset argument is essential for getting adequate
> performance from a memory-mapped file object.
I very much question that statement. Do you have any numbers to
support that claim?