Max files in unix folder from PIL process

"Martin v. Löwis" martin at v.loewis.de
Mon Mar 28 22:49:03 EST 2005


David Pratt wrote:
> Hi.  I am creating a Python application that uses PIL to generate 
> thumbnails and sized images. It is beginning to look like the volume 
> of images will be large. This has got me thinking: is there a limit 
> to the number of files Unix can handle in a single directory? I am 
> using FreeBSD 4.x at the moment. I am thinking the number could be as 
> high as 500,000 images in a single directory, but more likely in the 
> range of 6,000 to 30,000 for most. I did not want to store these in 
> Postgres. Should this pose a problem on the filesystem?  I realize 
> this is not really a Python issue, but I thought someone on the list 
> might have an idea.

It all depends on the file system you are using, and somewhat on the
operations you typically perform. I assume this is ufs/ffs, in which
case a directory is stored as a linear list of entries for all of its
files.

This causes some performance concerns for access: to look up an
individual file, the system may need to scan the entire directory. The
size of a directory entry depends on the length of the file name.
Assuming file names of 10 characters, each entry takes 20 bytes, so a
directory holding 500,000 image file names requires about 10 MB on
disk. Each directory lookup could then require reading up to 10 MB
from disk, which might be noticeable. For 6,000 entries, the directory
is only about 120 kB, which probably isn't noticeable.
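For a quick back-of-the-envelope check, here is a minimal Python
sketch. The 8-byte fixed header and padding of the name to a 4-byte
boundary roughly follow the traditional ffs on-disk entry layout
(struct direct), but treat those exact numbers as assumptions:

def ffs_entry_size(name_len, header=8, align=4):
    """Approximate on-disk size of one ffs directory entry:
    a fixed header plus the name, padded to a 4-byte boundary."""
    padded = ((name_len + align - 1) // align) * align
    return header + padded

def directory_size(num_files, name_len=10):
    """Approximate total size of the directory, in bytes."""
    return num_files * ffs_entry_size(name_len)

for n in (6000, 30000, 500000):
    print("%7d files -> %9d bytes" % (n, directory_size(n)))

With 10-character names this reproduces the figures above: 120 kB for
6,000 entries and 10 MB for 500,000.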

In FreeBSD 4.4 and later, there is a kernel compile-time option,
UFS_DIRHASH, which causes an in-memory hash table to be built for
large directories, speeding up lookups significantly. This requires,
of course, enough main memory to actually hold the hash table.
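If you want to try it, the line to add to your kernel configuration
file before rebuilding is simply:

    options         UFS_DIRHASH

If I remember correctly, the amount of memory the hashes may consume
is capped by the vfs.ufs.dirhash_maxmem sysctl, so you can tune that
rather than letting large directories eat arbitrary amounts of RAM.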

Regards,
Martin


