[Catalog-sig] [PSF-Members] Cheeseshop performance improved

René Dudfield renesd at gmail.com
Sat Jul 7 09:41:54 CEST 2007


I thought that because of the types of joins being done, postgresql needed
a primary key - but maybe you can get them working with just some more
indexes.
On 7/7/07, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> > A lot of the queries cause a sequential scan of all the rows in the
> > journal and release tables.
> >
> > I think the cause of this is that one of the tables does not have a
> > primary key, so postgresql can't optimize the query.  Even if the
> > table had an incrementing numeric id field, then I think the joins
> > could be sped up.  I haven't tested this yet, but maybe that'd help -
> > or maybe more changes would be needed.  Postgresql definitely needs a
> > PK on each table though.
> Not definitely - an index is enough. A PK only adds an additional
> constraint, and does not contribute in itself to performance.
> In any case, I plan to add a name-version index to release_classifiers,
> as the browsing often looks into release_classifiers by name and
> version.
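
To illustrate Martin's point: a primary key is a uniqueness constraint
plus an index, so for join and lookup speed alone a plain index gives the
planner the same access path. The table and column names below are
assumptions based on this thread, not the actual PyPI schema:

```sql
-- Sketch only: names assumed from the discussion, not the real PyPI DDL.

-- A PK adds a uniqueness constraint *and* a backing index:
ALTER TABLE releases ADD PRIMARY KEY (name, version);

-- For performance alone, a plain index is sufficient - roughly the
-- name-version index Martin describes for release_classifiers:
CREATE INDEX rel_class_name_version_idx
    ON release_classifiers (name, version);

-- EXPLAIN shows whether the planner still falls back to a Seq Scan:
EXPLAIN SELECT *
  FROM release_classifiers
 WHERE name = 'foo' AND version = '1.0';
```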
> > ps, I'm going to try and finish off that caching/static file work I've
> > been working on (more on that later).  I guess I'll need to test things
> > a little differently with fastcgi.  How did you set up a fastcgi pypi?
> FastCgiServer /data/pypi/src/pypi/pypi.fcgi  -idle-timeout 60 -processes 4
> then
>    # Trick Apache into providing Basic-Auth to pypi.fcgi
>    RewriteCond %{HTTP:Authorization}  ^(.+)$
>    RewriteRule ^/pypi(.*) /data/pypi/src/pypi/pypi.fcgi$1
>    ScriptAlias /pypi /data/pypi/src/pypi/pypi.fcgi
> Regards,
> Martin
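
For anyone reproducing this setup elsewhere, Martin's directives expand to
something like the fragment below. The FastCgiServer, Rewrite and
ScriptAlias lines are taken from his mail; the LoadModule lines and
RewriteEngine directive are assumptions about a typical mod_fastcgi
install and will need adjusting to the local Apache:

```apache
# Sketch only - module paths are assumptions for a typical install.
LoadModule fastcgi_module modules/mod_fastcgi.so
LoadModule rewrite_module modules/mod_rewrite.so

# Spawn four pypi.fcgi processes, reaped after 60s idle:
FastCgiServer /data/pypi/src/pypi/pypi.fcgi -idle-timeout 60 -processes 4

RewriteEngine On
# Trick Apache into providing Basic-Auth to pypi.fcgi
RewriteCond %{HTTP:Authorization}  ^(.+)$
RewriteRule ^/pypi(.*) /data/pypi/src/pypi/pypi.fcgi$1
ScriptAlias /pypi /data/pypi/src/pypi/pypi.fcgi
```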
