[Catalog-sig] [PSF-Members] Cheeseshop performance improved
renesd at gmail.com
Sat Jul 7 03:22:27 CEST 2007
Yeah, the SQL can be improved.
A lot of the queries cause a sequential scan of all the rows in the
journal and release tables.
I think the cause is that one of the tables doesn't have a primary
key, so PostgreSQL can't optimize the query. Even just adding an
incrementing numeric id field would, I think, speed up the joins. I
haven't tested this yet, so maybe that alone would help, or maybe
further changes would be needed. Either way, PostgreSQL definitely
needs a PK on each table.
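A quick way to see the effect, sketched here with an in-memory SQLite
database rather than PostgreSQL (and with made-up table and column names,
since I don't have the real schema in front of me): without any key or
index the join has to scan both tables, and a unique index on the join
columns, which is what a primary key gives you, turns one side into an
index lookup.

```python
import sqlite3

# Stand-in schema -- the real journal/release tables differ.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE releases (name TEXT, version TEXT, summary TEXT);
    CREATE TABLE journals (name TEXT, version TEXT, action TEXT);
""")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail.
    return "; ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = ("SELECT * FROM releases r JOIN journals j "
     "ON r.name = j.name AND r.version = j.version")

before = plan(q)  # no keys anywhere: both tables get scanned

# The equivalent of adding a PK: a unique index on the join columns.
con.execute("CREATE UNIQUE INDEX releases_pk ON releases (name, version)")
after = plan(q)   # one side of the join now uses the index

print(before)
print(after)
```

The same check against the live database would use PostgreSQL's EXPLAIN
on the slow queries before and after adding the key.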
PS: I'm going to try to finish off that caching/static-file work I've
been doing (more on that later). I guess I'll need to test things a
little differently with FastCGI. How did you set up a FastCGI PyPI?
On 7/7/07, "Martin v. Löwis" <martin at v.loewis.de> wrote:
> > I never heard anything from Thomas, which I would think would be the right
> > person to run this through, as I really don't know anything about the
> > arrangement we have with XS4ALL. I guess we'd also need to get the PSF to
> > approve this, though I'd imagine that'd be little more than a formality.
> > If we don't have any response from Thomas in a bit, I can try contacting
> > XS4ALL directly and see if they can give us any ideas.
> I expect such a project to complete in a matter of months rather
> than a matter of days. It took a year or so before the current set of
> machines was actively being used (IIRC).
> > However, I believe that Martin also thinks that with his FastCGI changes it
> > should be happy now as is...
> Indeed. If there are further complaints on the performance, I'd like to
> hear them (preferably with a way for reproducing them). There is still
> stuff that can be done to improve PyPI further, such as better usage of