On 24 Jul 2018, at 16:57, Matthias Bach wrote:
Your proposal regarding splitting up the search indexing sounds good. Am I correct in assuming that this will mean that we will get a replica that, from the API point of view, catches up as quickly as one without devpi-web and, afterwards, will start returning reasonable search results step by step? This would help us with disaster recovery and, even more, with the export-import cycle for the major version upgrade.
That is the goal, yes. The replicas should reach the master's metadata state as quickly as possible and catch up on everything else afterwards: documentation unzipping, description rendering, and search indexing. File downloads could be split off as well. There is already functionality to fetch files from the master if they are missing, so we could start the downloads after the metadata is fully replicated, unlike now, where we wait for all files in the current serial to finish downloading.
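The "fetch from the master if missing" idea could be sketched roughly as follows. This is a minimal illustration, not devpi-server's actual code: the names (MASTER_URL, get_file, store_dir) and the plain-HTTP fetch are assumptions for the sake of the example. The replica serves a file from its local store when present and otherwise downloads it from the master on demand, caching it for later requests.

```python
import os
import urllib.request

# Assumed master base URL; a real replica would take this from its config.
MASTER_URL = "http://master.example.com"


def get_file(store_dir, relpath, opener=urllib.request.urlopen):
    """Return the file's bytes, fetching from the master on a local miss.

    `opener` is injectable so the fetch can be tested without a network;
    by default it is urllib.request.urlopen.
    """
    local_path = os.path.join(store_dir, relpath)
    if os.path.exists(local_path):
        # Local hit: serve straight from the replica's store.
        with open(local_path, "rb") as f:
            return f.read()
    # Local miss: fetch from the master and cache for later requests.
    with opener("%s/%s" % (MASTER_URL, relpath)) as resp:
        data = resp.read()
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    with open(local_path, "wb") as f:
        f.write(data)
    return data
```

With a mechanism like this in place, the replica can answer file requests correctly as soon as the metadata is replicated, and the bulk file downloads become a background concern rather than a prerequisite.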
Regards, Florian Schulze