Hello,

My organization is exploring replacing our current, patched-together "wheelhouse" with a proper Devpi server that mirrors PyPI and hosts our internal packages. Because this Devpi architecture will be accessed by hundreds of developers and more than 100 build server workers, it needs to be highly available and able to handle a lot of traffic. This leads me to several questions (five major questions and a few sub-questions, to be precise):

- Is anyone familiar with a threshold of diminishing returns for the "threads" server setting? At what point, if any, does increasing this number stop helping? I assume that the number of "threads" roughly equates to the number of simultaneous connections a single server can handle, perhaps minus one or two. Is this a fair assumption?
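For context, the setting I'm asking about is the one passed at startup, something like the following (the host, port, and thread count here are placeholder values for illustration, not a recommendation):

```shell
# Start devpi-server with an enlarged worker thread pool.
# 3141 is devpi's conventional default port; 50 is an arbitrary
# example value, which is exactly what my question is about.
devpi-server --host 0.0.0.0 --port 3141 --threads 50
```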

- How many "devpi-server" instances can access a single "serverdir"? Is it just one, with bad things happening if a second "devpi-server" tries to access it (meaning that, if I need multiple servers, I should use master-replica replication)? Or can multiple "devpi-server" instances access a single "serverdir" and serve as multiple hot masters?

- When employing master-replica replication, I'm wondering if the following configuration is sensible, or if there are better approaches: 1) one (or possibly more, if allowed) master, 2) two or more replicas replicating off the master, 3) an HTTPS load balancer that sends all traffic to the replicas (not the master? or should the master be included in the load balancing?). In this case, what happens if someone uses the https://load-balanced-devpi-url/ URL to publish a new package? Does the replica it lands on just forward that up to the master? Does the user get an error? Or does something worse happen?
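For reference, the topology I have in mind would be started roughly like this. This is only a sketch: the hostnames and paths are placeholders, and I'm assuming the `--master-url` replica option I've seen in the replication docs is the right mechanism.

```shell
# On the single writable master:
devpi-server --serverdir /srv/devpi/master --port 3141

# On each replica host, pointing back at the master
# (devpi-master.internal is a placeholder hostname):
devpi-server --serverdir /srv/devpi/replica \
    --master-url http://devpi-master.internal:3141 --port 3141
```

The HTTPS load balancer would then hold the `https://load-balanced-devpi-url/` name and distribute requests across the replica hosts.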

- When employing master-replica replication, I noticed this word of caution in the documentation:

"You probably want to make sure that before starting the replica and/or master that you also have the Web interface and search plugin installed. You cannot install it later."

To be clear, I already have "devpi-web" installed (and a theme, too), but this note confuses (and slightly concerns) me, so I'd like to understand it better. Why would you be unable to install "devpi-web" after starting a master or replica? What would it break? Why wouldn't this work? Or do you just mean that you would have to shut down the servers to install "devpi-web" (which is understandable)? Is this different from non-replication environments (can you install "devpi-web" after starting a non-replicating server), and if so, why? I'm asking for details about this because I'm curious if it has potential consequences (stability, etc.) that extend beyond just the "devpi-web" plugin.

- When creating a new index, could someone elaborate a bit more on the "mirror_whitelist" setting? What is the difference between not setting it at all (the default/implicit behavior) and explicitly setting it to "*"? What does "*" actually mean? When you upload a package to the index that conflicts with a package in its base index, does the uploaded package (or perhaps just the versions you upload) ALWAYS override the one in the base index, regardless of the "mirror_whitelist" setting? My goal is to have an index that uses "root/pypi" as its base and also hosts our internal packages, so that we can point Pipenv to one URL and install everything from there. Is there a more sensible approach than the one I'm planning to take?
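Concretely, the setup I'm planning looks something like the sketch below. The index name, hostname, and option values are placeholders, and whether the "mirror_whitelist" line is needed at all is exactly what I'm asking:

```shell
# After devpi use / devpi login as an appropriately privileged user,
# create a company-wide index inheriting from the PyPI mirror:
devpi index -c company/prod bases=root/pypi volatile=False

# Possibly also required, depending on the answer to my question:
devpi index company/prod mirror_whitelist=*
```

Pipenv would then be pointed at that index's simple API via a Pipfile source entry along these lines (again, the URL is a placeholder):

```
[[source]]
url = "https://devpi.internal/company/prod/+simple/"
verify_ssl = true
name = "devpi"
```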

Thanks in advance for helping me make the right decisions in our upcoming deployment.

Nick Williams