On 30 October 2015 at 00:33, Marcus Smith wrote:
4) Although using a process interface is not necessarily a problem, I don't agree with your point on why a Python interface would be unworkable. You're assuming that pip would try to import all the build tools (for every dependency it's processing) in the master process. An alternative could be that pip has its own tool (that runs as a subprocess in an isolated env) that knows how to load and work with Python build interfaces. You could argue that a Python API is an advantage, because build tools aren't
That would mean that pip has to have the exact same version of it embedded in the environment that build tools will be run in, which it doesn't have today.
sorry, I can't make clear sense of your sentence here. : )
I'll just explain my point again.
pip doesn't necessarily have to "interact with many different versions of the same build tool during a single invocation" if, for example, it subprocesses the interactions to some "pip-build" tool that handles the imports and use of the Python API. I.e. pip calls some "pip-build" tool (once per build), which does the import, not pip itself.
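For concreteness, a rough sketch of what that split could look like (the pip_build module name and the build_wheel hook here are hypothetical, not an agreed interface):

    # parent side: pip shells out rather than importing the backend itself
    import subprocess

    def build_in_isolation(build_env_python, source_dir):
        # run the hypothetical helper inside the isolated environment's
        # interpreter; only the helper imports the build tool's Python API
        subprocess.check_call([build_env_python, "-m", "pip_build", source_dir])

    # child side ("pip_build" helper, running in the isolated env) would do
    # roughly: import the named backend module, then call its Python-level
    # hooks, e.g. backend.build_wheel(...)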
and again, it's not about arguing for this idea, but just that your "in-process APIs are harder" argument doesn't decide the matter.
I agree that it doesn't decide the matter - I added a section to the rationale about this contention. We can literally do whatever we want to do - my reading of the interactions is that it's going to be less fragile overall to not have to have a Python->command process thunk that is a separate interface we have to carry around in pip everywhere. But we can make literally anything work, so I'm probably going to just say 'Nick and Donald can decide' and not argue about this.
I see no problem with evolving them in lockstep,
it's unnecessarily complex IMO. if they're really in lockstep, then they're one thing it seems to me.
I don't understand.
I'd rather avoid the chance for a bug where something tries to parse a v2 schema build description with a v1 schema parser.
but it won't happen? the pypa.yaml schema version would determine the parser version that's used.
We have two separate documents. If we mark the schema version in one, and not in the other, then we introduce room for skew. Yes, anything that skewed would be buggy, but there'd be no clue that such a bug was happening to anyone debugging it. All my experience debugging network protocols and data storage engines tells me to be explicit about the version of anything that can *possibly* be evaluated separately.
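To make the failure mode concrete, here is a minimal sketch of the explicit check I mean (the schema_version field name is hypothetical; the point is that each document names its own version, so a mismatch fails loudly instead of being silently mis-parsed):

    import yaml  # assumes PyYAML is available

    SUPPORTED_SCHEMA = 1

    def load_build_description(path):
        with open(path) as f:
            doc = yaml.safe_load(f)
        version = doc.get("schema_version")  # hypothetical field name
        if version != SUPPORTED_SCHEMA:
            raise ValueError(
                "%s declares schema %r, but this parser only understands "
                "schema %r" % (path, version, SUPPORTED_SCHEMA))
        return doc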
7) it's unclear when pip gets to run "dist-info" and when the result might be different. For example, we've discussed that run-time dependencies may get clarified *after* the build process... so this command might produce different results at different times?
Pip would run dist-info when determining the install-requires and extras for the package.
you're not addressing the point about how the act of building can create new run-time dependencies, per the whole long discussion with Nathaniel recently (his draft deals with this matter explicitly).
I don't follow how there is an issue: the dist-info hook would be responsible for figuring that out. In the numpy ABI case it would do discovery of the ABI it would build against and then use that.
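As a rough illustration (the hook name and signature are hypothetical, sketching the shape of the idea rather than anything final): the hook probes the build environment first, and only then knows the run-time requirement it should emit:

    # hypothetical dist-info hook for a numpy-ABI-style dependency
    import os

    def dist_info(output_dir):
        import numpy  # discover the ABI version we would build against
        requires = ["numpy>=%s" % numpy.__version__]
        # the run-time dependency is only knowable after this discovery,
        # which is why the hook's output can differ between invocations
        with open(os.path.join(output_dir, "METADATA"), "w") as f:
            for req in requires:
                f.write("Requires-Dist: %s\n" % req)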
-Rob
--
Robert Collins