I could probably be convinced that something which makes handling versions easier belongs in the standard lib, but that's about it.
There are a few reasons I don't want these things added to the stdlib themselves.
One of the major ones is "agility". We've seen with distutils how impossible it can be to make improvements to the system. Some of this is mitigated by the way the new system is being designed, with versioned metadata, but the problem doesn't completely go away. We can look at Python's past to see just how long any individual version sticks around, and we can assume that if something gets added now, that particular version will be around for a long time.
Another is how long it can take a new version of Python to become "standard", especially in the 3.x series, since the 3.x series itself isn't yet standard. Any changes made to the standard lib won't be usable for years and years. This can be mitigated by releasing a backport on PyPI, but if every version of Python except the latest is going to require installing these libs from PyPI in order to usefully interact with the "world", then you might as well just require all versions of Python to install those bits from PyPI.
Yet another is that by blessing a particular implementation, that implementation's behavior becomes the standard (indeed, the way the PEP system generally works, once something has been added to the standard lib the PEP becomes a historical document and the documentation becomes the standard). However, packaging is not like Enum, urllib, or smtplib. We are essentially defining a protocol, one that non-Python tools will be expected to use (for Debian packages and RPMs, for example). We are using these PEPs more like RFCs than proposals to include something in the stdlib.
There's also the case of usefulness. You mention some code that can parse the JSON metadata and validate it. Well, presumably we'll have the metadata for 2.0 nailed down by the time 3.4 comes around. So sure, 3.4 could have that, but then maybe we release metadata 2.1 and now 3.4 can only parse _some_ of the metadata. Maybe we release a metadata 3.0 and now it can't parse any metadata at all. But even if it can parse the metadata, what does it do with it? The major places you'd be validating the metadata (other than merely consuming it) are either the tools that create packages or PyPI checking files on upload. In the build tool case, they are either going to need to write their own code for actually creating the package or, more likely, they'll reuse something like distlib. If those tools are already going to be using a distlib-like library, then we might as well just keep the validation code there (see the sketch below).
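
To make that concrete, here's a minimal, hypothetical sketch of what a stdlib-frozen validator ends up looking like. None of these names are a real API; the point is only that the set of supported metadata versions gets frozen at the moment the Python release ships:

    # Hypothetical sketch, not a real API. A validator frozen into the
    # stdlib can only ever accept the metadata versions known when that
    # Python release shipped.
    import json

    SUPPORTED_VERSIONS = {"2.0"}  # frozen at release time

    def validate_metadata(raw):
        meta = json.loads(raw)
        version = meta.get("metadata_version")
        if version not in SUPPORTED_VERSIONS:
            # A later metadata 2.1 or 3.0 lands here, even if the
            # actual changes are ones this code could handle.
            raise ValueError("unsupported metadata version: %r" % version)
        for field in ("name", "version"):
            if field not in meta:
                raise ValueError("missing required field: %r" % field)
        return meta

The moment metadata 2.1 ships, every already-installed 3.4 starts rejecting it, no matter how trivial the change was.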
Now the version parsing stuff, which I said I could be convinced about, is slightly different. It is really sort of its own thing. It's not dependent on the other pieces of packaging to be useful, and it isn't versioned. It's also the only bit that's really useful on its own. People consuming the (future) PyPI API could use it to fully interpret the actual metadata, so it's kind of like JSON itself in that regard.
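
For illustration, here's roughly what that standalone usefulness looks like, using the third-party "packaging" library's Version class purely as a stand-in for whatever interface a stdlib module might expose (the exact API is an assumption):

    # Version comparison needs no other packaging machinery -- that is
    # the property that makes it a reasonable stdlib candidate on its own.
    from packaging.version import Version, InvalidVersion

    # Pre-releases sort before finals, post-releases after.
    assert Version("1.0rc1") < Version("1.0") < Version("1.0.post1") < Version("1.1")

    try:
        Version("not-a-version")
    except InvalidVersion:
        pass  # consumers can reject malformed versions up front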
On the installer side of things, the purist in me doesn't like adding it to the standard library for all the same reasons, but the pragmatic side of me wants it there because it enables fetching the other bits that are needed for "pip install X" to be a reasonable official response to these kinds of questions. But I pushed for, and still believe, that if a prerequisite for doing that involves "locking in" pip or any of its dependencies by adding them to the standard library, then I am vehemently against doing it.
Wow that was a lot of words...
-----------------
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA