
On 2021-04-01 09:36, Adi Roiban wrote:
> For example, I don't understand why you need to create an sdist and from that sdist create a wheel and then install that wheel inside tox for each tox test environment.
> Why not create the wheel once and install the same wheel in all environments?
Are you referring to the use of tox-wheel? Note that the workflow I set up in towncrier has a single build job that creates an sdist, builds a wheel from it one time, and uploads both as workflow artifacts; every other test job and the publish job then use those exact pre-built files. So, I think we agree here. I'll let Grainger weigh in with their present take on tox-wheel if they want.
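A sketch of that shape, with hypothetical job and artifact names rather than towncrier's actual workflow:

```yaml
# Build the sdist and wheel exactly once; every other job reuses them.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: python -m pip install build
      - run: python -m build        # writes dist/*.tar.gz and dist/*.whl
      - uses: actions/upload-artifact@v2
        with:
          name: dist
          path: dist/

  test:
    needs: build                    # never rebuilds, just downloads
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: dist
          path: dist/
      - run: python -m pip install dist/*.whl
```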
> There is an extra 1 minute required for each job to generate the coverage XML file. We can download all the coverage artifacts and generate the XML only once.
I did coverage collection from all jobs for pymodbus. It looks like I still had the XML report generating in each job, but that could presumably be shifted to the single downstream job. Anyway, I'm sure several of us could do this off the top of our heads, but in case it's of any interest, here's what I did. Or maybe you'll just point out something silly I did there that I should learn from. :]
> That is one big matrix :) ... I don't understand why you need test and check and coverage, each with a separate matrix.
I don't think it really is as big as you think. The test and coverage matrices are separate (and the coverage "matrix" is barely a matrix, having only one job) because that's the level at which GitHub Actions lets you describe dependencies: `needs:` applies to whole jobs, not to individual matrix entries. If you want to collect coverage during the test runs and then combine all the results before uploading to codecov.io etc. a single time, I think this structure is simply mandatory; the coverage job must be separate so it can depend on the entire test matrix. Having the checks be a separate matrix is neither here nor there on this topic, I think. And the All job is compensation for a missing GitHub Actions feature: there is no built-in way to require "every job in the workflow passed" as a single branch-protection status.
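A skeleton of that dependency structure might look like this (job names and matrix contents are illustrative, not the actual workflow):

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python: ["3.6", "3.7", "3.8", "3.9"]
    runs-on: ${{ matrix.os }}
    steps:
      - run: echo "run tests under coverage, upload raw data as artifact"

  coverage:
    needs: test      # waits for *every* entry of the test matrix
    runs-on: ubuntu-latest
    steps:
      - run: echo "download all coverage artifacts, combine, upload once"

  all:
    # Single status to mark "required" in branch protection, since GitHub
    # offers no built-in "all matrix jobs passed" check.
    needs: [test, coverage]
    runs-on: ubuntu-latest
    steps:
      - run: echo "all jobs succeeded"
```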
> I am thinking of something like this:
> * A matrix to run jobs with coverage on each environment (os + py combination + extra stuff: noipv6, nodeps, etc.)
> * Each job from the matrix will run an initial combine to merge sub-process coverage.
> * Each job will upload one raw Python coverage file.
> * A single job will download those coverage files, combine them once again, and generate an XML file to be pushed to codecov.io.
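For what it's worth, those steps might sketch out roughly like this in workflow terms (illustrative names; this assumes coverage.py runs in parallel mode so each job leaves `.coverage.*` data files behind):

```yaml
  test:
    steps:
      # ...tests run under coverage with parallel mode enabled...
      - run: python -m coverage combine   # merge sub-process data per job
      - uses: actions/upload-artifact@v2
        with:
          name: coverage-${{ matrix.os }}-${{ matrix.python }}
          path: .coverage

  coverage:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2  # fetch every per-job data file
      - run: python -m coverage combine     # combine across jobs, once
      - run: python -m coverage xml         # single coverage.xml
      - run: echo "push coverage.xml to codecov.io"
```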
Sounds good. It sounds like exactly what I suggested. :] Except for maybe a misunderstanding about what level of separation is needed for the inter-job dependencies. Hopefully this clarifies things a bit; I think we actually intend the same thing. Unless I've missed something else?

Cheers,
-kyle