Add configuration workflow job, which automatically configures the rest of the workflow run #43
Conversation
And other changes aimed at minimising deployment and maintenance overhead. Ideally, one shouldn't need to update this workflow file for new Python versions, and should not need to edit it at all for each repository it is deployed to - the workflow file can be deployed as-is.

* No longer need to specify PURE or NOARCH - this will be introspected during the configure job (see the sketch after this description).
* No longer need to specify the package name - it will be introspected during the configure job.
* No longer need to specify which versions of Python to build for in the case of a non-pure wheel or non-noarch conda package - the configure job will set variables to target all currently-supported Python versions.
* Use `cibuildwheel` to build impure wheels on all platforms instead of only `manylinux` for Linux (now deprecated).
* No longer need to adjust the list of OSs to run jobs on - this will be determined automatically based on pure/noarch status.
* No longer need to specify the repository in job-level if-statements to prevent jobs running on forks. Instead, jobs simply do run on forks, but upload steps will be skipped if the relevant secrets are absent.
* Removed the "ignore tags" step that was designed to prevent running the workflow twice on a commit with tags and uploading twice to the Anaconda test label, resulting in an error. Instead, we upload releases to real PyPI and the Anaconda main label, and non-releases to test PyPI and the Anaconda test label, so there is no duplication. We also add the `--skip-existing` flag to Anaconda uploads, since sometimes you just need to re-run workflows anyway; this should be idempotent and not crash simply because some previous uploads succeeded.
* Add `if-no-files-found: error` to all `Upload Artifact` actions, with step-level `if` statements so they only run when we expect output. This reduces the incidence of failed runs that produced no artifacts appearing successful.
* Fix macOS impure conda builds. These previously didn't work at all on the newer macOS runners, because conda's compiler toolchains are super duper out of date. Instead, we use conda-forge toolchains by installing miniforge instead of miniconda.
* Use bash to build conda packages on Windows, same as Linux/macOS - it was never necessary to use cmd.exe in the first place; I think doing so just masked the path-length limit problem for some packages, which we now address by setting the conda build root path to a directory with a very short filepath.

**Changes needed to use this workflow**

It was not possible to get rid of absolutely all configuration, but at least we can obviate the need to edit the workflow file itself, which should now be able to be identical across repositories. To have this workflow do anaconda uploads, you'll need to set the `ANACONDA_USER` variable in your repository or organisation's Actions variables (similar to how secrets are set). I've already done this for the `labscript-suite` organisation.
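To make the shape of this concrete, here is a minimal, hedged sketch of the configure-job pattern described above. The job names, the introspection logic, and the artifact names are illustrative assumptions, not the actual workflow's contents:

```yaml
jobs:
  configure:
    runs-on: ubuntu-latest
    outputs:
      pure: ${{ steps.introspect.outputs.pure }}
      os-list: ${{ steps.introspect.outputs.os-list }}
    steps:
      - uses: actions/checkout@v4
      - name: Introspect package
        id: introspect
        # Placeholder logic: the real job inspects the package to decide
        # purity and which OSs/Python versions to target.
        run: |
          echo "pure=false" >> "$GITHUB_OUTPUT"
          echo 'os-list=["ubuntu-latest", "windows-latest", "macos-latest"]' >> "$GITHUB_OUTPUT"

  build:
    needs: configure
    strategy:
      matrix:
        os: ${{ fromJSON(needs.configure.outputs.os-list) }}
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - name: Build wheels
        if: needs.configure.outputs.pure == 'false'
        # cibuildwheel builds impure wheels for all supported Pythons:
        run: pipx run cibuildwheel --output-dir wheelhouse
      - uses: actions/upload-artifact@v4
        if: needs.configure.outputs.pure == 'false'
        with:
          name: wheels-${{ matrix.os }}
          path: wheelhouse/*.whl
          if-no-files-found: error
```

Because the matrix is derived from the configure job's outputs, downstream jobs never need editing for new Python versions or OSs.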
Hard-code `release-branch-semver` and `no-local-version` - these are no longer configured to be different in CI than locally. Drop minimum version requirements for build requirements where those versions are now several years old.

We split the other steps into separate jobs for consistency. This is important for moving to PyPI attested uploads, and is generally recommended by the PyPI upload actions anyway.
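As a hedged illustration of that job split (job names and paths are assumed, not taken from the actual workflow): the build job hands its output to the publish job via artifacts, so the publishing steps can live in a job that is given only the permissions it needs - the shape the PyPI upload actions recommend for attested uploads:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the sdist and wheel into dist/:
      - run: pipx run build
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          if-no-files-found: error

  publish:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      # ...upload steps go here, isolated from the build environment
```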
Since we can no longer use the presence of TestPyPI and PyPI API keys to know whether we should attempt to upload releases (i.e. whether we're not running in a fork), we do need some explicit configuration after all. Rather than put this all in repository variables, we do it in a config file in the workflow directory. `RELEASE_REPO` is set to the repository releases should be made from (otherwise uploads are skipped). For consistency, `ANACONDA_USER` is set there too, rather than as a repository variable. For completeness, everything else is configurable there as well.

Re-add the "ignore tags" step - with fixes, since it was incorrect given recent changes to the workflow's `on:` configuration. It is needed due to race conditions: we clone the repo in multiple jobs, and a tag could appear in between them, leading to version inconsistencies.
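The config file itself isn't shown in this thread. As a sketch under assumptions (the filename, the secret name, the package paths, and the key=value format are all guesses), it could be a simple file that jobs load into the environment and gate uploads on:

```yaml
# Assumed config file at .github/workflows/release-config:
#   RELEASE_REPO=labscript-suite/qtutils
#   ANACONDA_USER=labscript-suite

jobs:
  release-anaconda:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Load release config
        run: cat .github/workflows/release-config >> "$GITHUB_ENV"
      - name: Upload to Anaconda
        # Forks are skipped automatically: their repo name won't match RELEASE_REPO
        if: github.repository == env.RELEASE_REPO
        run: |
          anaconda --token "${{ secrets.ANACONDA_API_TOKEN }}" upload \
            --user "$ANACONDA_USER" --skip-existing conda_packages/*/*
```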
Ok, this is working, and I've used it in […].

To enable trusted publisher releases, one just needs to go to https://pypi.org/manage/project/qtutils/settings/publishing/ (or the same with test.pypi) and fill in the details. We are using the optional environments (named […]).

Unfortunately, if we're using trusted publisher releases, then we can't infer whether we should attempt to upload to PyPI/TestPyPI based on the presence of the needed secret. So we do need some per-repository configuration after all. Rather than make this either in the form of repository variables, or hard-coded within the workflow file itself, I've decided on having a separate […] file.

I made most things configurable, even though everything can be introspected other than […].

So for deployment to other repos, we'll need to copy both files into other repos and modify […].

This is simple enough that it could be scripted to roll it out to master and release branches on all repos, though I'd probably make a checklist of other changes to make at the same time, such as: […]
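For reference, wiring a job up to a trusted publisher looks roughly like this - a sketch, where the environment name `release` and the job layout are assumptions that must match whatever was entered on the PyPI publishing settings page:

```yaml
  release-pypi:
    needs: build
    runs-on: ubuntu-latest
    # Must match the (optional) environment named in the trusted publisher config:
    environment: release
    permissions:
      id-token: write   # lets PyPI verify the workflow via OIDC; no API token needed
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
        # For TestPyPI uploads, point at its endpoint instead:
        # with:
        #   repository-url: https://test.pypi.org/legacy/
```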
I'll merge this soon if there aren't objections or additional suggestions. Have already done the trusted publisher setup for this repo.
Well, I think this is a pretty solid upgrade. Keeping the config options localized is a decent compromise. As a suggestion (that you have probably already considered): could we introspect whether the repo running the workflow is a fork, so we don't have to set RELEASE_REPO? A quick look showed this SO suggestion (with the comments being important as well). Even if that works, I still like the idea of a shell config script that collects all the important things (selecting release targets etc.), even if the defaults are what we want most of the time.
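For concreteness, the kind of check that SO suggestion describes is a condition on the event payload's repository object - a hedged sketch; whether the `fork` field is populated for every trigger event is presumably what the linked comments caveat:

```yaml
jobs:
  release:
    # Skip release jobs entirely when the workflow runs in a fork:
    if: ${{ !github.event.repository.fork }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "Running in the canonical repository"
```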
Note: for a non-pure installation, you get an error configuring on apparently any […]