| Commit message | Author | Age | Lines |
Use `docker-compose run` instead of `docker-compose up`. This is more
appropriate since the container is only needed for one command; `up`
was actually starting the whole snekbox server. Furthermore, `run`
supports the `--rm` option to remove the container when the command
finishes.
As an extra precaution, use `docker-compose down` in the self-hosted
runner to also remove images, volumes, networks, and any other
containers that were somehow missed. Removing images also prevents
disk usage from building up. This is not necessary for the GH-hosted
runner since a new VM is used for each run.
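As a sketch, the cleanup described above might look like the following workflow steps. The step names, the `snekbox` service name, and the test command are assumptions for illustration, not the actual workflow:

```yaml
- name: Run tests in a one-off container
  # `run` starts a single container for one command; --rm removes it afterwards.
  run: docker-compose run --rm snekbox pipenv run test
- name: Clean up (self-hosted runner only)
  if: always()
  # `down` removes containers and networks; --rmi/-v also remove images and volumes.
  run: docker-compose down --rmi all -v --remove-orphans
```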
The step was running even if the pre-commit hooks step never ran.
Remove the dependency on the container so the lint job can run in
parallel with the build job. More time has to be spent installing
Python dependencies, but this is offset by not having to download and
load the image artefact or wait for the build job.
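A minimal sketch of the resulting layout, assuming job names `build` and `lint` — the point is simply the absence of `needs: build` on the lint job:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... build and upload the image artefact ...
  lint:
    # No `needs: build`, so this runs in parallel with the build job.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: pip install pipenv && pipenv install --dev --system
```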
See https://github.com/TheKevJames/coveralls-python/issues/240
Unlike the cache action, the build-push action's GHA cache feature
seems to only do an exact comparison for the scope. Thus, new commits
lead to cache misses.
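For comparison, the cache action's prefix-based `restore-keys` avoid this problem; a sketch with illustrative key names:

```yaml
- uses: actions/cache@v2
  with:
    path: /tmp/.buildx-cache
    key: buildx-${{ github.ref }}-${{ github.sha }}
    # Prefix matching falls back to the most recent cache from the same ref,
    # unlike the build-push action's exact scope comparison.
    restore-keys: |
      buildx-${{ github.ref }}-
```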
Make the artefact and file names identical to simplify things. The
artefact name doesn't have to be unique anyway since it can only be
downloaded by the same workflow run.
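In practice this might look like the following, where the `image.tar` name is illustrative:

```yaml
- uses: actions/upload-artifact@v2
  with:
    name: image.tar  # identical to the file name below
    path: image.tar
```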
`load: true` was already creating a tarball, but it was being loaded
immediately. Since no other Docker builds run in this job, loading it
is pointless. The action can still be leveraged to create the tarball
instead of manually invoking `docker save`.
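With the build-push action this can be expressed via the `outputs` input instead of `load: true` (the destination path is illustrative):

```yaml
- uses: docker/build-push-action@v2
  with:
    # Write the image straight to a tarball; nothing is loaded into the
    # host's Docker, and no separate `docker save` step is needed.
    outputs: type=docker,dest=/tmp/image.tar
```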
Python 3.9 is available on the self-hosted runner and is sufficient to
run coveralls. Trying to get the setup-python action supported on
the self-hosted runner proved to be problematic.
The self-hosted runner has cgroupv2 enabled; it's only needed to run
the tests on a cgroupv2 system. Lint, push the image, and deploy it
on only one runner to avoid redundancy.
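A sketch of how a matrix might express this — the runner labels, commands, and the choice of which runner pushes are all assumptions:

```yaml
strategy:
  matrix:
    # The self-hosted runner is the one with cgroupv2 enabled.
    runner: [ubuntu-latest, self-hosted]
runs-on: ${{ matrix.runner }}
steps:
  - name: Run tests  # tests run on both runners
    run: docker-compose run --rm snekbox pipenv run test
  - name: Push image
    # Lint/push/deploy only happen once to avoid redundancy.
    if: matrix.runner == 'ubuntu-latest'
    run: docker-compose push
```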
Signed-off-by: Hassan Abouelela <[email protected]>
CI was building the image twice: once with dev dependencies and again
without. Splitting the pipenv command into separate layers allows the
second build in CI to take advantage of the cached base-dependency
layer shared across both builds.
Install numpy along with the dev dependencies within the container.
Previously it was installed in CI only, which meant extra work for
those running tests locally.
Install numpy to the correct site.
Generating the report in the same step resulted in the report exit code
overriding the exit code of the test runner.
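Splitting them into two steps preserves the test step's exit code; a sketch, with the actual commands being assumptions:

```yaml
- name: Run tests
  run: pipenv run coverage run -m unittest
- name: Generate report
  # `always()` still produces a report after failures, but in a separate
  # step its exit code can no longer mask the test runner's.
  if: always()
  run: pipenv run coverage report
```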
Avoid redundant specification of Docker settings.
The compose file is set up to build all stages. This makes sense for
local development; both an interactive shell and running the webserver
are useful. Therefore, the image built is tagged "snekbox:dev".
However, CI does not need to run a webserver. It is therefore sufficient
for it to only build to the venv stage, and it does exactly that. The
image in CI is tagged as "snekbox-venv:<git sha>".
To accommodate the difference in image tags, the image tag suffix can
be set with the new IMAGE_SUFFIX environment variable. Docker
Compose will use this to determine the image from which to create a
container.
A TTY needs to be allocated to prevent the container from exiting
immediately after starting. This is probably because the entrypoint
is Python (inherited from the base image), and the REPL relies on a TTY.
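The relevant compose fragment might look like this; the exact interpolation and the `snekbox` service name are assumptions:

```yaml
services:
  snekbox:
    # IMAGE_SUFFIX selects the image to run from: e.g. ":dev" locally,
    # "-venv:<git sha>" in CI.
    image: snekbox${IMAGE_SUFFIX}
    build:
      context: .
    # The inherited Python entrypoint drops into a REPL, which exits
    # immediately without a TTY, so allocate one.
    tty: true
```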
The Python script uses the same underlying code Falcon uses to invoke
nsjail. It allows for the omission of redundant shell code that set up
cgroups and nsjail args.
This is also a step towards removing dependence on shell scripts and
thus resolving #73.
Pre-commit requires git.
One of the unit tests depends on numpy.
Simple stuff. Basically copy-paste from site.
One problem with our master builds is that they retain more and more
layers from old builds, as there is no easy way to purge them from the
cache. Since such a master cache would have no benefit over
repository-based caching, I've removed persistent local caching for
non-PR builds.
I accidentally escaped a single quote in a run command; I've removed it
now. I also changed the job name to `lint-test-build-push` to better
reflect the contents of the job.
I've migrated the build pipeline to GitHub Actions and changed the
container registry to GitHub Container Registry. In the process, I've
made some changes to our docker setup and caching:
- We are now using a single multi-stage Dockerfile
Instead of three separate Dockerfiles, the three images we want are
now built from one Dockerfile using build targets.
In part, this is because we're now using the docker buildx build action
currently recommended by docker. This new engine runs in a sandboxed
mode, meaning that while it can export built images to `docker` running
in the host, it cannot import local images from it to base builds on.
- Docker builds are now cached within GitHub Actions
The builds are now cached using the GitHub Actions cache of the build
cache directory. The cache keys try to match a cache generated by a
build that matches the current build as closely as possible. In case of
a cache miss, we fall back to caching from the latest image pushed to
the container repository.
- The `base` and `venv` images now have an inline cache manifest
In order to fall back intelligently to caching from the repository, the
final build and push action for the `base` and `venv` images includes an
"inline" cache manifest. This means that the build process can inspect,
without pulling, if it makes sense to pull layers to speed up the build.
The other option, pushing a cache manifest separately (not inline), is
currently not supported by GHCR.
The custom caching script has been removed.
- Linting errors are now added as GitHub Actions annotations
Just like for some of our other pipelines, linting now generates
annotations if linting errors are observed.
- Coverage is pushed to coveralls.io
A coverage summary is now pushed to coveralls.io. Each CI run will get a
unique job that's linked in the CI output. If the run is attached to a
PR, coveralls.io will automatically add a check link with the coverage
result to the PR as well.
- The README.md, Pipfile, docker-compose, and scripts have been updated
As we now need to pull from and link to the GHCR, I've updated the other
files to reflect these changes, including Pipfile run commands. I've
also changed the CI badge and added a coveralls.io badge.
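The inline-cache fallback described above boils down to a build-and-push step along these lines; the registry path, tag, and cache directory are illustrative:

```yaml
- uses: docker/build-push-action@v2
  with:
    push: true
    tags: ghcr.io/python-discord/snekbox-venv:latest
    # Try the GHA-cached build cache first, then fall back to the layers
    # of the last image pushed to the registry.
    cache-from: |
      type=local,src=/tmp/.buildx-cache
      type=registry,ref=ghcr.io/python-discord/snekbox-venv:latest
    # `type=inline` embeds the cache manifest in the pushed image, so later
    # builds can decide without pulling whether its layers are reusable.
    cache-to: type=inline
```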