| Commit message | Author | Age | Lines |
Mount /dev/shm in the container by setting ipc to "private". This is
the same as "none" (the previous value) with the only difference being
that shm is mounted. This is needed for integration tests to pass.
The integration tests always relied on shared memory due to their use of
multiprocessing. They managed to work because glibc used to fall back to
/tmp if /dev/shm wasn't available. However, newer versions of glibc,
which Debian Bookworm now uses, removed that fallback behaviour.
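A minimal sketch of the relevant Compose service setting (the service name is illustrative, not necessarily the repository's actual file):

```yaml
services:
  snekbox:
    # "private" gives the container its own IPC namespace with /dev/shm
    # mounted; "none" also isolates IPC but leaves /dev/shm unmounted.
    ipc: private
```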
|
As recommended by https://docs.docker.com/compose/compose-file/compose-file-v3/#short-syntax-1
|
Use a more unique name to avoid accidentally using the value of a
similar env var that was set for an unrelated reason.
|
Force the image to be built if it doesn't exist.
|
Also remove the reliance on the container needing to mount the host's
files to the same directory during local testing.
Fix #135
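A sketch of the kind of bind mount this concerns (paths are illustrative, not the repository's actual config):

```yaml
services:
  snekbox:
    volumes:
      # The container path no longer has to mirror the host checkout path.
      - .:/snekbox
```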
|
Avoid redundant specification of Docker settings.
The compose file is set up to build all stages. This makes sense for
local development; both an interactive shell and running the webserver
are useful. Therefore, the image built is tagged "snekbox:dev".
However, CI does not need to run a webserver. It is therefore sufficient
for it to only build to the venv stage, and it does exactly that. The
image in CI is tagged as "snekbox-venv:<git sha>".
To accommodate the difference in image tags, the image tag's suffix can
be set with the new IMAGE_SUFFIX environment variable. Docker Compose
uses this to determine the image from which to create a container.
A TTY needs to be allocated to prevent the container from exiting
immediately after starting. This is probably because the entrypoint
is Python (inherited from the base image), and the REPL relies on a TTY.
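An illustrative Compose fragment for the two settings above (the image name pattern and values are assumptions, not the exact file):

```yaml
services:
  snekbox:
    # IMAGE_SUFFIX is empty for local development ("snekbox:dev") and set
    # in CI to select the venv-stage image instead.
    image: snekbox${IMAGE_SUFFIX}:dev
    # Keep the Python REPL entrypoint alive instead of exiting immediately.
    tty: true
```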
|
Managing development containers through Docker Compose is convenient.
However, it isn't quite flexible enough to facilitate both development
and normal use. It's not really worth accommodating the latter since
the container gets pushed to a registry and that's the intended way to
run the service. Anyone that is checking out the repository and
therefore has access to the compose file is likely a developer, not a
user.
|
One problem with our master builds is that they retain more and more
layers from old builds, as there is no easy way to purge them from the
cache. Since such a master cache offers no benefit over repository-based
caching, I've removed persistent local caching for non-PR builds.
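Under these assumptions, the gating might look like this in a workflow (a hypothetical fragment, not the repository's actual pipeline):

```yaml
# Restore the persistent layer cache only for pull request builds;
# master builds fall back to registry-based caching instead.
- name: Restore Docker layer cache
  if: github.event_name == 'pull_request'
  uses: actions/cache@v3
  with:
    path: /tmp/.buildx-cache
    key: buildx-${{ github.sha }}
    restore-keys: |
      buildx-
```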
|
I've migrated the build pipeline to GitHub Actions and changed the
container registry to GitHub Container Registry. In the process, I've
made some changes to our docker setup and caching:
- We are now using a single multi-stage Dockerfile
Instead of three separate dockerfiles, we are now using a
single multi-stage Dockerfile that can be used to build the three images
we want using build targets.
In part, this is because we're now using the docker buildx build action
currently recommended by Docker. This new engine runs in a sandboxed
mode, meaning that while it can export built images to `docker` running
on the host, it cannot import local images from it to base builds on.
- Docker builds are now cached within GitHub Actions
The builds are now cached using the GitHub Actions cache of the build
cache directory. The cache keys try to match a cache generated by a
build that matches the current build as closely as possible. In case of
a cache miss, we fall back to caching from the latest image pushed to
the container repository.
- The `base` and `venv` images now have an inline cache manifest
In order to fall back intelligently to caching from the repository, the
final build and push action for the `base` and `venv` images includes an
"inline" cache manifest. This means that the build process can inspect,
without pulling, if it makes sense to pull layers to speed up the build.
The other option, pushing a cache manifest separately (not inline), is
currently not supported by GHCR.
The custom caching script has been removed.
- Linting errors are now added as GitHub Actions annotations
Just like for some of our other pipelines, linting now generates
annotations if linting errors are observed.
- Coverage is pushed to coveralls.io
A coverage summary is now pushed to coveralls.io. Each CI run will get a
unique job that's linked in the CI output. If the run is attached to a
PR, coveralls.io will automatically add a check link with the coverage
result to the PR as well.
- The README.md, Pipfile, docker-compose, and scripts have been updated
As we now need to pull from and link to the GHCR, I've updated the other
files to reflect these changes, including Pipfile run commands. I've
also changed the CI badge and added a coveralls.io badge.
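The inline-cache behaviour described above can be sketched with the buildx build-push action (the registry path and tags are illustrative placeholders):

```yaml
- name: Build and push venv image
  uses: docker/build-push-action@v2
  with:
    target: venv
    tags: ghcr.io/owner/snekbox-venv:${{ github.sha }}
    push: true
    # Try layers from the last pushed image first...
    cache-from: type=registry,ref=ghcr.io/owner/snekbox-venv:latest
    # ...and embed the cache manifest inline in the pushed image, since
    # GHCR does not support a separately pushed cache manifest.
    cache-to: type=inline
```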
|
* Venv image can sync dev dependencies
* Copy tests to image
* Add a Pipenv script for running a development shell in a container
* Add Pipenv scripts for building dev images
|
* Create a separate image for the virtual environment
* Build NsJail in the base image
* Remove the NsJail binaries
* Replace tini with Docker's init feature
* Update Python to 3.7.3
|
Make snekbox a class
Add NsJail 2.5 (compiled on Alpine 3.7)
Execute Python code via NsJail
|
Code optimisations; update README
|