| Commit message | Author | Age | Lines |
One problem with our master builds is that they accumulate more and
more layers from old builds, as there is no easy way to purge them from
the cache. Since such a master cache has no benefit over
repository-based caching, I've removed persistent local caching for
non-PR builds.
I've migrated the build pipeline to GitHub Actions and changed the
container registry to GitHub Container Registry. In the process, I've
made some changes to our docker setup and caching:
- We are now using a single multi-stage Dockerfile
Instead of three separate Dockerfiles, we now use a single multi-stage
Dockerfile that builds the three images we want via build targets.
In part, this is because we're now using the docker buildx build action
currently recommended by docker. This new engine runs in a sandboxed
mode, meaning that while it can export built images to `docker` running
in the host, it cannot import local images from it to base builds on.
- Docker builds are now cached within GitHub Actions
The build cache directory is now stored in the GitHub Actions cache.
The cache keys are chosen to match the cache of a previous build that
resembles the current build as closely as possible. In case of a cache
miss, we fall back to caching from the latest image pushed to the
container repository.
- The `base` and `venv` images now have an inline cache manifest
In order to fall back intelligently to caching from the repository, the
final build and push action for the `base` and `venv` images includes an
"inline" cache manifest. This lets the build process inspect, without
pulling, whether pulling layers would speed up the build. The other
option, pushing the cache manifest separately (not inline), is
currently not supported by GHCR.
The custom caching script has been removed.
- Linting errors are now added as GitHub Actions annotations
Just like for some of our other pipelines, linting now generates
annotations if linting errors are observed.
- Coverage is pushed to coveralls.io
A coverage summary is now pushed to coveralls.io. Each CI run will get a
unique job that's linked in the CI output. If the run is attached to a
PR, coveralls.io will automatically add a check link with the coverage
result to the PR as well.
- The README.md, Pipfile, docker-compose, and scripts have been updated
As we now need to pull from and link to the GHCR, I've updated the other
files to reflect these changes, including Pipfile run commands. I've
also changed the CI badge and added a coveralls.io badge.
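A rough sketch of what the resulting build invocations might look like (the image path, owner, and target names below are placeholders, not necessarily the repository's actual values). `--cache-to type=inline` embeds the cache manifest in the pushed image, and `--cache-from type=registry` lets a later build reuse its layers. The commands are echoed rather than executed:

```shell
# Hedged sketch: all names are illustrative placeholders.
IMAGE="ghcr.io/OWNER/snekbox"        # hypothetical GHCR image path
TARGETS="base venv final"            # hypothetical Dockerfile targets
CMDS=""
for target in $TARGETS; do
  cmd="docker buildx build --target $target \
    --cache-from type=registry,ref=$IMAGE-$target:latest \
    --cache-to type=inline --tag $IMAGE-$target:latest --push ."
  echo "$cmd"                        # printed, not executed (dry run)
  CMDS="$CMDS $cmd"
done
```

One multi-stage Dockerfile plus `--target` replaces the three separate Dockerfiles, and the pushed image itself doubles as the registry cache fallback.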
`pipenv run` creates a venv because it cannot detect that a `--system`
install was done. The solution is to invoke gunicorn directly.
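Concretely, this amounts to bypassing pipenv in the container entrypoint. A sketch of the change (the bind address and app module are placeholders, not the project's actual values):

```dockerfile
# Before: pipenv cannot see the --system install and builds a fresh venv
# CMD ["pipenv", "run", "gunicorn", ...]
# After: call gunicorn from the system interpreter directly
CMD ["gunicorn", "--bind", "0.0.0.0:8080", "app:application"]
```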
There will be more config files to come so it's cleaner to have them
together than littering the root directory with more files.
A virtual environment is redundant in the context of deployment. It
just increases the size and build time of the image.
* Replace venv with system interpreter
* Mount Python binaries in /usr/local/bin in NsJail
* Fix #61: Python symlink in venv not resolving
* Re-lock Pipfile because it wasn't up to date according to
  `pipenv install --deploy`
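The venv-less install is the usual pipenv-in-Docker pattern; a minimal sketch, with an illustrative base image tag:

```dockerfile
FROM python:3.7-slim
COPY Pipfile Pipfile.lock ./
# --system installs into the system interpreter (no venv);
# --deploy aborts if Pipfile.lock is out of date with Pipfile
RUN pip install pipenv && pipenv install --deploy --system
```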
devfs and sysfs were problematic since they were being mounted as
tmpfs, which is r/w. For example, the Python process could write to
cgroups. Now, only what is needed to run Python gets mounted. This
boils down to the venv itself and some shared libraries Python needs.
* Use a config file for NsJail instead of command-line options
* Map the 65534 (nobody) user & group inside the user namespace to
  65534 outside the namespace, rather than to the current uid/gid
  (which was 0, i.e. root)
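NsJail config files use protobuf text format; the stanzas described above might look roughly like the following sketch (paths and exact field usage are illustrative — NsJail's config.proto is the authoritative schema):

```
# Bind-mount only what Python needs, read-only
mount {
  src: "/usr/lib"
  dst: "/usr/lib"
  is_bind: true
  rw: false
}
# Map nobody (65534) inside the user namespace to 65534 outside
uidmap { inside_id: "65534" outside_id: "65534" }
gidmap { inside_id: "65534" outside_id: "65534" }
```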
A C compiler is needed to build some of the Python libraries because
they don't have wheels >:(
Unlike Alpine, Python manylinux wheels work on Debian because it's a
glibc-based distro.
Currently, the dev image is broken due to typed-ast being present and
requiring GCC and Python.h. Supposedly that package will be made
optional by flake8-annotations in a later update.
* Use the Python image for the base image's first stage to save
  downloading a separate Alpine image.
Unspecify the depth to make the clone non-shallow again. A depth of 1 was too
shallow as it only allowed the latest commit to be cloned. An arbitrary larger
depth would still break eventually. The repository is small enough to not
warrant a shallow clone anyway.
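The depth-1 problem can be reproduced locally; this scratch demo (a throwaway temp repo, not the project's setup) shows that a `--depth 1` clone contains only the newest commit:

```shell
# Build a throwaway repo with two commits, then shallow-clone it
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.email=x@x -c user.name=x \
    commit -q --allow-empty -m "first"
git -C "$tmp/src" -c user.email=x@x -c user.name=x \
    commit -q --allow-empty -m "second"
# file:// forces the transport that honors --depth for local paths
git clone -q --depth 1 "file://$tmp/src" "$tmp/shallow"
count=$(git -C "$tmp/shallow" rev-list --count HEAD)
echo "$count"   # only the latest commit survives the shallow clone
```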
* Put scripts in a new scripts folder
* Venv image can sync dev dependencies
* Copy tests to image
* Add a Pipenv script for running a development shell in a container
* Add Pipenv scripts for building dev images
* Create a separate image for the virtual environment
* Build NsJail in the base image
* Remove the NsJail binaries
* Replace tini with Docker's init feature
* Update Python to 3.7.3
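Docker's init feature replaces the bundled tini binary with an engine-managed init process. A hedged sketch of how that might look in docker-compose (service name illustrative); on the CLI the equivalent is `docker run --init`:

```yaml
services:
  snekbox:            # hypothetical service name
    init: true        # engine-provided PID 1 reaper, replacing tini
```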
This PR is to add CI settings to master and to test the PR CI pipeline.
* Make snekbox a class
* Add NsJail 2.5 (compiled on Alpine 3.7)
* Execute Python code via NsJail