It is a flake8 plugin which enforces PEP 8 naming conventions.
Resolves #63
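For illustration, and assuming the plugin in question is pep8-naming (the
usual flake8 plugin for PEP 8 naming checks), these are the kinds of
violations it reports; the snippet itself is made up:
```py
class my_service:            # N801: class name should use CapWords convention
    def GetValue(self):      # N802: function name should be lowercase
        SomeResult = 42      # N806: variable in function should be lowercase
        return SomeResult
```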
One of the unit tests depends on numpy.
Co-authored-by: Joe Banks <[email protected]>
Isolate snekbox's dependencies from the packages available within the
Python interpreter. Disable Python's default behaviour of site-dependent
manipulations of sys.path. The custom directory looks like a user site
to allow `pip install --user` to work with it. However, snekbox will see
it as simply an additional search path for modules rather than as a user
site.
Disable isolated mode (`-I`) because it implies `-E`, which ignores
`PYTHON*` environment variables. This conflicts with the reliance on
`PYTHONPATH`.
Specify `PYTHONUSERBASE` in the Dockerfile to make installing packages
to expose to the sandbox more intuitive for users. Otherwise, they'd
have to remember to set this variable every time they need to install
something.
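A rough sketch of how these pieces could fit together; the directory,
Python version, and interpreter flags below are illustrative assumptions,
not necessarily what snekbox actually uses:
```py
import os
import subprocess
import sys

# Hypothetical location; the real directory is defined in the Dockerfile.
user_base = "/snekbox/user_base"

env = {
    # Lets `pip install --user` drop packages into the custom directory.
    "PYTHONUSERBASE": user_base,
    # The sandboxed interpreter sees the directory as a plain module search
    # path rather than as a user site.
    "PYTHONPATH": os.path.join(user_base, "lib/python3.8/site-packages"),
}

# -S skips the site module's default sys.path manipulations.
# -I is avoided: it implies -E, which would make the interpreter ignore
# PYTHONPATH and every other PYTHON* variable set above.
subprocess.run([sys.executable, "-S", "-c", "import sys; print(sys.path)"], env=env)
```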
Make flake8 properly run through pre-commit in PyCharm.
I've added a test that checks if output exceeding the limit is
correctly truncated. To make the test more robust, I've defined a
constant for the read chunk size.
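A hedged sketch of what such a test could look like; `NsJail`, `python3()`,
`READ_CHUNK_SIZE`, and `OUTPUT_MAX` only loosely mirror snekbox's internals
and should be read as assumptions rather than the exact names:
```py
from snekbox.nsjail import NsJail

# Hypothetical stand-ins for the constants mentioned above.
READ_CHUNK_SIZE = 10_000
OUTPUT_MAX = 1_000_000

def test_stdout_is_truncated_at_output_limit():
    nsjail = NsJail()
    # Produces output far beyond the limit.
    result = nsjail.python3("while True:\n    print('x' * 1000)")
    # The loop may keep the chunk that pushed it over the limit,
    # so allow one chunk of slack.
    assert len(result.stdout) <= OUTPUT_MAX + READ_CHUNK_SIZE
```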
When a subprocess started with Popen is killed by signal `N`, Popen
reports `-N` as its exit code. As the rest of the code reports signal
exit codes as `128 + N`, we convert those negative exit codes into the
standard form used by the rest of the code.
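A minimal sketch of that conversion (not the exact snekbox code):
```py
def normalise_returncode(returncode: int) -> int:
    """Map Popen's negative signal exit codes (-N) onto the 128 + N form."""
    if returncode < 0:
        return 128 + abs(returncode)
    return returncode

assert normalise_returncode(-15) == 143  # killed by SIGTERM
assert normalise_returncode(0) == 0      # normal exits are left untouched
```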
This new behavior matches how the other limiters terminate the
subprocess, resulting in more consistency in the front-end for end
users as well.
I've increased the number of characters in each chunk we read from
stdout to 10_000. Since a character occupies 1 to 4 bytes in UTF-8,
this means we now read roughly 10 KB - 40 KB in each chunk.
Previously, the chunk of output that took us over the output limit was
discarded. As we've already consumed it and it won't be very large, we
might as well include it in the final output we return.
The function now returns a single, joined string instead of a list of
strings. That way, we don't have to join the list in two different
code branches.
Recently, we discovered that for some code inputs, snekbox would
trigger an OOM event at the container level, seemingly bypassing the
memory restrictions placed on code execution by NSJail.
After investigating the issue, we identified the culprit to be the
STDOUT pipe we use to get output back from NSJail: as output is piped
out of the jailed process, it accumulates outside of NSJail, in the
main container process. This meant that our initial attempts at
limiting the allowed file size within NSJail failed, as the OOM
happened outside of the jailed environment.
To mitigate the issue, I've written a loop that consumes the STDOUT
pipe in chunks of 100 characters. Once the size of the accrued output
reaches a certain limit (currently set to 1 MB), we send a SIGTERM
signal to NSJail to make it terminate. The output up to that point
will be relayed back to the caller.
A minimal code snippet to trigger the event and the mitigation:
```py
while True:
    print(" ")
```
I've included a test for this vulnerability in `tests/test_nsjail.py`.
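A rough sketch of the consuming loop described above; the function name and
constants are illustrative rather than the exact snekbox implementation:
```py
READ_CHUNK_SIZE = 100       # characters read from the pipe per iteration
OUTPUT_MAX = 1_000_000      # roughly 1 MB of accrued output

def consume_stdout(nsjail_proc) -> str:
    """Read NSJail's stdout in small chunks, terminating it once the limit is hit."""
    output = []
    size = 0
    # iter() with a sentinel stops once read() returns an empty string (EOF).
    for chunk in iter(lambda: nsjail_proc.stdout.read(READ_CHUNK_SIZE), ""):
        output.append(chunk)
        size += len(chunk)
        if size > OUTPUT_MAX:
            nsjail_proc.terminate()  # sends SIGTERM to NSJail
            break
    return "".join(output)
```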
This will set the maximum size of a created file to 10 MB, a fairly
generous amount.
The reason for this is that when a huge stdout is buffered, it is not
affected by the memory protections of nsjail and is sent to the host
container, which has the potential to cause an OOM.
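For illustration, the same cap expressed with Python's standard library;
the commit itself applies the limit through NSJail rather than in Python:
```py
import resource

TEN_MB = 10 * 1024 * 1024

# Cap the size of any file the current process may create; writes beyond
# the cap fail (the process receives SIGXFSZ, or the write raises OSError).
resource.setrlimit(resource.RLIMIT_FSIZE, (TEN_MB, TEN_MB))
```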
Simple stuff. Basically copy-paste from site.
python-discord/sebastiaan/backend/migrate-ci-to-github-actions
Migrate to GitHub Actions and GitHub Container Registry
|
| |
| |
| |
| |
| | |
I've fixed paths still pointing to the old Dockerfile location. I've
also reverted an error that somehow got committed to the Dockerfile.
I've removed the redundant intermediate image build commands from the
Pipfile. Since everything is now contained in one Dockerfile, we can
simply build the final image in one go.
One problem our master builds may have is that they retain more and
more layers of old builds, as there is no easy way of purging them
from the cache. Since such a master cache would have no benefit over
repository-based caching, I've removed persistent local caching for
non-PR builds.
I accidentally escaped a single quote in a run command; I've removed it
now. I also changed the job name to `lint-test-build-push` to better
reflect the contents of the job.
Now that we've migrated to GitHub Actions, we no longer need XML
reports of our unit tests, as we're no longer using the Azure test
result application.