| Commit message | Author | Age | Lines |

See the docstring. This does not aspire to be powerful enough to be
included in `tests.helpers`, and is only intended for local purposes.

#364 offensive msg autodeletion

Create cooldown.md

Co-authored-by: Mark <[email protected]>

Co-authored-by: Mark <[email protected]>

Co-authored-by: Joseph Banks <[email protected]>

Apply suggestions from code review
Co-authored-by: Joseph Banks <[email protected]>

Check infraction reason isn't None before shortening it

The reason None check should be nested to avoid affecting the else/elif
statements that follow.
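To make the point concrete, here is a minimal, hypothetical sketch of the nesting described above; the function name, the `textwrap.shorten` call, and the branch conditions are assumptions for illustration, not the bot's actual code.

```python
import textwrap
from typing import Optional

def describe_infraction(kind: str, reason: Optional[str]) -> str:
    # Hypothetical illustration of the nested None check.
    if kind == "ban":
        # Nested: only shorten when a reason actually exists. Hoisting
        # "reason is None" into its own top-level branch would change which
        # of the elif/else arms below gets evaluated.
        if reason is not None:
            reason = textwrap.shorten(reason, width=64, placeholder="...")
        return f"ban: {reason}"
    elif reason is None:
        return f"{kind}: no reason given"
    else:
        return f"{kind}: {reason}"
```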
Add persistence to the help channel system

Make token detection more robust and completely rewrite its tests

bug/filters/928/non-ascii-token

It's redundant; there's no benefit here in abstracting two lines of code
into a function.

This covers the case when a token is matched, but its user ID and
timestamp turn out to be invalid.

It now supports the changes that switched to `finditer`, added match
groups, and added the `Token` NamedTuple. It also accounts for the
removal of the `is_maybe_token` function.
For the sake of simplicity, assertions on the calls to `is_valid_user_id`
and `is_valid_timestamp` were not made.

The function was removed due to redundancy. Therefore, its tests are
obsolete.

The tests for valid inputs and invalid inputs were split to make them
more readable.

`find_token_in_message` now uses the latter, so the tests should adjust
accordingly.

It has to account for the addition of groups. It's easiest to compare
the entire string, so `finditer` is used to return `re.Match` objects;
the tuples returned by `findall` would be cumbersome. Also threw in a
change to use `assertCountEqual` because the order doesn't really matter.
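As a small self-contained illustration of the behaviour being relied on here (the pattern, strings, and test name are made up, not the bot's real token regex or tests):

```python
import re
import unittest

# Illustrative pattern with three groups, similar in spirit to a token regex.
TOKEN_RE = re.compile(r"([\w\-]+)\.([\w\-]+)\.([\w\-]+)")

class ExtractionExample(unittest.TestCase):
    def test_finditer_keeps_the_full_match(self):
        text = "aaa.bbb.ccc and xxx.yyy.zzz"
        # Once the pattern has groups, findall returns tuples of groups...
        self.assertEqual(TOKEN_RE.findall(text),
                         [("aaa", "bbb", "ccc"), ("xxx", "yyy", "zzz")])
        # ...whereas finditer yields re.Match objects, so the whole matched
        # string is still available via match[0].
        tokens = [match[0] for match in TOKEN_RE.finditer(text)]
        # assertCountEqual: same elements, in any order.
        self.assertCountEqual(tokens, ["xxx.yyy.zzz", "aaa.bbb.ccc"])

if __name__ == "__main__":
    unittest.main()
```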
It was broken due to the addition of groups. Rather than returning the
full match, `findall` returns the groups if any exist. The test was
comparing a tuple of groups to the token string, which was of course
failing. Now `fullmatch` is used because it's simpler: just check for
`None` and don't worry about iterating over matches to search.
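A short sketch of the difference; the pattern here is illustrative, not the actual token regex:

```python
import re

TOKEN_RE = re.compile(r"([\w\-]+)\.([\w\-]+)\.([\w\-]+)")  # illustrative only

# With groups in the pattern, findall no longer returns the full match:
assert TOKEN_RE.findall("aaa.bbb.ccc") == [("aaa", "bbb", "ccc")]

# fullmatch sidesteps that: it returns a match for the whole string or None,
# so a test only has to assert on one of those two outcomes.
assert TOKEN_RE.fullmatch("aaa.bbb.ccc") is not None
assert TOKEN_RE.fullmatch("not a token") is None
```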
The timestamp in the token is in seconds and is being compared against
the epoch. To make life easier, they should use the same unit.
Previously, the epoch was in milliseconds.
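A hedged sketch of the unit mismatch: the constant shown is the familiar 2015-01-01 epoch used purely as an example, and the shape of the check is an assumption rather than the bot's actual validation.

```python
import time

EPOCH_MS = 1_420_070_400_000   # an epoch expressed in milliseconds
EPOCH_S = EPOCH_MS // 1000     # the same epoch expressed in seconds

def timestamp_looks_plausible(token_timestamp_s: int) -> bool:
    # The token's timestamp is in seconds, so it must be compared against an
    # epoch in seconds; comparing it against EPOCH_MS would reject everything.
    return EPOCH_S <= token_timestamp_s <= int(time.time())
```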
It makes more sense to use the lazy function when the loop is already
short-circuiting on the first valid token it finds.
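Presumably the lazy function here is something like `re.finditer`; a sketch of why laziness pairs well with an early return (the pattern, names, and validation are made up):

```python
import re
from typing import Optional

TOKEN_RE = re.compile(r"([\w\-]+)\.([\w\-]+)\.([\w\-]+)")  # illustrative only

def part_looks_valid(part: str) -> bool:
    # Stand-in for the real per-part validation.
    return len(part) >= 3

def first_valid_token(content: str) -> Optional[str]:
    # finditer is lazy: candidates are produced one at a time, so once the
    # loop returns on the first valid token, later candidates are never
    # scanned at all. An eager findall would build the whole list up front.
    for match in TOKEN_RE.finditer(content):
        user_id, timestamp, _secret = match.groups()
        if part_looks_valid(user_id) and part_looks_valid(timestamp):
            return match[0]
    return None
```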
It felt redundant to be splitting the token in two different functions
when the regex could take care of this from the outset.
A NamedTuple was created to house the token. This is nicer than passing
an `re.Match` object, because it's clearer which attributes are available.
Even if the regex used named groups, it wouldn't be as obvious which
group names exist.
Without the split, `is_maybe_token` is reduced to a redundant function.
Therefore, it's been removed.
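A minimal sketch of the shape being described: a regex whose groups do the splitting and a NamedTuple that names the parts. The pattern and the field names are assumptions for illustration.

```python
import re
from typing import NamedTuple

# Illustrative pattern: the three groups split the token at match time.
TOKEN_RE = re.compile(r"([\w\-]+)\.([\w\-]+)\.([\w\-]+)")

class Token(NamedTuple):
    user_id: str
    timestamp: str
    hmac: str

def to_token(match: re.Match) -> Token:
    # Groups map one-to-one onto named fields, so callers see exactly which
    # attributes exist without inspecting an re.Match or splitting separately.
    return Token(*match.groups())

# Example: to_token(TOKEN_RE.search("abc.def.ghi")) == Token("abc", "def", "ghi")
```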
They need to be escaped when they're in a character set. By default,
they are interpreted as part of the character range syntax.
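The commit doesn't show the characters in question, but the pitfall is the usual one with `-` inside `[...]`; a generic illustration:

```python
import re

# An unescaped "-" inside a character set is range syntax: [A-z] is the range
# from "A" to "z", not the three literal characters "A", "-" and "z".
as_range = re.compile(r"[A-z]+")
assert as_range.fullmatch("q")               # "q" falls inside the A-z range

# Escaping the hyphen (or placing it first/last) makes it a literal.
as_literal = re.compile(r"[A\-z]+")
assert as_literal.fullmatch("q") is None     # only "A", "-" and "z" now match
assert as_literal.fullmatch("A-z")
```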
Making the regex more accurate reduces false positives at an earlier
stage. There's no benefit to matching non-base64 characters, as they
would just be weeded out as invalid at a later stage anyway, when the
token is decoded.
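For illustration only (neither pattern is the bot's actual regex): tightening each part to the URL-safe base64 alphabet rejects obvious non-tokens at the regex stage rather than during decoding.

```python
import re

LOOSE = re.compile(r"\S+\.\S+\.\S+")
STRICT = re.compile(r"[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

candidate = "spam!!.eggs??.ham%%"                # clearly not base64
assert LOOSE.fullmatch(candidate) is not None    # loose pattern lets it through
assert STRICT.fullmatch(candidate) is None       # strict pattern rejects it early
```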