From c4213744c18be23e3e4484f126ae0b2d0eba4437 Mon Sep 17 00:00:00 2001
From: Sebastiaan Zeeff <33516116+SebastiaanZ@users.noreply.github.com>
Date: Wed, 2 Oct 2019 16:59:03 +0200
Subject: Migrate pytest to unittest

After a discussion in the core developers channel, we have decided to migrate from `pytest` to `unittest` as the testing framework. This commit sets up the repository to use `unittest` and migrates the first couple of test files to the new framework.

What I have done to migrate to `unittest`:

- Removed all `pytest` test files, since they are incompatible.
- Removed `pytest`-related dependencies from the Pipfile.
- Added `coverage.py` to the Pipfile dev-packages and relocked.
- Added convenience scripts to the Pipfile for running the test suite.
- Adjusted `azure-pipelines.yml` to use `coverage.py` and `unittest`.
- Migrated four test files from `pytest` to `unittest` format.

In addition, I've added five helper Mock subclasses in `helpers.py` and created a `TestCase` subclass in `base.py` that provides an assertion for checking that no log records were logged within the context of its context manager.

Obviously, these new utility functions and classes are fully tested in their respective `test_` files.

Finally, I've started with an introductory guide for writing tests for our bot in `README.md`.
---
 tests/README.md | 200 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 200 insertions(+)
 create mode 100644 tests/README.md

(limited to 'tests/README.md')

diff --git a/tests/README.md b/tests/README.md
new file mode 100644
index 000000000..085ea39e0
--- /dev/null
+++ b/tests/README.md
@@ -0,0 +1,200 @@
+# Testing our Bot
+
+Our bot is one of the most important tools we have to help us run our community. To make sure that tool doesn't break, we decided to start testing it using unit tests. It is our goal to provide it with 100% test coverage in the future. This guide will help you get started with writing the tests needed to achieve that.
+
+_**Note:** This is a practical guide to getting started with writing tests for our bot, not a general introduction to writing unit tests in Python. If you're looking for a more general introduction, you may like Corey Schafer's [Python Tutorial: Unit Testing Your Code with the unittest Module](https://www.youtube.com/watch?v=6tNS--WetLI) or Ned Batchelder's PyCon talk [Getting Started Testing](https://www.youtube.com/watch?v=FxSsnHeWQBY).
+
+## Tools
+
+We are using the following modules and packages for our unit tests:
+
+- [unittest](https://docs.python.org/3/library/unittest.html) (standard library)
+- [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) (standard library)
+- [coverage.py](https://coverage.readthedocs.io/en/stable/)
+
+To ensure the results you obtain on your personal machine are comparable to those in the Azure pipeline, please make sure to run your tests with the virtual environment defined by our [Pipfile](/Pipfile). To run your tests with `pipenv`, we've provided two "scripts" shortcuts:
+
+1. `pipenv run test` will run `unittest` with `coverage.py`
+2. `pipenv run report` will generate a coverage report of the tests you've run with `pipenv run test`. If you append the `-m` flag to this command, the report will include the lines and branches not covered by tests in addition to the test coverage report.
+
+**Note:** If you want a coverage report, make sure to run the tests with `pipenv run test` *first*.
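+
+A typical local session might therefore look like this (an illustrative sketch only; the coverage report you get will obviously depend on the code you've changed):
+
+```
+$ pipenv run test        # run the test suite under coverage.py
+$ pipenv run report -m   # print the report, including missed lines and branches
+```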
+ +## Writing tests + +Since our bot is a collaborative, community project, it's important to take a couple of things into when writing tests. This ensures that our test suite is consistent and that everyone understands the tests someone else has written. + +### File and directory structure + +To organize our test suite, we have chosen to mirror the directory structure of [`bot`](/bot/) in the [`tests`](/tests/) subdirectory. This makes it easy to find the relevant tests by providing a natural grouping of files. More general files, such as [`helpers.py`](/tests/helpers.py) are located directly in the `tests` subdirectory. + +All files containing tests should have a filename starting with `test_` to make sure `unittest` will discover them. This prefix is typically followed by the name of the file the tests are written for. If needed, a test file can contain multiple test classes, both to provide structure and to be able to provide different fixtures/set-up methods for different groups of tests. + +### Writing individual and independent tests + +When writing individual test methods, it is important to make sure that each test tests its own *independent* unit. In general, this means that you don't write one large test method that tests every possible branch/part/condition relevant to your function, but rather write separate test methods for each one. The reason is that this will make sure that if one test fails, the rest of the tests will still run and we feedback on exactly which parts are and which are not working correctly. (However, the [DRY-principle](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) also applies to tests, see the `Using self.subTest for independent subtests` section.) + +#### Method names and docstrings + +It's very important that the output of failing tests is easy to understand. That's why it is important to give your test methods a descriptive name that tells us what the failing test was testing. In addition, since **single-line** docstrings will be printed in the output along with the method name, it's also important to add a good single-line docstring that summarizes the purpose of the test to your test method. + +#### Using self.subTest for independent subtests + +Since it's important to make sure all of our tests are independent from each other, you may be tempted to copy-paste your test methods to test a function against a number of different input and output parameters. Luckily, `unittest` provides a better a better of doing that: [`subtests`](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests). + +By using the `subTest` context manager, we can perform multiple independent subtests within one test method (e.g., by having a loop for the various inputs we want to test). Using this context manager ensures the test will be run independently and that, if one of them fails, the rest of the subtests are still executed. (Normally, a test function stops once the first exception, including `AssertionError`, is raised.) 
+
+An example (taken from [`test_converters.py`])(/tests/bot/test_converters.py):
+
+```py
+    def test_tag_content_converter_for_valid(self):
+        """TagContentConverter should return correct values for valid input."""
+        test_values = (
+            ('hello', 'hellpo'),
+            (' h ello ', 'h ello'),
+        )
+
+        for content, expected_conversion in test_values:
+            with self.subTest(content=content, expected_conversion=expected_conversion):
+                conversion = asyncio.run(TagContentConverter.convert(self.context, content))
+                self.assertEqual(conversion, expected_conversion)
+```
+
+It's important to note the keyword arguments we provide to the `self.subTest` context manager: These keyword arguments and their values will be printed in the output when one of the subtests fails, making sure we know *which* subTest failed:
+
+```
+....................................................................
+======================================================================
+FAIL: test_tag_content_converter_for_valid (tests.bot.test_converters.ConverterTests) (content='hello', expected_conversion='hellpo')
+TagContentConverter should return correct values for valid input.
+----------------------------------------------------------------------
+
+# Snipped to save vertical space
+```
+
+## Mocking
+
+Since we are testing independent "units" of code, we sometimes need to provide "fake" versions of objects generated by code external to the unit we are trying to test to make sure we're truly testing independently from other units of code. We call these "fake objects" mocks. We mainly use the [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html) module to create these mock objects, but we also have a couple special mock types defined in [`helpers.py`](/tests/helpers.py).
+
+An example of mocking is when we provide a command with a mocked version of `discord.ext.commands.Context` object instead of a real `Context` object and then assert if the `send` method of the mocked version was called with the right message content by the command function:
+
+```py
+import asyncio
+import unittest
+
+from bot.cogs import bot
+from tests.helpers import MockBot, MockContext
+
+
+class BotCogTests(unittest.TestCase):
+    def test_echo_command_correctly_echoes_arguments(self):
+        """Test if the `!echo ` command correctly echoes the content."""
+        mocked_bot = MockBot()
+        bot_cog = bot.Bot(mocked_bot)
+
+        mocked_context = MockContext()
+
+        text = "Hello! This should be echoed!"
+
+        asyncio.run(bot_cog.echo_command.callback(bot_cog, mocked_context, text=text))
+
+        mocked_context.send.assert_called_with(text)
+```
+
+### Mocking coroutines
+
+By default, `unittest.mock.Mock` and `unittest.mock.MagicMock` can't mock coroutines, since the `__call__` method they provide is synchronous. In anticipation of the `AsyncMock` that will be [introduced in Python 3.8](https://docs.python.org/3.9/whatsnew/3.8.html#unittest), we have added an `AsyncMock` helper to [`helpers.py`](/tests/helpers.py). Do note that this drop-in replacement only implements an asynchronous `__call__` method, not the additional assertions that will come with the new `AsyncMock` type in Python 3.8.
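+
+To make the above concrete, here is a minimal sketch of how the helper can be used (it assumes the regular `MagicMock`-style call recording that the drop-in replacement inherits):
+
+```py
+import asyncio
+
+from tests.helpers import AsyncMock
+
+send = AsyncMock()
+
+# The mock can be awaited like a coroutine function...
+asyncio.run(send("Hello!"))
+
+# ...while the regular Mock assertion methods keep working.
+send.assert_called_once_with("Hello!")
+```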
+
+### Special mocks for some `discord.py` types
+
+To quote Ned Batchelder, Mock objects are "automatic chameleons". This means that they will happily allow the access to any attribute or method and provide a mocked value in return. One downside to this is that if the code you are testing gets the name of the attribute wrong, your mock object will not complain and the test may still pass.
+
+In order to avoid that, we have defined a number of Mock types in [`helpers.py`](/tests/helpers.py) that follow the specifications of the actual Discord types they are mocking. This means that trying to access an attribute or method on a mocked object that does not exist by default on the equivalent `discord.py` object will result in an `AttributeError`. In addition, these mocks have some sensible defaults and **pass `isinstance` checks for the types they are mocking**.
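+
+For example, based on the behavior described above, a `MockRole` should act like this (a hypothetical snippet; see [`helpers.py`](/tests/helpers.py) for the actual definitions):
+
+```py
+import discord
+
+from tests.helpers import MockRole
+
+role = MockRole(name="admins")
+
+isinstance(role, discord.Role)  # True, as the mock passes isinstance checks
+role.name                       # "admins", set via the helper's keyword argument
+role.does_not_exist             # raises AttributeError: not part of discord.Role
+```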
+
+These special mocks are added when they are needed, so if you think it would be sensible to add another one, feel free to propose one in your PR.
+
+**Note:** These mock types only "know" the attributes that are set by default when these `discord.py` types are first initialized. If you need to work with dynamically set attributes that are added after initialization, you can still explicitly mock them:
+
+```py
+import unittest.mock
+from tests.helpers import MockGuild
+
+guild = MockGuild()
+guild.some_attribute = unittest.mock.MagicMock()
+```
+
+The attribute `some_attribute` will now be accessible as a `MagicMock` on the mocked object.
+
+---
+
+## Some considerations
+
+Finally, there are some considerations to make when writing tests, both for writing tests in general and for writing tests for our bot in particular.
+
+### Test coverage is a starting point
+
+Having test coverage is a good starting point for unit testing: If a part of your code was not covered by a test, we know that we have not tested it properly. The reverse is unfortunately not true: Even if the code we are testing has 100% branch coverage, it does not mean it's fully tested or guaranteed to work.
+
+One problem is that 100% branch coverage may be misleading if we haven't tested our code against all the realistic input it may get in production. For instance, take a look at the following `member_information` function and the test we've written for it:
+
+```py
+import datetime
+import unittest
+import unittest.mock
+
+
+def member_information(member):
+    joined = member.joined.stfptime("%d-%m-%Y") if member.joined else "unknown"
+    return f"{member.name} (joined: {joined})"
+
+
+class FunctionsTests(unittest.TestCase):
+    def test_member_information(self):
+        member = unittest.mock.Mock()
+        member.name = "lemon"
+        member.joined = None
+        self.assertEqual(member_information(member), "lemon (joined: unknown)")
+```
+
+If you were to run this test, not only would the function pass the test, but `coverage.py` would also tell us that the test provides 100% branch coverage for the function. Can you spot the bug the test suite did not catch?
+
+The problem here is that we have only tested our function with a member object that had `None` for the `member.joined` attribute. This means that `member.joined.stfptime("%d-%m-%Y")` was never executed during our test, leading to us missing the spelling mistake in `stfptime` (it should be `strftime`).
+
+Adding another test would not increase the test coverage we have, but it does ensure that we'll notice that this function can fail with realistic data:
+
+```py
+# (...)
+class FunctionsTests(unittest.TestCase):
+    # (...)
+    def test_member_information_with_join_datetime(self):
+        member = unittest.mock.Mock()
+        member.name = "lemon"
+        member.joined = datetime.datetime(year=2019, month=10, day=10)
+        self.assertEqual(member_information(member), "lemon (joined: 10-10-2019)")
+```
+
+Output:
+```
+.E
+======================================================================
+ERROR: test_member_information_with_join_datetime (tests.test_functions.FunctionsTests)
+----------------------------------------------------------------------
+Traceback (most recent call last):
+  File "/home/pydis/playground/tests/test_functions.py", line 23, in test_member_information_with_join_datetime
+    self.assertEqual(member_information(member), "lemon (joined: 10-10-2019)")
+  File "/home/pydis/playground/tests/test_functions.py", line 8, in member_information
+    joined = member.joined.stfptime("%d-%m-%Y") if member.joined else "unknown"
+AttributeError: 'datetime.datetime' object has no attribute 'stfptime'
+
+----------------------------------------------------------------------
+Ran 2 tests in 0.003s
+
+FAILED (errors=1)
+```
+
+What's more, even if the spelling mistake had not been there, the first test did not test if the `member_information` function formatted `member.joined` according to the output we actually want to see.
+
+### Unit Testing vs Integration Testing
+
+Another restriction of unit testing is that it tests in, well, units. Even if we can guarantee that the units work as they should independently, we have no guarantee that they will actually work well together. Even more, while the mocking described above gives us a lot of flexibility in factoring out external code, we work under the implicit assumption of understanding and utilizing those external objects correctly. So, in addition to testing the parts separately, we also need to test if the parts integrate correctly into a single application.
+
+We currently have no automated integration tests or functional tests, so **it's still very important to fire up the bot and test the code you've written manually** in addition to the unit tests you've written.
-- cgit v1.2.3

From 6d9cb1ad99d064d8810feb553c6b0463c74c92d4 Mon Sep 17 00:00:00 2001
From: Sebastiaan Zeeff <33516116+SebastiaanZ@users.noreply.github.com>
Date: Fri, 11 Oct 2019 19:54:47 +0200
Subject: Change pipeline testrunner to xmlrunner

I have changed the test runner from `unittest` to `xmlrunner` in the Azure pipeline to be able to publish our test results on Azure. This is the same runner `site` uses to generate XML reports.

In addition, I've cleaned up some small mistakes in docstrings and `README.md`.
--- .gitignore | 4 ++-- Pipfile | 1 + Pipfile.lock | 10 +++++++++- azure-pipelines.yml | 13 ++++++++++--- tests/README.md | 6 +++--- tests/helpers.py | 14 ++++++-------- 6 files changed, 31 insertions(+), 17 deletions(-) (limited to 'tests/README.md') diff --git a/.gitignore b/.gitignore index 261fa179f..210847759 100644 --- a/.gitignore +++ b/.gitignore @@ -114,5 +114,5 @@ log.* # Custom user configuration config.yml -# JUnit XML reports from pytest -junit.xml +# xmlrunner unittest XML reports +TEST-**.xml diff --git a/Pipfile b/Pipfile index 0c73e4ca2..48d839fc3 100644 --- a/Pipfile +++ b/Pipfile @@ -32,6 +32,7 @@ flake8-tidy-imports = "~=2.0" flake8-todo = "~=0.7" pre-commit = "~=1.18" safety = "~=1.8" +unittest-xml-reporting = "~=2.5" dodgy = "~=0.1" [requires] diff --git a/Pipfile.lock b/Pipfile.lock index 366d1e525..95955ff89 100644 --- a/Pipfile.lock +++ b/Pipfile.lock @@ -1,7 +1,7 @@ { "_meta": { "hash": { - "sha256": "f5f32a03b561f1805f52447ca4e6582dd459581c5d581638925b7fabb09869f8" + "sha256": "c27d699b4aeeed204dee41f924f682ae2a670add8549a8826e58776594370582" }, "pipfile-spec": 6, "requires": { @@ -880,6 +880,14 @@ ], "version": "==1.4.0" }, + "unittest-xml-reporting": { + "hashes": [ + "sha256:140982e4b58e4052d9ecb775525b246a96bfc1fc26097806e05ea06e9166dd6c", + "sha256:d1fbc7a1b6c6680ccfe75b5e9701e5431c646970de049e687b4bb35ba4325d72" + ], + "index": "pypi", + "version": "==2.5.1" + }, "urllib3": { "hashes": [ "sha256:2393a695cd12afedd0dcb26fe5d50d0cf248e5a66f75dbd89a3d4eb333a61af4", diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 15470f9be..da3b06201 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -30,11 +30,11 @@ jobs: - script: python -m flake8 displayName: 'Run linter' - - script: BOT_API_KEY=foo BOT_TOKEN=bar WOLFRAM_API_KEY=baz coverage run -m unittest + - script: BOT_API_KEY=foo BOT_TOKEN=bar WOLFRAM_API_KEY=baz coverage run -m xmlrunner displayName: Run tests - - script: coverage xml -o coverage.xml - displayName: Create test coverage report + - script: coverage report -m && coverage xml -o coverage.xml + displayName: Generate test coverage report - task: PublishCodeCoverageResults@1 displayName: 'Publish Coverage Results' @@ -43,6 +43,13 @@ jobs: codeCoverageTool: Cobertura summaryFileLocation: coverage.xml + - task: PublishTestResults@2 + condition: succeededOrFailed() + displayName: 'Publish Test Results' + inputs: + testResultsFiles: '**/TEST-*.xml' + testRunTitle: 'Bot Test Results' + - job: build displayName: 'Build & Push Container' dependsOn: 'test' diff --git a/tests/README.md b/tests/README.md index 085ea39e0..471a00923 100644 --- a/tests/README.md +++ b/tests/README.md @@ -1,8 +1,8 @@ # Testing our Bot -Our bot is one of the most important tools we have to help us run our community. To make sure that tool doesn't break, we decided to start testing it using unit tests. It is our goal to provide it with 100% test coverage in the future. This guide will help you get started with writing the tests needed to achieve that. +Our bot is one of the most important tools we have to help us run our community. To make sure that tool doesn't break, we've decided to start writing unit tests for it. It is our goal to provide it with 100% test coverage in the future. This guide will help you get started with writing the tests needed to achieve that. -_**Note:** This is a practical guide to getting started with writing tests for our bot, not a general introduction to writing unit tests in Python. 
If you're looking for a more general introduction, you may like Corey Schafer's [Python Tutorial: Unit Testing Your Code with the unittest Module](https://www.youtube.com/watch?v=6tNS--WetLI) or Ned Batchelder's PyCon talk [Getting Started Testing](https://www.youtube.com/watch?v=FxSsnHeWQBY). +_**Note:** This is a practical guide to getting started with writing tests for our bot, not a general introduction to writing unit tests in Python. If you're looking for a more general introduction, you may like Corey Schafer's [Python Tutorial: Unit Testing Your Code with the unittest Module](https://www.youtube.com/watch?v=6tNS--WetLI) or Ned Batchelder's PyCon talk [Getting Started Testing](https://www.youtube.com/watch?v=FxSsnHeWQBY)._ ## Tools @@ -43,7 +43,7 @@ Since it's important to make sure all of our tests are independent from each oth By using the `subTest` context manager, we can perform multiple independent subtests within one test method (e.g., by having a loop for the various inputs we want to test). Using this context manager ensures the test will be run independently and that, if one of them fails, the rest of the subtests are still executed. (Normally, a test function stops once the first exception, including `AssertionError`, is raised.) -An example (taken from [`test_converters.py`])(/tests/bot/test_converters.py): +An example (taken from [`test_converters.py`](/tests/bot/test_converters.py)): ```py def test_tag_content_converter_for_valid(self): diff --git a/tests/helpers.py b/tests/helpers.py index 64fc04afe..18c9866bf 100644 --- a/tests/helpers.py +++ b/tests/helpers.py @@ -50,7 +50,7 @@ class HashableMixin(discord.mixins.EqualityComparable): class ColourMixin: - """A mixin of Mocks that provides the aliasing of color->colour like discord.py does.""" + """A mixin for Mocks that provides the aliasing of color->colour like discord.py does.""" @property def color(self) -> discord.Colour: @@ -159,14 +159,9 @@ class MockRole(AttributeMock, unittest.mock.Mock, ColourMixin, HashableMixin): attribute_mocktype = unittest.mock.MagicMock - def __init__( - self, - name: str = "role", - role_id: int = 1, - position: int = 1, - **kwargs, - ) -> None: + def __init__(self, name: str = "role", role_id: int = 1, position: int = 1, **kwargs) -> None: super().__init__(spec=role_instance, **kwargs) + self.name = name self.id = role_id self.position = position @@ -201,11 +196,14 @@ class MockMember(AttributeMock, unittest.mock.Mock, ColourMixin, HashableMixin): **kwargs, ) -> None: super().__init__(spec=member_instance, **kwargs) + self.name = name self.id = user_id + self.roles = [MockRole("@everyone", 1)] if roles: self.roles.extend(roles) + self.mention = f"@{self.name}" self.send = AsyncMock() -- cgit v1.2.3 From d6bedd16eb8fc81b2f0a17992ba000ed27fd7d72 Mon Sep 17 00:00:00 2001 From: Sebastiaan Zeeff <33516116+SebastiaanZ@users.noreply.github.com> Date: Fri, 11 Oct 2019 23:05:01 +0200 Subject: Make textual changes to testing guide I've made some textual changes to the testing guidelines defined in README.md. --- tests/README.md | 54 ++++++++++++++++++++++++++++++++++-------------------- 1 file changed, 34 insertions(+), 20 deletions(-) (limited to 'tests/README.md') diff --git a/tests/README.md b/tests/README.md index 471a00923..4ed32c29b 100644 --- a/tests/README.md +++ b/tests/README.md @@ -1,6 +1,6 @@ # Testing our Bot -Our bot is one of the most important tools we have to help us run our community. To make sure that tool doesn't break, we've decided to start writing unit tests for it. 
It is our goal to provide it with 100% test coverage in the future. This guide will help you get started with writing the tests needed to achieve that. +Our bot is one of the most important tools we have for running our community. As we don't want that tool tool break, we decided that we wanted to write unit tests for it. We hope that in the future, we'll have a 100% test coverage for the bot. This guide will help you get started with writing the tests needed to achieve that. _**Note:** This is a practical guide to getting started with writing tests for our bot, not a general introduction to writing unit tests in Python. If you're looking for a more general introduction, you may like Corey Schafer's [Python Tutorial: Unit Testing Your Code with the unittest Module](https://www.youtube.com/watch?v=6tNS--WetLI) or Ned Batchelder's PyCon talk [Getting Started Testing](https://www.youtube.com/watch?v=FxSsnHeWQBY)._ @@ -12,38 +12,45 @@ We are using the following modules and packages for our unit tests: - [unittest.mock](https://docs.python.org/3/library/unittest.mock.html) (standard library) - [coverage.py](https://coverage.readthedocs.io/en/stable/) -To ensure the results you obtain on your personal machine are comparable to those in the Azure pipeline, please make sure to run your tests with the virtual environment defined by our [Pipfile](/Pipfile). To run your tests with `pipenv`, we've provided two "scripts" shortcuts: +To ensure the results you obtain on your personal machine are comparable to those generated in the Azure pipeline, please make sure to run your tests with the virtual environment defined by our [Pipfile](/Pipfile). To run your tests with `pipenv`, we've provided two "scripts" shortcuts: -1. `pipenv run test` will run `unittest` with `coverage.py` -2. `pipenv run report` will generate a coverage report of the tests you've run with `pipenv run test`. If you append the `-m` flag to this command, the report will include the lines and branches not covered by tests in addition to the test coverage report. +- `pipenv run test` will run `unittest` with `coverage.py` +- `pipenv run report` will generate a coverage report of the tests you've run with `pipenv run test`. If you append the `-m` flag to this command, the report will include the lines and branches not covered by tests in addition to the test coverage report. -**Note:** If you want a coverage report, make sure to run the tests with `pipenv run test` *first*. +If you want a coverage report, make sure to run the tests with `pipenv run test` *first*. ## Writing tests -Since our bot is a collaborative, community project, it's important to take a couple of things into when writing tests. This ensures that our test suite is consistent and that everyone understands the tests someone else has written. +Since consistency is an important consideration for collaborative projects, we have written some guidelines on writing tests for the bot. In addition to these guidelines, it's a good idea to look at the existing code base for examples (e.g., [`test_converters.py`](/tests/bot/test_converters.py)). ### File and directory structure -To organize our test suite, we have chosen to mirror the directory structure of [`bot`](/bot/) in the [`tests`](/tests/) subdirectory. This makes it easy to find the relevant tests by providing a natural grouping of files. More general files, such as [`helpers.py`](/tests/helpers.py) are located directly in the `tests` subdirectory. 
+To organize our test suite, we have chosen to mirror the directory structure of [`bot`](/bot/) in the [`tests`](/tests/) subdirectory. This makes it easy to find the relevant tests by providing a natural grouping of files. More general testing files, such as [`helpers.py`](/tests/helpers.py), are located directly in the `tests` subdirectory.
 
 All files containing tests should have a filename starting with `test_` to make sure `unittest` will discover them. This prefix is typically followed by the name of the file the tests are written for. If needed, a test file can contain multiple test classes, both to provide structure and to be able to provide different fixtures/set-up methods for different groups of tests.
 
-### Writing individual and independent tests
+### Writing independent tests
 
-When writing individual test methods, it is important to make sure that each test tests its own *independent* unit. In general, this means that you don't write one large test method that tests every possible branch/part/condition relevant to your function, but rather write separate test methods for each one. The reason is that this will make sure that if one test fails, the rest of the tests will still run and we feedback on exactly which parts are and which are not working correctly. (However, the [DRY-principle](https://en.wikipedia.org/wiki/Don%27t_repeat_yourself) also applies to tests, see the `Using self.subTest for independent subtests` section.)
+When writing unit tests, it's really important to make sure each test that you write runs independently from all of the other tests. This both means that the code you write for one test shouldn't influence the result of another test and that if one test fails, the other tests should still run.
+
+The basis for this is that when you write a test method, it should really only test a single aspect of the thing you're testing. This often means that you do not write one large test that tests "everything" that can be tested for a function, but rather that you write multiple smaller tests that each test a specific branch/path/condition of the function under scrutiny.
+
+To make sure you're not repeating the same set-up steps in all these smaller tests, `unittest` provides fixtures that are executed before and after each test is run. In addition to test fixtures, it also provides special set-up and clean-up methods that are run before the first test in a test class or after the last test of that class has been run. For more information, see the documentation for [`unittest.TestCase`](https://docs.python.org/3/library/unittest.html#unittest.TestCase).
 
 #### Method names and docstrings
 
-It's very important that the output of failing tests is easy to understand. That's why it is important to give your test methods a descriptive name that tells us what the failing test was testing. In addition, since **single-line** docstrings will be printed in the output along with the method name, it's also important to add a good single-line docstring that summarizes the purpose of the test to your test method.
+As you can probably imagine, writing smaller, independent tests also means that the number of tests will be large. This means that it is incredibly import to use good method names to identify what each test is doing. A general guideline is that the name should capture the goal of your test: What is this test method trying to assert?
+
+In addition to good method names, it's also really important to write a good *single-line* docstring. The `unittest` module will print such a single-line docstring along with the method name in the output it gives when a test fails. This means that a good docstring that really captures the purpose of the test makes it much easier to quickly make sense of the output.
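+
+As a quick sketch of both conventions (a hypothetical test case; it borrows the `MockContext` helper described in the Mocking section below):
+
+```py
+class TagContentConverterTests(unittest.TestCase):
+    def setUp(self):
+        """Create the fixtures every test method in this class needs."""
+        self.context = MockContext()
+
+    def test_tag_content_converter_rejects_empty_content(self):
+        """TagContentConverter should raise BadArgument for empty tag content."""
+        ...
+```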
 
 #### Using self.subTest for independent subtests
 
-Since it's important to make sure all of our tests are independent from each other, you may be tempted to copy-paste your test methods to test a function against a number of different input and output parameters. Luckily, `unittest` provides a better a better of doing that: [`subtests`](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests).
+Another thing that you will probably encounter is that you want to test a function against a list of input and output values. Given the section on writing independent tests, you may now be tempted to copy-paste the same test method over and over again, once for each unique value that you want to test. However, that would result in a lot of duplicate code that is hard to maintain.
+
-By using the `subTest` context manager, we can perform multiple independent subtests within one test method (e.g., by having a loop for the various inputs we want to test). Using this context manager ensures the test will be run independently and that, if one of them fails, the rest of the subtests are still executed. (Normally, a test function stops once the first exception, including `AssertionError`, is raised.)
+Luckily, `unittest` provides a good alternative to that: the [`subTest`](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests) context manager. It is often used in conjunction with a `for`-loop iterating of a collection of values you want to test your function against and it provides two important features. First, it will make sure that if an assertion statements fails on of the iterations, the other iterations are still run. The other important feature it provides is that it will distinguish your iterations from each other in the output.
 
-An example (taken from [`test_converters.py`](/tests/bot/test_converters.py)):
+This is an example of `TestCase.subTest` in action (taken from [`test_converters.py`](/tests/bot/test_converters.py)):
 
 ```py
     def test_tag_content_converter_for_valid(self):
@@ -68,14 +75,19 @@ FAIL: test_tag_content_converter_for_valid (tests.bot.test_converters.ConverterT
 TagContentConverter should return correct values for valid input.
 ----------------------------------------------------------------------
 
-# Snipped to save vertical space
+# ...
 ```
 
 ## Mocking
 
-Since we are testing independent "units" of code, we sometimes need to provide "fake" versions of objects generated by code external to the unit we are trying to test to make sure we're truly testing independently from other units of code. We call these "fake objects" mocks. We mainly use the [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html) module to create these mock objects, but we also have a couple special mock types defined in [`helpers.py`](/tests/helpers.py).
+As we are trying to test our "units" of code independently, we want to make sure that we do not rely objects and data generated by "external" code. If we we did, then we wouldn't know if the failure we're observing was caused by the code we are actually trying to test or an external piece of code.
-An example of mocking is when we provide a command with a mocked version of `discord.ext.commands.Context` object instead of a real `Context` object and then assert if the `send` method of the mocked version was called with the right message content by the command function: + +However, the features that we are trying to test often depend on those objects generated by other pieces of code. It would be difficult to test a bot command, without having access to a `Context` instance. Fortunately, there's a solution for that: we use fake objects that act like the true object. We call these fake objects "mocks". + +To create these mock object, we mainly use the [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html) module. In addition, we also defined a couple of specialized mock objects that mock specific `discord.py` types (see the section on the below.). + +An example of mocking is when we provide a command with a mocked version of `discord.ext.commands.Context` object instead of a real `Context` object. This makes sure we can then check (_assert_) if the `send` method of the mocked Context object was called with the correct message content (without having to send a real message to the Discord API!): ```py import asyncio @@ -102,13 +114,13 @@ class BotCogTests(unittest.TestCase): ### Mocking coroutines -By default, `unittest.mock.Mock` and `unittest.mock.MagicMock` can't mock coroutines, since the `__call__` method they provide is synchronous. In anticipation of the `AsyncMock` that will be [introduced in Python 3.8](https://docs.python.org/3.9/whatsnew/3.8.html#unittest), we have added an `AsyncMock` helper to [`helpers.py`](/tests/helpers.py). Do note that this drop-in replacement only implements an asynchronous `__call__` method, not the additional assertions that will come with the new `AsyncMock` type in Python 3.8. +By default, the `unittest.mock.Mock` and `unittest.mock.MagicMock` classes cannot mock coroutines, since the `__call__` method they provide is synchronous. In anticipation of the `AsyncMock` that will be [introduced in Python 3.8](https://docs.python.org/3.9/whatsnew/3.8.html#unittest), we have added an `AsyncMock` helper to [`helpers.py`](/tests/helpers.py). Do note that this drop-in replacement only implements an asynchronous `__call__` method, not the additional assertions that will come with the new `AsyncMock` type in Python 3.8. ### Special mocks for some `discord.py` types To quote Ned Batchelder, Mock objects are "automatic chameleons". This means that they will happily allow the access to any attribute or method and provide a mocked value in return. One downside to this is that if the code you are testing gets the name of the attribute wrong, your mock object will not complain and the test may still pass. -In order to avoid that, we have defined a number of Mock types in [`helpers.py`](/tests/helpers.py) that follow the specifications of the actual Discord types they are mocking. This means that trying to access an attribute or method on a mocked object that does not exist by default on the equivalent `discord.py` object will result in an `AttributeError`. In addition, these mocks have some sensible defaults and **pass `isinstance` checks for the types they are mocking**. +In order to avoid that, we have defined a number of Mock types in [`helpers.py`](/tests/helpers.py) that follow the specifications of the actual Discord types they are mocking. 
This means that trying to access an attribute or method on a mocked object that does not exist on the equivalent `discord.py` object will result in an `AttributeError`. In addition, these mocks have some sensible defaults and **pass `isinstance` checks for the types they are mocking**.
 
 These special mocks are added when they are needed, so if you think it would be sensible to add another one, feel free to propose one in your PR.
 
@@ -193,8 +205,10 @@ FAILED (errors=1)
 
 What's more, even if the spelling mistake had not been there, the first test did not test if the `member_information` function formatted `member.joined` according to the output we actually want to see.
 
+All in all, it's not only important to consider if all statements or branches were touched at least once with a test, but also if they are extensively tested in all situations that may happen in production.
+
 ### Unit Testing vs Integration Testing
 
-Another restriction of unit testing is that it tests in, well, units. Even if we can guarantee that the units work as they should independently, we have no guarantee that they will actually work well together. Even more, while the mocking described above gives us a lot of flexibility in factoring out external code, we work under the implicit assumption of understanding and utilizing those external objects correctly. So, in addition to testing the parts separately, we also need to test if the parts integrate correctly into a single application.
+Another restriction of unit testing is that it tests, well, in units. Even if we can guarantee that the units work as they should independently, we have no guarantee that they will actually work well together. Even more, while the mocking described above gives us a lot of flexibility in factoring out external code, we work under the implicit assumption that we fully understand those external parts and utilize them correctly. What if our mocked `Context` object works with a `send` method, but `discord.py` has changed it to a `send_message` method in a recent update? It could mean our tests are passing, but the code it's testing still doesn't work in production.
 
-We currently have no automated integration tests or functional tests, so **it's still very important to fire up the bot and test the code you've written manually** in addition to the unit tests you've written.
+The answer to this is that we also need to make sure that the individual parts come together into a working application. In addition, we will also need to make sure that the application communicates correctly with external applications. Since we currently have no automated integration tests or functional tests, that means **it's still very important to fire up the bot and test the code you've written manually** in addition to the unit tests you've written.
-- cgit v1.2.3

From 2d938c610f42b62de78a26a186e6ffb5ff6e624a Mon Sep 17 00:00:00 2001
From: Sebastiaan Zeeff <33516116+SebastiaanZ@users.noreply.github.com>
Date: Fri, 11 Oct 2019 23:16:28 +0200
Subject: Update README.md

---
 tests/README.md | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

(limited to 'tests/README.md')

diff --git a/tests/README.md b/tests/README.md
index 4ed32c29b..6ab9bc93e 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -1,6 +1,6 @@
 # Testing our Bot
 
-Our bot is one of the most important tools we have for running our community. As we don't want that tool tool break, we decided that we wanted to write unit tests for it. We hope that in the future, we'll have a 100% test coverage for the bot. This guide will help you get started with writing the tests needed to achieve that.
+Our bot is one of the most important tools we have for running our community. As we don't want that tool break, we decided that we wanted to write unit tests for it. We hope that in the future, we'll have a 100% test coverage for the bot. This guide will help you get started with writing the tests needed to achieve that.
 
 _**Note:** This is a practical guide to getting started with writing tests for our bot, not a general introduction to writing unit tests in Python. If you're looking for a more general introduction, you may like Corey Schafer's [Python Tutorial: Unit Testing Your Code with the unittest Module](https://www.youtube.com/watch?v=6tNS--WetLI) or Ned Batchelder's PyCon talk [Getting Started Testing](https://www.youtube.com/watch?v=FxSsnHeWQBY)._
 
@@ -31,7 +31,7 @@ All files containing tests should have a filename starting with `test_` to make
 
 ### Writing independent tests
 
-When writing unit tests, it's really important to make sure each test that you write runs independently from all of the other tests. This both means that the code you write for one test shouldn't influence the result of another test and that if one test fails, the other tests should still run.
+When writing unit tests, it's really important to make sure that each test that you write runs independently from all of the other tests. This both means that the code you write for one test shouldn't influence the result of another test and that if one test fails, the other tests should still run.
 
 The basis for this is that when you write a test method, it should really only test a single aspect of the thing you're testing. This often means that you do not write one large test that tests "everything" that can be tested for a function, but rather that you write multiple smaller tests that each test a specific branch/path/condition of the function under scrutiny.
 
@@ -39,7 +39,7 @@ To make sure you're not repeating the same set-up steps in all these smaller tes
 
 #### Method names and docstrings
 
-As you can probably imagine, writing smaller, independent tests also means that the number of tests will be large. This means that it is incredibly import to use good method names to identify what each test is doing. A general guideline is that the name should capture the goal of your test: What is this test method trying to assert?
+As you can probably imagine, writing smaller, independent tests also results in a large number of tests. To make sure that it's easy to see which test does what, it is incredibly important to use good method names to identify what each test is doing. A general guideline is that the name should capture the goal of your test: What is this test method trying to assert?
 
 In addition to good method names, it's also really important to write a good *single-line* docstring. The `unittest` module will print such a single-line docstring along with the method name in the output it gives when a test fails. This means that a good docstring that really captures the purpose of the test makes it much easier to quickly make sense of the output.
 
@@ -47,8 +47,7 @@ In addition to good method names, it's also really important to write a good *si
 
 #### Using self.subTest for independent subtests
 
 Another thing that you will probably encounter is that you want to test a function against a list of input and output values. Given the section on writing independent tests, you may now be tempted to copy-paste the same test method over and over again, once for each unique value that you want to test. However, that would result in a lot of duplicate code that is hard to maintain.
 
-
-Luckily, `unittest` provides a good alternative to that: the [`subTest`](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests) context manager. It is often used in conjunction with a `for`-loop iterating of a collection of values you want to test your function against and it provides two important features. First, it will make sure that if an assertion statements fails on of the iterations, the other iterations are still run. The other important feature it provides is that it will distinguish your iterations from each other in the output.
+Luckily, `unittest` provides a good alternative to that: the [`subTest`](https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests) context manager. This method is often used in conjunction with a `for`-loop iterating over a collection of values that we want to test a function against and it provides two important features. First, it will make sure that if an assertion statement fails on one of the iterations, the other iterations are still run. The other important feature it provides is that it will distinguish the iterations from each other in the output.
 
 This is an example of `TestCase.subTest` in action (taken from [`test_converters.py`](/tests/bot/test_converters.py)):
 
@@ -80,12 +79,12 @@ TagContentConverter should return correct values for valid input.
 
 ## Mocking
 
-As we are trying to test our "units" of code independently, we want to make sure that we do not rely objects and data generated by "external" code. If we we did, then we wouldn't know if the failure we're observing was caused by the code we are actually trying to test or an external piece of code.
+As we are trying to test our "units" of code independently, we want to make sure that we do not rely on objects and data generated by "external" code. If we did, then we wouldn't know if the failure we're observing was caused by the code we are actually trying to test or something external to it.
 
-However, the features that we are trying to test often depend on those objects generated by other pieces of code. It would be difficult to test a bot command, without having access to a `Context` instance. Fortunately, there's a solution for that: we use fake objects that act like the true object. We call these fake objects "mocks".
+However, the features that we are trying to test often depend on those objects generated by external pieces of code. It would be difficult to test a bot command without having access to a `Context` instance. Fortunately, there's a solution for that: we use fake objects that act like the true object. We call these fake objects "mocks".
 
-To create these mock object, we mainly use the [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html) module. In addition, we also defined a couple of specialized mock objects that mock specific `discord.py` types (see the section on the below.).
+To create these mock objects, we mainly use the [`unittest.mock`](https://docs.python.org/3/library/unittest.mock.html) module. In addition, we have also defined a couple of specialized mock objects that mock specific `discord.py` types (see the section below.).
An example of mocking is when we provide a command with a mocked version of `discord.ext.commands.Context` object instead of a real `Context` object. This makes sure we can then check (_assert_) if the `send` method of the mocked Context object was called with the correct message content (without having to send a real message to the Discord API!): -- cgit v1.2.3