Code coverage is a simple tool for checking which lines of your application code are run by your test suite. 100% coverage is a laudable goal, as it means every line is run at least once.
Coverage.py is the Python tool for measuring code coverage. Ned Batchelder has maintained it for an incredible 14 years!
I like adding Coverage.py to my Django projects, like fellow Django Software Foundation member Sasha Romijn.
Let’s look at how we can integrate it with a Django project, and how to get that golden 100% (even if it means ignoring some lines).
Configuring Coverage.py¶
Install coverage with pip install coverage. It includes a C extension for speed-up, so it's worth checking that this installs properly - see the installation docs for information.
Then set up a configuration file for your project. The default file name is .coveragerc, but since that's a hidden file I prefer to use the option to store the configuration in setup.cfg.
This INI file was originally used only by setuptools, but now many tools have the option to read their configuration from it. For Coverage.py, we put our settings there in sections prefixed with coverage:.
The Run Section¶
This is where we tell Coverage.py what coverage data to gather.
We tell Coverage.py which files to check with the source option. In a typical Django project this is as easy as specifying the current directory (source = .) or the app directory (source = myapp/*). Add it like so:
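Assuming you measure the whole project from the current directory, a minimal sketch of that section in setup.cfg looks like:

```ini
[coverage:run]
source = .
```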
(Remove the coverage: prefix if you're using .coveragerc.)
An issue I've seen on a Django project is Coverage.py finding Python files from a nested node_modules. It seems Python is so great even JavaScript projects have a hard time resisting it! We can tell coverage to ignore these files by adding omit = */node_modules/*.
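For example, the run section might grow an omit line like this (a sketch assuming node_modules sits somewhere under the project):

```ini
[coverage:run]
source = .
omit = */node_modules/*
```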
When you come to a fork in the road, take it.
—Yogi Berra
An extra I like to add is branch coverage. This ensures that your code runs through both the True and False paths of each conditional statement. You can set this up by adding branch = True in your run section.
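Putting the options so far together, a sketch of the run section with branch coverage enabled:

```ini
[coverage:run]
branch = True
source = .
omit = */node_modules/*
```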
As an example, take this code:
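A hypothetical function in that spirit (the widget and its colour attribute are made up for illustration):

```python
def polish(widget):
    # Only red widgets get the extra shine.
    if widget.colour == "red":
        widget.shine()
    return widget
```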
With branch coverage off, we can get away with tests that pass in a red widget. Really, we should be testing with both red and non-red widgets. Branch coverage enforces this, by counting both paths from the if.
The Report Section¶
This is where we tell Coverage.py how to report the coverage data back to us.
I like to add three settings here.
- fail_under = 100 requires us to reach that sweet 100% goal to pass. If we're under our target, the report command fails.
- show_missing = True adds a column to the report with a summary of which lines (and branches) the tests missed. This makes it easy to go from a failure to fixing it, rather than using the HTML report.
- skip_covered = True avoids outputting file names with 100% coverage. This makes the report a lot shorter, especially if you have a lot of files and are getting to 100% coverage.
Add them like so:
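A sketch of the section with all three settings:

```ini
[coverage:report]
fail_under = 100
show_missing = True
skip_covered = True
```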
(Again, remove the coverage: prefix if you're using .coveragerc.)
Template Coverage¶
Your Django project probably has a lot of template code. It's a great idea to test its coverage too. This can help you find blocks or whole template files that the tests didn't run.
Lucky for us, the primary plugin listed on the Coverage.py plugins page is the Django template plugin.
See the django_coverage_plugin PyPI page for its installation instructions. It just needs a pip install and activation in [coverage:run].
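The activation is a plugins line in the run section, something like:

```ini
[coverage:run]
plugins = django_coverage_plugin
```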
Git Ignore¶
If your project is using Git, you'll want to ignore the files that Coverage.py generates. GitHub's default Python .gitignore already ignores Coverage's files. If your project isn't using this, add these lines in your .gitignore:
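That means Coverage.py's data file and the HTML report directory (covered below):

```
.coverage
htmlcov/
```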
Using Coverage in Tests¶
This bit depends on how you run your tests. I prefer using pytest with pytest-django. However, many projects use the default Django test runner, so I'll describe that first.
With Django’s Test Runner¶
If you're using manage.py test, you need to change the way you run it, wrapping it with three coverage commands like so:
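One plausible set, assuming the configuration above: erase any old data, run the tests under coverage, then report.

```sh
coverage erase
coverage run manage.py test
coverage report
```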
99% - looks like I have a little bit of work to do on my test application!
Having to run three commands sucks. That's three times as many commands as before!
We could wrap the three commands with a shell script. For example:
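A minimal sketch, assuming it lives at the project root next to manage.py and passes any extra arguments through to the test command:

```sh
#!/bin/sh
set -e
coverage erase
coverage run manage.py test "$@"
coverage report
```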
Update (2020-01-06): Previously the below section recommended a custom test management command. However, since this will only be run after some imports, it's not possible to record 100% coverage this way. Thanks to Hervé Le Roy for reporting this.
However, there's a more integrated way of achieving this inside Django. We can patch manage.py to call Coverage.py's API and measure coverage when we run the test command. Here's how, based on the default manage.py in Django 3.0:
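A sketch of the patched file (the settings module name example.settings is a placeholder for your project's own):

```python
#!/usr/bin/env python
"""Django's command-line utility, patched to measure coverage when running tests."""
import os
import sys


def main():
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "example.settings")

    running_tests = len(sys.argv) > 1 and sys.argv[1] == "test"
    if running_tests:
        # Start measuring before Django and the project code are imported.
        import coverage

        cov = coverage.Coverage()
        cov.erase()
        cov.start()

    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

    if running_tests:
        # Stop measuring, save the data, and report, failing the run if
        # we're under 100% coverage.
        cov.stop()
        cov.save()
        covered = cov.report()
        if covered < 100:
            sys.exit(1)


if __name__ == "__main__":
    main()
```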
Notes:
- The two customizations are the blocks before and after the execute_from_command_line block, guarded with if running_tests:.
- You need to add manage.py to omit in the configuration file, since it runs before coverage starts. For example, see the configuration sketch after these notes. (It's fine, and good, to put the omitted paths on multiple lines. Ignore the furious red from my blog's syntax highlighter.)
- The .report() method doesn't exit for us like the command-line method does. Instead we do our own test on the returned covered amount. This means we can remove fail_under from the [coverage:report] section in our configuration file.
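The multi-line omit mentioned above might then look something like this sketch:

```ini
[coverage:run]
branch = True
source = .
omit =
    */node_modules/*
    manage.py
```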
Run the tests again and you'll see it in use:
Yay!
(Okay, it's still 99%. Spoiler: I'm actually not going to fix that in this post because I'm lazy.)
With pytest¶
It's less work to set up Coverage testing in the magical land of pytest. Simply install the pytest-cov plugin and follow its configuration guide.
The plugin will ignore the [coverage:report] section and source setting in the configuration, in favour of its own pytest arguments. We can set these in our pytest configuration's addopts setting. For example, in our pytest.ini we might have:
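A sketch using pytest-cov's flags for the earlier settings (one reasonable combination; adjust paths and thresholds for your project):

```ini
[pytest]
addopts = --cov --cov-fail-under=100 --cov-report=term-missing:skip-covered
```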
(Ignore the angry red from my blog’s syntax highlighter.)
Run pytest again and you'll see the coverage report at the end of the pytest report:
Hooray!
(Yup, still 99%.)
Browsing the Coverage HTML Report¶
The terminal report is great, but it can be hard to join this data back with your code. Looking at uncovered lines requires:
- Remembering the file name and line numbers from the terminal report
- Opening the file in your text editor
- Navigating to those lines
- Repeating this for each set of lines in each file
This gets tiring quickly!
Coverage.py has a very useful feature to automate this merging, the HTML report.
After running coverage run, the coverage data is stored in the .coverage file. Run this command to generate an HTML report from this file:
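That command is Coverage.py's html subcommand:

```sh
coverage html
```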
This creates a folder called htmlcov. Open up htmlcov/index.html and you'll see something like this:
Click on an individual file to see line by line coverage information:
The highlighted red lines are not covered and need work.
Django itself uses this on its Jenkins test server. See the “HTML Coverage Report” on the djangoci.com project django-coverage.
With PyCharm¶
Coverage.py is built into this editor, in the “Run <name> with coverage” feature.
This is great for individual development, but less so for a team, as other developers may not use PyCharm. Also it won't be automatically run in your tests or your Continuous Integration pipeline.
See more in this JetBrains feature spotlight blog post.
Is 100% (Branch) Coverage Too Much?¶
Some advocate for 100% branch coverage on every project. Others are skeptical, and even believe it to be a waste of time.
For examples of this debate, see this Stack Overflow question and this one.
Like most things, it depends.
First, it depends on your project's maturity. If you're writing an MVP and moving fast with few tests, coverage will definitely slow you down. But if your project is supporting anything of value, it's an investment for quality.
Second, it depends on your tests. If your tests are low quality, Coverage won't magically improve them. That said, it can be a tool to help you work towards smaller, better targeted tests.
100% coverage certainly does not mean your tests cover all scenarios. Indeed, it's impossible to cover all scenarios, due to the combinatorial explosion from multiplying branches. (See all-pairs testing for one way of tackling this explosion.)
Third, it depends on your code. Certain types of code are harder to test, for example branches dealing with concurrent conditions.
IF YOU’RE HAVING CONCURRENCY PROBLEMS I FEEL BAD FOR YOU SON
99 AIN’T GOT I BUT PROBLEMS CONCURRENCY ONE
—[@quinnypig on Twitter](https://twitter.com/QuinnyPig/status/1110567694837800961)
Some tools, such as unittest.mock, help us reach those hard branches. However, it might be a lot of work to cover them all, taking time away from other means of verification.
Fourth, it depends on your other tooling. If you have good code review, quality tests, fast deploys, and detailed monitoring, you already have many defences against bugs. Perhaps 100% coverage won't add much, but normally these areas are all a bit lacking or not possible. For example, if you're working on a solo project, you don't have code review, so 100% coverage can be a great boon.
To conclude, I think that coverage is a great addition to any project, but it shouldn't be the only priority. A pragmatic balance is to set up Coverage for 100% branch coverage, but to be unafraid of adding # pragma: no cover. These comments may be ugly, but at least they mark untested sections intentionally. If code marked no cover crashes in production, you should be less surprised.
Also, review these comments periodically with a simple search. You might learn more and change your mind about how easy it is to test those sections.
Fin¶
Go forth and cover your tests!
If you used this post to improve your test suite, I'd love to hear your story. Tell me via Twitter or email - contact details are on the front page.
—Adam
Thanks to Aidas Bendoraitis for reviewing this post.