Understanding the Unit Tests

The unit test suite for django-simple-deploy is likely to grow increasingly complex over time. If you haven't done extensive testing, it can be a little overwhelming to dig into the test suite as a whole. The goal of this page is to explain the structure of the unit test suite, so that it continues to run efficiently and effectively as the project evolves.

The complexity comes from trying to do all of the following:

  • Run tests that focus on the CLI.
  • Run tests that target multiple platforms.
  • Run tests that focus on different dependency management systems.
  • Run tests against nested and unnested versions of the sample project. (not yet implemented)
  • Run tests against multiple versions of Django. (not yet implemented)
  • Run tests against multiple versions of Python. (not yet implemented)

For every test run listed here, simple_deploy needs to be called against a fresh version of the sample project. That's a lot of testing!


Some of what you see here may be a little out of date, e.g. exact directory and file listings. However, the ideas here are kept fully up to date. If you see something in the unit tests that you don't understand, please feel free to open an issue and ask about it.

I recently removed all shell scripts from the unit tests, but haven't had time to update this document yet. Wherever you see a shell script below, that work has been moved to a Python function.

Organization of the test suite

Here's the file structure of the unit test suite:

unit_tests $ tree -L 4
├── platform_agnostic_tests
├── platforms
│   ├── fly_io
│   │   └── reference_files
│   │       ├── Dockerfile
│   │       ├── fly.toml
│   │       └── requirements.txt
│   ├── heroku
│   │   └── reference_files
│   │       ├── Procfile
│   │       ├── placeholder.txt
│   │       └── requirements.txt
│   └── platform_sh
│       └── reference_files
│           ├── requirements.txt
│           └── services.yaml
└── utils

Let's go through this from top to bottom.

We need a file at the root of unit_tests/ so that nested test files can import from utils/.

This file contains four fixtures[1]:

  • tmp_project() creates a temporary directory where we can set up a full virtual environment for the sample project we're going to test against. It calls utils/, which copies the sample project, builds a virtual environment, makes an initial commit, and adds simple_deploy to the test project's INSTALLED_APPS. It has session-level scope[2], and returns the absolute path to the temporary directory where the test project was created.
  • reset_test_project() resets the sample project so we can run simple_deploy repeatedly, without having to rebuild the entire test project for each set of tests. It does this by calling utils/. This fixture has module-level scope.
  • run_simple_deploy() has a module-level scope, with autouse=True. This means the fixture runs automatically for all test modules in the test suite. An if block in the fixture makes sure it exits without doing anything if a specific platform is not being targeted. This fixture runs reset_test_project() immediately before running simple_deploy.
  • pkg_manager() has a function-level scope. Any function that includes this fixture will have access to the parameter that specifies which dependency management system is currently being tested. It will return req_txt, poetry, or pipenv. We use this to know what changes to expect in the modified project.
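The parametrized pkg_manager() fixture can be sketched roughly like this. This is a minimal sketch using pytest's params argument, not the suite's actual implementation; the PKG_MANAGERS name is introduced here for illustration:

```python
import pytest

# The three dependency management systems the suite tests against.
PKG_MANAGERS = ["req_txt", "poetry", "pipenv"]

@pytest.fixture(params=PKG_MANAGERS)
def pkg_manager(request):
    """Report which dependency management system is being tested.

    Because of params, every test that requests this fixture is
    collected once per package manager.
    """
    return request.param
```

Any test function that lists pkg_manager as a parameter runs three times, once per value.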


This directory contains a set of tests that don't relate to any specific platform.

  • tests invalid ways users might call simple_deploy without a valid --platform argument.


This directory contains all platform-specific test files. Let's take a closer look at fly_io/:

  • reference_files/ contains files as they should look after a successful run of simple_deploy targeting deployment to this platform.
  • does the following:
    • Tests modifications to project files such as requirements.txt and .gitignore.
    • Verifies the creation of platform-specific files such as fly.toml, Dockerfile, and .dockerignore.
    • Tests that a log file is created, and that the log file contains platform-specific output.

The other directories in platforms/ contain similar files for the other supported platforms.


The file heroku/ is left over from an earlier, clunky approach to testing special situations; it will likely be removed before long.


The utils/ directory contains scripts that are used by multiple test files.

  • and are used to verify that the test sample project hasn't been changed after running invalid variations of the simple_deploy command.
  • modifies invalid commands to include the --unit-testing flag. This approach allows test functions to contain the exact variations of the simple_deploy command that we expect end users to accidentally use.
  • lets fixtures make commits against the test project.
  • contains one function, check_reference_file(), that's used in all platform-specific tests. As the tests grow and get refactored periodically, I expect this module to expand to more than this one function.


Some people think utils/ is a bad name, like misc/, but it's used here to convey that these scripts have utility across multiple test modules.

Testing a single platform

With all that said, let's look at what actually happens when we test a single platform; we'll see in what order the resources described above are used. As an example, let's look at what happens when we issue the following call:

(dsd_env)unit_tests $ pytest platforms/fly_io/

Collecting tests

This command tells pytest to find all the files starting with test_ in the platforms/fly_io/ directory, and collect any function starting with test_ in those files. In this case, that's just one test file, which contains seven functions starting with test_.

Once pytest finds a test function, it begins to set up the test run.

Fixtures, again

Here's where fixtures come into play. pytest first runs any fixture with a scope that's relevant to the tests that are about to be run:

  • All fixtures with scope="session" will be run.
  • Any fixture with a scope relevant to the test function that's about to be run, with autouse=True, will be run.
  • Any fixture whose name appears in the test function's list of parameters will be run.

The tmp_project() fixture

We have one fixture with session-level scope: tmp_project(). Here are the most important parts of tmp_project():

@pytest.fixture(scope='session')
def tmp_project(tmp_path_factory):
    tmp_proj_dir = tmp_path_factory.mktemp('blog_project')
    cmd = f'sh utils/ -d {tmp_proj_dir} -s {sd_root_dir}'
    subprocess.run(cmd, shell=True)
    return tmp_proj_dir

This fixture calls a built-in fixture, tmp_path_factory, which allows us to request temporary directories for use during test runs. These directories are managed entirely by pytest, so we don't have to worry about cleaning them up later.

We make a temporary directory, and assign the full path of this directory to tmp_proj_dir. We then run the setup script. Finally, we return tmp_proj_dir. Any test function with tmp_project in its parameter list will have access to this value.

The file

Here are the most important parts of this script:

rsync -ar ../sample_project/blog_project/ "$tmp_dir"
# Build a venv and install requirements.
python3 -m venv b_env
source b_env/bin/activate
pip install --no-index --find-links="$sd_root_dir/vendor/" -r requirements.txt

# Install local version of simple_deploy.
pip install -e "$sd_root_dir/"
# Make an initial commit.
git init
git add .
git commit -am "Initial commit."
git tag -am '' INITIAL_STATE

# Add simple_deploy to INSTALLED_APPS.
sed -i "" "s/# Third party apps./# Third party apps.\n    'simple_deploy',/" blog/

This file does a few important things, which last for the entire testing session:

  • Copy the sample blog project to the location specified by tmp_proj_dir.
  • Build a virtual environment.
  • Make an editable install of django-simple-deploy. This means it installs from your local copy, not from PyPI. Any changes you make to your local version of django-simple-deploy are picked up by the unit tests.
  • Make an initial commit of the test project, with the tag INITIAL_STATE.
  • Add simple_deploy to the test project's INSTALLED_APPS.

The run_simple_deploy() fixture

This fixture has a module-level scope, and autouse=True. It is called once for each test module. Here's the function definition:

@pytest.fixture(scope='module', autouse=True)
def run_simple_deploy(reset_test_project, tmp_project, request):

The important thing to note here is the inclusion of reset_test_project in the function's parameter list. When pytest sees this, it runs the reset_test_project() fixture before running the code in run_simple_deploy().

The reset_test_project() fixture

This fixture just calls a short reset script, so let's look at that file:

git reset --hard INITIAL_STATE
sed -i "" "s/# Third party apps./# Third party apps.\n    'simple_deploy',/" blog/

This file calls git reset using the tag INITIAL_STATE, and then adds simple_deploy back into the test project's INSTALLED_APPS.

This reset happens once per test module. It's good to note that this happens even if only one test module is being run. This fixture has no effect in that situation, but it also doesn't cause any harm to the test run.

Back to run_simple_deploy()

Now, back to run_simple_deploy(). Here are the most important parts:

@pytest.fixture(scope='module', autouse=True)
def run_simple_deploy(reset_test_project, tmp_project, request):
    # Pull the platform name from the path of the test module being run.
    re_platform = r".*/unit_tests/platforms/(.*?)/.*"
    m = re.match(re_platform, str(request.fspath))
    if m:
        platform = m.group(1)
    else:
        # No platform in the path; nothing to run.
        return
    cmd = f"sh utils/ -d {tmp_project} -p {platform} -s {sd_root_dir}"
    subprocess.run(cmd, shell=True)

This function uses a regular expression to find out which platform is currently being tested; the result is assigned to platform. If there's no platform name, we don't need to run simple_deploy, so the fixture returns early. Otherwise, it runs a shell script to call simple_deploy.
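You can exercise this regular expression with a quick standalone example (the module path here is hypothetical):

```python
import re

re_platform = r".*/unit_tests/platforms/(.*?)/.*"

# A hypothetical path to a test module that pytest is running.
path = "/tmp/django-simple-deploy/unit_tests/platforms/fly_io/some_test_module.py"

m = re.match(re_platform, path)
platform = m.group(1) if m else None
print(platform)  # fly_io
```

The non-greedy group (.*?) captures just the directory name immediately after platforms/, which is exactly the platform identifier.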

The file


if [ "$target_platform" = fly_io ]; then
    python simple_deploy --unit-testing --platform "$target_platform" --deployed-project-name my_blog_project
elif [ "$target_platform" = platform_sh ]; then
    python simple_deploy --unit-testing --platform "$target_platform" --deployed-project-name my_blog_project
elif [ "$target_platform" = heroku ]; then
    python simple_deploy --unit-testing --platform "$target_platform"

This file calls simple_deploy with the --unit-testing flag, and any other parameters that are required for unit testing by the platform that's being tested.

We are finally finished with all of the fixtures, so we'll head back to one of the test functions.

A single test function

Here's one of the test functions that we can focus on:

import unit_tests.utils.ut_helper_functions as hf

def test_creates_dockerfile(tmp_project, pkg_manager):
    """Verify that dockerfile is created correctly."""
    if pkg_manager == "req_txt":
        hf.check_reference_file(tmp_project, 'dockerfile', 'fly_io')
    elif pkg_manager == "poetry":
        # The Poetry branch passes the reference filename explicitly
        # (the exact filename is omitted here).
        hf.check_reference_file(tmp_project, "dockerfile", "fly_io",
            reference_filename=...)

The test file imports the utils/ module, which contains functions that are useful to test modules in different platform-specific directories.

The function test_creates_dockerfile() has two parameters, tmp_project and pkg_manager. Here pytest takes the return value from the tmp_project() fixture, and assigns it to the variable tmp_project. This can be a little confusing; we have a fixture called tmp_project(), but in the current test function tmp_project refers to the return value of tmp_project(). If this is confusing, keep in mind that in this test function, tmp_project is the path to the directory containing the test project.

The pkg_manager fixture tells us which dependency management system is currently being tested: a bare requirements.txt file, Poetry, or Pipenv. We need to know this because each of these uses a slightly different Dockerfile.

In the body of the test function we check the current value of pkg_manager and then call check_reference_file(), which compares a file from the test project against the corresponding reference file. For req_txt, we make sure the Dockerfile that's created during the test run matches the reference unit_tests/platforms/fly_io/reference_files/Dockerfile. If your current local version of django-simple-deploy generates a Dockerfile for deployments that doesn't match this file, you'll know. For Poetry, the reference filename doesn't match the generic generated filename, so we pass the optional reference_filename argument.


Reference filenames usually follow a platform_name.generic_filename.file_extension pattern. Prepending the platform name lets us keep the original file extension, which preserves syntax highlighting when opening these files.


That's a whole lot of setup work for a relatively short test function! But the advantage of all that setup work is that we can write hundreds, or thousands of small test functions without adding much to the setup work. Each test also runs three times: once for req_txt, once for poetry, and once for pipenv.

The rest of the test functions in this module work in a similar way, except for test_log_dir(). That function inspects several aspects of the log directory, and the log file that should be found there.


The setup work will become more complex as we start to test multiple versions of Django, multiple versions of Python, and multiple OSes. But the overall approach described here shouldn't change significantly. We'll still have a bunch of setup work followed by a large number of smaller, specific test functions.

Testing multiple platforms

What's the difference when we test multiple platforms? Not much; let's consider what happens when we test against two platforms in one test run:

(dsd_env)unit_tests $ pytest platforms/fly_io platforms/platform_sh
==================== test session starts ====================
platform darwin -- Python 3.10.0, pytest-7.1.2, pluggy-1.0.0
rootdir: django-simple-deploy
collected 13 items

platforms/fly_io/ .......
platforms/platform_sh/ ......
==================== 13 passed in 16.07s ====================

Everything described above in Testing a single platform happens in this test run as well. When all of the test functions in the first test module have been run, pytest moves on to the second test module.

In this case, that next test module is the one for platform_sh. Since the tmp_project() fixture has session-level scope, it doesn't run again. But all of the other fixtures have module-level scope, so they run again. Everything from run_simple_deploy() onward happens again, and at the end of all the setup work the test functions in the second module run.

Note that the first test module doesn't finish running for about 12s (on my system). Much of that time is spent building the test project environment. Resetting the test project environment takes much less time, so the subsequent test modules finish much more quickly. It doesn't take much more time to test multiple platforms than it takes to test a single platform.

Running platform-agnostic tests

The platform-agnostic tests are structured a little differently. Right now these are just a couple tests to verify that invalid CLI calls that don't target a specific platform are handled appropriately.

Here are the relevant parts of this test module:

@pytest.fixture(scope="module", autouse=True)
def commit_test_project(reset_test_project, tmp_project):
    commit_msg = "Start with clean state before calling invalid command."
    cmd = f"sh utils/ {tmp_project}"

# --- Helper functions ---

def make_invalid_call(tmp_proj_dir, invalid_sd_command):
    cmd = f'sh utils/ {tmp_proj_dir}'

def check_project_unchanged(tmp_proj_dir, capfd):
    captured = capfd.readouterr()
    assert "On branch main\nnothing to commit, working tree clean" in captured.out

    captured = capfd.readouterr()
    assert "Start with clean state before calling invalid command." in captured.out

# --- Test modifications to ---

def test_bare_call(tmp_project, capfd):
    """Call simple_deploy with no arguments."""
    invalid_sd_command = "python simple_deploy"

    make_invalid_call(tmp_project, invalid_sd_command)
    captured = capfd.readouterr()

    assert "The --platform flag is required;" in captured.err
    assert "Please re-run the command with a --platform option specified." in captured.err
    assert "$ python simple_deploy --platform platform_sh" in captured.err
    check_project_unchanged(tmp_project, capfd)

def test_invalid_platform_call(tmp_project, capfd):

This test module has a single fixture with module-level scope, and autouse=True. This fixture is run once as soon as this test module is loaded. The fixture makes sure the test project is reset, and then makes a new commit. We want to verify that invalid commands don't change the project, and that's easiest to do with a clean git status.

Let's just look at the first test function, test_bare_call(). The invalid_sd_command line shows the exact command we want to test, python simple_deploy. This command is passed to the helper function make_invalid_call(), which calls a script in utils/. Unit tests require the --unit-testing flag, which the shell script adds before making the call to simple_deploy. The end result is that our test functions get to contain the exact invalid commands that we expect end users to accidentally use.

The test function makes sure the user sees an appropriate error message, by examining the captured terminal output. This is (mostly) the same output that end users will see. It also calls the helper function check_project_unchanged(), which makes sure there's a clean git status and that the last log message (returned by git log -1 --pretty=oneline) contains the same commit message that was used in the fixture at the top of this module.

Running the entire test suite

Running the entire test suite puts together everything described above:

(dsd_env)unit_tests $ pytest
==================== test session starts ===============================
platform darwin -- Python 3.10.0, pytest-7.1.2, pluggy-1.0.0
rootdir: django-simple-deploy
collected 93 items

platform_agnostic_tests/ .........
platform_agnostic_tests/ sss
platform_agnostic_tests/ ...
platforms/fly_io/ ...........................
platforms/heroku/ ...........................
platforms/platform_sh/ ........................
==================== 90 passed, 3 skipped in 25.78s ====================

The test project is set up once, and reset for each test module that's run.

Examining the modified test project

It can be really helpful to see exactly what a test run of simple_deploy does to the sample project. The original sample project is in sample_project/, and the modified version after running unit tests is stored wherever pytest makes tmp directories on your system. That's typically a subfolder in your system's default temporary directory.

A quick way to find the exact path to the temp directory is to uncomment the assert line in the tmp_project() fixture:

def tmp_project(tmp_path_factory):
    tmp_proj_dir = tmp_path_factory.mktemp('blog_project')

    # To see where pytest creates the tmp_proj_dir, uncomment the following line.
    #   All tests will fail, but the AssertionError will show you the full path
    #   to tmp_proj_dir.
    # assert not tmp_proj_dir

    return tmp_proj_dir

The next time you run the unit tests, this assert will fail, and the output will show you the path that was created:

(dsd_env)unit_tests $ pytest -x
>    assert not tmp_proj_dir
E    AssertionError: assert not PosixPath('/private/var/folders/md/4h9n_5l93qz76s_8sxkbnxpc0000gn/T/pytest-of-eric/pytest-274/blog_project0')

You can navigate to this folder in a terminal, and interact with the project in any way you want. It has a virtual environment, so you can activate it and run the project if you want.


The command pytest -x tells pytest to run the full test suite, but stop after the first failed test.

Updating reference files

Examining the test project is an efficient way to update reference files. Say you've just updated the code for generating a Dockerfile for a specific package management system, e.g. Poetry. You can run the test suite with pytest -x, and it will fail at the test that checks the Dockerfile for that platform when Poetry is in use. You can examine the test project, open the Dockerfile, and verify that it was generated correctly for the sample project. If it was, copy this file into the reference_files/ directory, and the tests should pass.

Examining failures for parametrized tests

When a parametrized test fails, there's some important information in the output. For example, here's some sample output from a failed test run:

FAILED platforms/platform_sh/[poetry] - AssertionError

This tells us that the test test_platform_app_yaml_file failed when the value of pkg_manager was poetry. So, we need to look at how simple_deploy configures that file when Poetry is in use.

Updating packages in vendor/

The sole purpose of the vendor/ directory is to facilitate unit testing. To add a new package to the directory:

(dsd_env) $ pip download --dest vendor/ package_name

To upgrade all packages in vendor/:

$ rm -rf vendor/
$ pip download --dest vendor/ -r sample_project/blog_project/requirements.txt

pytest references

  1. For the purposes of this project, a fixture is a function that prepares for a test, or set of tests. For example, we use a fixture to copy the sample project from django-simple-deploy/sample_project/blog_project/ to a temporary directory, and then build a full virtual environment for that project. Another fixture is used to call simple_deploy against the sample project; yet another fixture is used to reset the project for subsequent tests. To learn more, see About fixtures and How to use fixtures in the pytest documentation.

  2. Every fixture has a scope, which controls how often a fixture is loaded when it's needed.

    • scope="session": The fixture is loaded once for the entire test run; the fixture that sets up the sample test project has session-level scope.
    • scope="module": The fixture is loaded once for each module where it's used; the fixture that resets the sample test project has module-level scope.
    • The other possibilities for scope are function, class, and package. These scopes are not currently used in the django-simple-deploy test suite. The default scope is function.
    • Read more at Scope: sharing fixtures across classes, modules, packages or session