Writing Tests for Galaxy¶
Other Sources of Documentation
An Overview of Galaxy Tests
Backend/Python Unit Tests
Frontend/ES6 Unit Tests
Tool Framework Tests
API Tests
Integration Tests
Selenium Tests
Selenium Integration Tests
Running Python Tests
Other Sources of Documentation¶
Over the last several years, the most up-to-date documentation on the
structure and running of Galaxy tests has been in the help text for
the run_tests.sh script shipped with Galaxy (./run_tests.sh --help).
High-level information on Galaxy’s CI process can be found in the
Galaxy Code Architecture Slides
and the corresponding YouTube playlist.
Some more specifics on running and writing Galaxy client unit tests
can be found in client/README.md
of the Galaxy codebase.
An Overview of Galaxy Tests¶
Galaxy has many test suites and frameworks. A potentially overwhelming question at first is: where does a given test belong? What testing suite or framework should it be added to? The following questions should help you find the right framework and documentation for a test you wish to write.
Does this test require a running server and database to execute?
No
If no, this test should probably be implemented as a Galaxy unit test. Unit tests generally, and Galaxy ones specifically, are especially useful for complex components that are well architected to be tested in isolation. The best candidates for unit tests are components that shield a lot of potential complexity from their consumers and that do not have many dependencies - especially not on the database or a web server.
Is the component under test a client (ES6) or backend (Python) component?
Client/ES6
These tests should be placed in client/src directly and executed via Jest. Check out Frontend/ES6 Unit Tests below for more information.
Backend/Python
These tests should be placed in test/unit or written as doctests and executed via pytest. Check out Backend/Python Unit Tests below for more information.
Yes
In this case you’re looking at some sort of functional test that requires a running Galaxy server and the Galaxy database. All of these tests are currently implemented in Python.
Does this test require the Galaxy web interface?
No
Most of the time, we have found that these tests work best when they use the Galaxy API to drive the test. These tests are all Python tests executed by pytest. The testing frameworks provide everything you need to spin up a Galaxy instance, communicate with its API to invoke the component under test, and write expectations about the outcome of the test. There are three different (but closely related) frameworks for doing this, and choosing the appropriate one comes down to the following questions.
Does this test require a special configuration of Galaxy?
No
Does this test check only the functionality of Galaxy tools?
Yes
In this case you do not actually need to deal with the Galaxy API directly; you can just create a Galaxy tool test to check that the required functionality works as expected. These are called Galaxy tool framework tests and are located in test/functional/tools/. Check out Tool Framework Tests below for more information.
No
In this case Galaxy API tests are likely the most appropriate way to implement the desired test. These tests are located in lib/galaxy_test/api. Check out API Tests below for more information.
Yes
Tests that require a custom Galaxy with a very specific configuration are called Galaxy integration tests and are located in test/integration. Check out Integration Tests below for more information.
Yes
The tests that exercise the Galaxy user interface and require a functional Galaxy server use Selenium to drive interaction with the Galaxy web interface. There are two frameworks or suites available for building tests like this, and both provide the same high-level access to the Galaxy API as the tests above. The frameworks also take care of starting the Galaxy server.
The choice between these two frameworks comes down to the answer to the following question.
Does this test require a special configuration of Galaxy?
No
These tests should be placed into lib/galaxy_test/selenium and implemented using the Selenium Tests framework described below.
Yes
Tests that require both a very specific Galaxy configuration and the ability to drive a running Galaxy web interface should be placed into test/integration_selenium. Check out Selenium Integration Tests below for more information.
Backend/Python Unit Tests¶
These are Python unit tests either defined inside of test/unit or via
doctests within a Python component. These should generally not require
a Galaxy instance and should quickly test just a component or a few
components of Galaxy’s backend code.
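As a minimal, hypothetical sketch of the two styles (the function and file names here are invented purely for illustration):
# A doctest embedded directly in a (hypothetical) Galaxy component:
def trim_prefix(value: str, prefix: str) -> str:
    """Strip ``prefix`` from ``value`` if present.

    >>> trim_prefix("galaxy.tools", "galaxy.")
    'tools'
    """
    return value[len(prefix):] if value.startswith(prefix) else value


# The same behavior covered by a stand-alone unit test, e.g. in a file under test/unit/:
def test_trim_prefix():
    assert trim_prefix("galaxy.tools", "galaxy.") == "tools"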
doctests or stand-alone tests?¶
doctests tend to be more brittle and more restrictive. I (@jmchilton) would strongly suggest writing stand-alone unit test files separate from the code itself unless the tests are so clean and so isolated that they serve as high-quality documentation for the component under test.
Slow ‘Unit’ Tests¶
There are tests in Galaxy that test integration with external sources that do not require a full Galaxy server. While these aren’t really “unit” tests in a traditional sense, they are unit tests from a Galaxy perspective because they do not depend on a Galaxy server.
These tests should be marked as requiring the environment variable
GALAXY_TEST_INCLUDE_SLOW to run.
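A minimal sketch of how such a guard might look, assuming a plain pytest skipif on the environment variable (the exact marker or helper Galaxy uses may differ):
import os

import pytest


# Hypothetical slow test: only runs when GALAXY_TEST_INCLUDE_SLOW is set.
@pytest.mark.skipif(
    not os.environ.get("GALAXY_TEST_INCLUDE_SLOW"),
    reason="slow external test; set GALAXY_TEST_INCLUDE_SLOW to enable",
)
def test_external_source_roundtrip():
    # ... exercise the external integration here ...
    pass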
Continuous Integration¶
The Python unit tests are run against each pull request to Galaxy using
CircleCI. If any of these tests fail, the pull request will be marked
red. This test suite is moderately prone to transient failures that are
unrelated to the pull request being tested; if it fails on a pull request
whose changes seem unrelated, ping the Galaxy committers on the pull request
and request a re-run. The CircleCI test definition for these tests is
located in .circleci/config.yml below Galaxy’s root.
Frontend/ES6 Unit Tests¶
Detailed information on writing Galaxy client tests can be found in client/README.md.
Continuous Integration¶
The client tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is moderately prone to transient failures that are
unrelated to the pull request being tested; if it fails on a pull request
whose changes seem unrelated, ping the Galaxy committers on the pull request
and request a re-run. The GitHub Actions workflow definition for these tests
is located in .github/workflows/jest.yaml below Galaxy’s root.
Tool Framework Tests¶
A great deal of the complexity and interface exposed to Galaxy plugin developers comes in the form of Galaxy tool wrapper definition files. Likewise, a lot of the legacy behavior Galaxy needs to maintain is maintained for older tool definitions. For this reason, a lot of Galaxy’s complex internals can be tested simply by running a tool test. Obviously Galaxy is much more complex than this, but a surprising number of Galaxy’s tests are simply tool tests. This suite of tools whose tests are exercised is called the “Tool Framework Tests” or simply “Framework Tests”.
Adding a tool test is as simple as finding a related tool in the sample
tools (test/functional/tools) and adding a test block to that file,
or adding a new tool to this directory and referencing it in the
sample tool configuration XML (test/functional/tools/samples_tool_conf.xml).
General information on writing Galaxy Tool Tests can be found in Planemo’s documentation - for instance in the Test-Driven Development section.
Continuous Integration¶
The Tool framework tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is fairly stable; transient failures unrelated to the
pull request being tested are uncommon. The GitHub Actions workflow
definition for these tests is located in .github/workflows/framework.yaml
below Galaxy’s root.
API Tests¶
These tests are located in lib/galaxy_test/api and test various aspects
of the Galaxy API, as well as general backend aspects of Galaxy using the API.
An Example: lib/galaxy_test/api/test_roles.py¶
This test file shows a fairly typical API test. It demonstrates the basic
structure of a test, how to GET and POST against the API, and how to use
both typical user and admin user-only functionality.
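A hedged sketch of what a test in this style can look like; the base class location, helper method signatures, and payload below are recalled from the framework’s conventions and may differ from the current code:
from galaxy_test.base.api import ApiTestCase  # assumed import path for the base class


class ExampleRolesApiTestCase(ApiTestCase):

    def test_index(self):
        # GET against the API via the framework's helper.
        response = self._get("roles")
        response.raise_for_status()
        assert isinstance(response.json(), list)

    def test_create_requires_admin(self):
        # POST against the API; creating a role is an admin-only operation.
        payload = {"name": "example-role", "description": "created by an example test"}
        response = self._post("roles", data=payload, admin=True)
        response.raise_for_status()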
Populating Test Data with lib/galaxy_test/base/populators.py¶
The test_roles.py example above also creates a DatasetPopulator
object that it uses to get some common information from the configured
Galaxy server under test. Populators are used extensively throughout
API tests, as well as integration and Selenium tests, both to populate
data to test (histories, workflows, collections, libraries, etc.)
and to access information from the Galaxy server (e.g. fetch
information from datasets, users, Galaxy’s configuration, etc.).
Populators and API tests in general make heavy use of the requests library for Python.
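A hedged sketch of typical populator usage inside such a test; the constructor argument and method names reflect common patterns in the test code but may not match current signatures exactly:
from galaxy_test.base.api import ApiTestCase  # as in the sketch above
from galaxy_test.base.populators import DatasetPopulator  # assumed import path


class ExamplePopulatorTestCase(ApiTestCase):

    def setUp(self):
        super().setUp()
        # galaxy_interactor is the API client the framework configures for each test.
        self.dataset_populator = DatasetPopulator(self.galaxy_interactor)

    def test_dataset_content(self):
        # Populate a history with a small dataset, then read its contents back.
        history_id = self.dataset_populator.new_history()
        hda = self.dataset_populator.new_dataset(history_id, content="1\t2\t3", wait=True)
        content = self.dataset_populator.get_history_dataset_content(history_id, dataset=hda)
        assert content.strip() == "1\t2\t3"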
API Test Assertions¶
There is a module of common assertions, galaxy_test.base.api_asserts,
used to check API response status codes, dictionary contents, and
Galaxy-specific error messages.
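A hedged sketch of how these assertion helpers are typically used (the exact function names are recalled from the module and may differ):
from galaxy_test.base import api_asserts


def check_index_response(response):
    # Fail with a descriptive message if the status code is unexpected.
    api_asserts.assert_status_code_is(response, 200)


def check_error_response(response):
    # Check both the HTTP status and the presence of Galaxy-specific error fields.
    api_asserts.assert_status_code_is(response, 400)
    api_asserts.assert_has_keys(response.json(), "err_msg", "err_code")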
Continuous Integration¶
The API tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is fairly stable; transient failures unrelated to the
pull request being tested are uncommon. The GitHub Actions workflow
definition for these tests is located in .github/workflows/api.yaml
below Galaxy’s root.
Integration Tests¶
These tests are located in test/integration. These tests have access
to all the same API utilities as API tests described above, but can access
Galaxy internals and may define hooks for configuring Galaxy in certain ways
during startup.
Galaxy integration tests are in some ways more powerful than API tests - they can both control Galaxy’s configuration and access Galaxy’s internals. However, this power comes at a real cost: each test case must spin up its own Galaxy server (a relatively expensive operation) and the tests cannot be executed against external Galaxy servers (it wouldn’t make sense to, given these custom hooks during configuration of the server). Because API tests can be run against external servers, we bundle them up for use in deployment testing of production setups. Galaxy API tests are therefore generally preferred, and integration tests should be implemented only when an API test is not possible or practical.
Integration tests can make use of dataset populators and API assertions as described above in the API test documentation. It is worth reviewing that documentation before digging into integration examples.
An Example: test/integration/test_quota.py¶
This is a really simple example that does some testing with the Quotas API of Galaxy. This API is off by default, so it must be enabled for the test. The top of the test file demonstrates both how to create an integration test and how to modify Galaxy’s configuration for the test.
#...
from galaxy_test.driver import integration_util


class QuotaIntegrationTestCase(integration_util.IntegrationTestCase):
    require_admin_user = True

    @classmethod
    def handle_galaxy_config_kwds(cls, config):
        config["enable_quotas"] = True
#...
Integration test cases extend the IntegrationTestCase class defined in
the galaxy_test.driver.integration_util module (located in
lib/galaxy_test/driver/integration_util.py below Galaxy’s root).
The require_admin_user option above tells the test framework that the
default user configured for API interactions must be an admin user.
This example overrides Galaxy’s configuration using the
handle_galaxy_config_kwds class method. This method is called before
a Galaxy server is created and is passed the testing server’s default
configuration as the config argument. This config object is effectively
the Python representation of the Galaxy configuration file (galaxy.yml)
used to start the server. Almost anything you can do in galaxy.yml, you
can modify the Galaxy server to do using the same keys. Examples of
various ways integration tests have modified this dictionary include
setting up custom object stores (e.g. objectstore/test_mixed_store_by.py),
setting up non-local job runners (e.g. test_cli_runners.py), setting
up custom job destinations (e.g. test_job_recovery.py), and configuring
Galaxy for tool shed operations (e.g. test_repository_operations.py).
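As a further hedged illustration of the pattern (the configuration keys here are just examples of galaxy.yml options and are not taken from any particular test file):
from galaxy_test.driver import integration_util


class ExampleConfigIntegrationTestCase(integration_util.IntegrationTestCase):

    @classmethod
    def handle_galaxy_config_kwds(cls, config):
        # Any galaxy.yml-style key can be overridden here before the server starts.
        config["allow_user_deletion"] = True
        config["cleanup_job"] = "never"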
There may be cases where an integration test is used not to allow
some custom configuration of Galaxy but to access Galaxy’s internals.
Integration tests have direct access to Galaxy’s app object via
self._app and, as a result, direct access to the database. An example of
such a test is test_workflow_refactoring.py. This test required accessing
the way workflow steps are stored in the database and not just how they are
serialized by the API, so it tests database models directly. Generally
though, this type of usage should be avoided.
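A hedged sketch of this kind of direct access (attribute names on the application object are recalled from memory and may differ):
from galaxy_test.driver import integration_util


class ExampleAppAccessTestCase(integration_util.IntegrationTestCase):

    def test_internals_are_reachable(self):
        # self._app is the running Galaxy application; its configuration and
        # model layer (and through it the database) can be inspected directly.
        assert self._app.config is not None
        assert self._app.model is not None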
Continuous Integration¶
The Integration tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is moderately prone to transient failures that are
unrelated to the pull request being tested; if it fails on a pull request
whose changes seem unrelated, ping the Galaxy committers on the pull request
and request a re-run. The GitHub Actions workflow definition for these tests
is located in .github/workflows/integration.yaml below Galaxy’s root.
Selenium Tests¶
These are full stack tests meant to test the Galaxy UI with real
browsers and are located in lib/galaxy_test/selenium.
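A hedged sketch of the shape of such a test; the framework module, base class, decorator, and navigation helpers are recalled from the suite’s conventions and may not match the current code exactly:
from .framework import (  # assumed helpers defined in lib/galaxy_test/selenium/framework.py
    selenium_test,
    SeleniumTestCase,
)


class ExampleRegistrationTestCase(SeleniumTestCase):

    @selenium_test
    def test_register_new_user(self):
        # Drive a real browser: load the Galaxy home page and register a user through the UI.
        self.home()
        self.register()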
Continuous Integration¶
The Selenium tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is moderately prone to transient failures that are
unrelated to the pull request being tested; if it fails on a pull request
whose changes seem unrelated, ping the Galaxy committers on the pull request
and request a re-run. The GitHub Actions workflow definition for these tests
is located in .github/workflows/selenium.yaml below Galaxy’s root.
Selenium Integration Tests¶
These tests are located in test/integration_selenium and simply
combine the capabilities of Selenium tests and integration tests
(both described above) into test cases that can do both. There
are no new capabilities or gotchas of this test suite beyond
what is described above in these sections.
A quintessential example is test/integration_selenium/test_upload_ftp.py.
Testing the FTP capabilities of the user interface requires both
Selenium to drive the test case and a custom Galaxy configuration
that mocks out an FTP directory and points the Galaxy server at it
with various options (ftp_upload_dir, ftp_upload_site).
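A hedged skeleton of such a test; the base class name and import path are assumptions, and the configuration values are only placeholders:
from galaxy_test.selenium.framework import SeleniumIntegrationTestCase  # assumed base class


class ExampleFtpSeleniumIntegrationTestCase(SeleniumIntegrationTestCase):

    @classmethod
    def handle_galaxy_config_kwds(cls, config):
        # Integration-style hook: point Galaxy at a stand-in FTP upload area.
        config["ftp_upload_dir"] = "/tmp/example-ftp-upload"
        config["ftp_upload_site"] = "ftp://ftp.example.org"

    def test_ftp_directory_configured(self):
        # Selenium-style body: drive the real browser against the specially configured server.
        self.home()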
Continuous Integration¶
The Selenium integration tests are run against each pull request to Galaxy using
GitHub Actions. If any of these tests fail, the pull request will be marked
red. This test suite is moderately prone to transient failures that are
unrelated to the pull request being tested; if it fails on a pull request
whose changes seem unrelated, ping the Galaxy committers on the pull request
and request a re-run. The GitHub Actions workflow definition for these tests
is located in .github/workflows/selenium_integration.yaml below Galaxy’s root.
Running Python Tests¶
The best information about how to run Galaxy’s Python tests can be
found in the help output of run_tests.sh --help.
'run_tests.sh -id bbb' for testing one tool with id 'bbb' ('bbb' is the tool id)
'run_tests.sh -sid ccc' for testing one section with sid 'ccc' ('ccc' is the string after 'section::')
'run_tests.sh -list' for listing all the tool ids
'run_tests.sh -api (test_path)' for running all the test scripts in the ./lib/galaxy_test/api directory, where test_path can be a pytest selector
'run_tests.sh -integration (test_path)' for running all integration test scripts in the ./test/integration directory, where test_path can be a pytest selector
'run_tests.sh -toolshed (test_path)' for running all the test scripts in the ./lib/tool_shed/test directory
'run_tests.sh -installed' for running tests of Tool Shed installed tools
'run_tests.sh -main' for running tests of tools shipped with Galaxy
'run_tests.sh -framework' for running through example tool tests testing framework features in test/functional/tools
'run_tests.sh -framework -id toolid' for testing one framework tool (in test/functional/tools/) with id 'toolid'
'run_tests.sh -data_managers -id data_manager_id' for testing one Data Manager with id 'data_manager_id'
'run_tests.sh -unit' for running all unit tests (doctests and tests in test/unit)
'run_tests.sh -unit (test_selector)' for running unit tests on specified test path (using pytest selector syntax)
'run_tests.sh -selenium' for running all selenium web tests (in lib/galaxy_test/selenium)
'run_tests.sh -selenium (test_selector)' for running specified selenium web tests (using pytest selector syntax)
This wrapper script largely serves as a point of documentation and convenience for
running Galaxy's Python tests. Most Python tests shipped with Galaxy can be run with
pytest directly. Galaxy's client unit tests can be run with make client-test
or yarn directly as documented in detail in client/README.md.
The main test types are as follows:
- API: These tests are located in lib/galaxy_test/api and test various aspects of the Galaxy
API and test general backend aspects of Galaxy using the API.
- Integration: These tests are located in test/integration and test special
configurations of Galaxy. All API tests assume a particular Galaxy configuration
defined by test/base/driver_util.py and integration tests can be used to
launch and test Galaxy in other configurations.
- Framework: These tests are all Galaxy tool tests and can be found in
test/functional/tools. These are for the most part meant to test and
demonstrate features of the tool evaluation environment and of Galaxy tool XML
files.
- Unit: These are Python unit tests either defined as doctests or inside of
test/unit. These should generally not require a Galaxy instance and should
quickly test just a component or a few components of Galaxy's backend code.
- Selenium: These are full stack tests meant to test the Galaxy UI with real
browsers and are located in lib/galaxy_test/selenium.
- ToolShed: These are web tests that use the older Python web testing
framework twill to test ToolShed related functionality. These are
located in lib/tool_shed/test.
Python testing is mostly done via pytest. Specific tests can be selected
using the pytest selector syntax described at https://docs.pytest.org/en/latest/usage.html.
The spots where these selectors can be used are described in the above usage
documentation as test_path. A few examples are shown below.
Run all API tests:
./run_tests.sh -api
The same tests as above can be run using pytest directly as follows:
pytest lib/galaxy_test/api
However, when using pytest directly, output options defined in this
file aren't respected and a new Galaxy instance will be created for each
TestCase class (this script optimizes this so all tests can share a Galaxy
instance).
Run a full class of API tests:
./run_tests.sh -api lib/galaxy_test/api/test_tools.py::ToolsTestCase
Run a specific API test:
./run_tests.sh -api lib/galaxy_test/api/test_tools.py::ToolsTestCase::test_map_over_with_output_format_actions
Run all selenium tests (Under Linux using Docker):
# Start selenium chrome Docker container
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.0.1-aluminum
GALAXY_TEST_SELENIUM_REMOTE=1 ./run_tests.sh -selenium
Run a specific selenium test (under Linux or Mac OS X after installing geckodriver or chromedriver):
./run_tests.sh -selenium lib/galaxy_test/selenium/test_registration.py::RegistrationTestCase::test_reregister_username_fails
Run a selenium test against a running server while watching client (fastest iterating on client tests):
./run.sh & # run Galaxy on 8080
make client-watch & # watch for client changes
export GALAXY_TEST_EXTERNAL=http://localhost:8080/ # Target tests at server.
. .venv/bin/activate # source the virtualenv so can skip run_tests.sh.
pytest lib/galaxy_test/selenium/test_workflow_editor.py::WorkflowEditorTestCase::test_data_input
Note About Selenium Tests:
If using a local selenium driver such as a Chrome or Firefox based one,
either chromedriver or geckodriver needs to be installed and placed on
the PATH.
More information on geckodriver can be found at
https://github.com/mozilla/geckodriver and more information on
chromedriver can be found at
https://sites.google.com/a/chromium.org/chromedriver/.
By default Galaxy will check the PATH for these and pick
whichever it finds. This can be overridden by setting
GALAXY_TEST_SELENIUM_BROWSER to either FIREFOX, CHROME, or something
more esoteric (including OPERA and PHANTOMJS).
If PyVirtualDisplay is installed, Galaxy will attempt to run this
browser in a headless mode. This can, however, be disabled by setting
GALAXY_TEST_SELENIUM_HEADLESS to 0.
Selenium can also be set up as a remote service - to target a service set
GALAXY_TEST_SELENIUM_REMOTE to 1. The target service may be configured
with GALAXY_TEST_SELENIUM_REMOTE_PORT and
GALAXY_TEST_SELENIUM_REMOTE_HOST. By default Galaxy will assume the
remote service being targeted is CHROME - but this can be overridden
with GALAXY_TEST_SELENIUM_BROWSER.
In this remote mode, please ensure that GALAXY_TEST_HOST is set to a
host that is accessible from the Selenium host. By default under Linux
if GALAXY_TEST_SELENIUM_REMOTE is set, Galaxy will set this to be the IP
address Docker exposes localhost on to its child containers. This trick
doesn't work on Mac OS X and so GALAXY_TEST_HOST will need to be crafted
carefully ahead of time.
For Selenium test cases a stack trace is usually insufficient to diagnose
problems. For this reason, GALAXY_TEST_ERRORS_DIRECTORY is populated with
a new directory of information for each failing test case. This information
includes a screenshot, a stack trace, and the DOM of the currently rendered
Galaxy instance. The new directories are created with names that include
information about the failed test method name and the timestamp. By default,
GALAXY_TEST_ERRORS_DIRECTORY will be set to database/errors.
The Selenium tests seem to be subject to transient failures at a higher
rate than the rest of the tests in Galaxy. Though this is unfortunate,
they have more moving pieces so this is perhaps not surprising. One can
set GALAXY_TEST_SELENIUM_RETRIES to a number greater than 0 to
automatically retry every failed test case the specified number of times.
External Tests:
A small subset of tests can be run against an existing Galaxy
instance. The external Galaxy instance URL can be configured with
--external_url. If this is set, either --external_master_key or
--external_user_key must be set as well - more tests can be executed
with --external_master_key than with a user key.
Extra options:
--verbose_errors Force some tests to produce more verbose error reporting.
--no_cleanup Do not delete temp files for Python functional tests
(-toolshed, -framework, etc...)
--debug On python test error or failure invoke a pdb shell for
interactive debugging of the test
--report_file Path of HTML report to produce (for Python Galaxy
functional tests). If not given, a default filename will
be used, and reported on stderr at the end of the run.
--xunit_report_file Path of XUnit report to produce (for Python Galaxy
functional tests).
--skip-venv Do not create .venv (passes this flag to
common_startup.sh)
--dockerize Run tests in a pre-configured Docker container (must be
first argument if present).
--db <type> For use with --dockerize, run tests using partially
migrated 'postgres', 'mysql', or 'sqlite' databases.
--external_url External URL to use for Galaxy testing (only certain
tests).
--external_master_key Master API key used to configure external tests.
--external_user_key User API key used for external tests - not required if
external_master_key is specified.
--skip_flakey_fails Skip flakey tests on error (sets
GALAXY_TEST_SKIP_FLAKEY_TESTS_ON_ERROR=1).
Environment Variables:
In addition to the above command-line options, many environment variables
can be used to control the Galaxy functional testing process. Command-line
options above (like --external_url) will set environment variables - in such
cases the command-line argument takes precedence over environment variables set
at the time of running this script.
Functional Test Environment Variables
GALAXY_TEST_DBURI Database connection string used for functional
test database for Galaxy.
GALAXY_TEST_INSTALL_DBURI Database connection string used for functional
test database for Galaxy's install framework.
GALAXY_TEST_INSTALL_DB_MERGED Set to use same database for Galaxy and install
framework, this defaults to True for Galaxy
tests and False for shed tests.
GALAXY_TEST_DB_TEMPLATE If GALAXY_TEST_DBURI is unset, this URL can be
retrieved and should be an sqlite database that
will be upgraded and tested against.
GALAXY_TEST_TMP_DIR Temp directory used for files required by
Galaxy server setup for Galaxy functional tests.
GALAXY_TEST_SAVE Location to save certain test files (such as
tool outputs).
GALAXY_TEST_EXTERNAL Target an external Galaxy as part of testing.
GALAXY_TEST_JOB_CONFIG_FILE Job config file to use for the test.
GALAXY_CONFIG_MASTER_API_KEY Master or admin API key to use as part of
testing with GALAXY_TEST_EXTERNAL.
GALAXY_TEST_USER_API_KEY User API key to use as part of testing with
GALAXY_TEST_EXTERNAL.
GALAXY_TEST_VERBOSE_ERRORS Enable more verbose errors during API tests.
GALAXY_TEST_UPLOAD_ASYNC Upload tool test inputs asynchronously (may
overwhelm sqlite database).
GALAXY_TEST_RAW_DIFF Don't slice up tool test diffs to keep output
manageable - print all output. (default off)
GALAXY_TEST_DEFAULT_WAIT Max time allowed for a tool test before Galaxy
gives up (default 86400) - tools may define a
maxseconds attribute to extend this.
GALAXY_TEST_TOOL_DEPENDENCY_DIR tool dependency dir to use for Galaxy during
functional tests.
GALAXY_TEST_FILE_DIR Test data sources (default to
test-data,https://github.com/galaxyproject/galaxy-test-data.git)
GALAXY_TEST_DIRECTORY /test
GALAXY_TEST_TOOL_DATA_PATH Set to override tool data path during tool
shed tests.
GALAXY_TEST_FETCH_DATA Fetch remote test data to
GALAXY_TEST_DATA_REPO_CACHE as part of tool
tests if it is not available locally (default
to True). Requires git to be available on the
command-line.
GALAXY_TEST_DATA_REPO_CACHE Where to cache remote test data to (default to
test-data-cache).
GALAXY_TEST_SKIP_FLAKEY_TESTS_ON_ERROR
Skip tests annotated with @flakey on test errors.
HTTP_ACCEPT_LANGUAGE Defaults to 'en'
GALAXY_TEST_NO_CLEANUP Do not cleanup main test directory after tests,
the deprecated option TOOL_SHED_TEST_NO_CLEANUP
does the same thing.
GALAXY_TEST_HOST Host to use for Galaxy server setup for
testing.
GALAXY_TEST_PORT Port to use for Galaxy server setup for
testing.
GALAXY_TEST_TOOL_PATH Path defaulting to 'tools'.
GALAXY_TEST_SHED_TOOL_CONF Shed toolbox conf (defaults to
config/shed_tool_conf.xml) used when testing
tools installed with -installed.
GALAXY_TEST_HISTORY_ID Some tests can target existing history ids; this option
is fairly limited and not compatible with parallel testing,
so it should be limited to debugging one-off tests.
TOOL_SHED_TEST_HOST Host to use for shed server setup for testing.
TOOL_SHED_TEST_PORT Port to use for shed server setup for testing.
TOOL_SHED_TEST_FILE_DIR Defaults to lib/tool_shed/test/test_data.
TOOL_SHED_TEST_TMP_DIR Defaults to random /tmp directory - place for
tool shed test server files to be placed.
TOOL_SHED_TEST_OMIT_GALAXY Do not launch a Galaxy server for tool shed
testing.
Unit Test Environment Variables
GALAXY_TEST_INCLUDE_SLOW - Used in unit tests to trigger slower tests that
aren't included by default with --unit/-u.