galaxy_test.base package

Submodules

galaxy_test.base.api module

class galaxy_test.base.api.UsesApiTestCaseMixin[source]

Bases: object

tearDown()[source]
class galaxy_test.base.api.ApiTestInteractor(test_case, api_key=None)[source]

Bases: galaxy_test.base.interactor.TestCaseGalaxyInteractor

Specialized variant of the API interactor (originally developed for tool functional tests) for testing the API generally.

__init__(test_case, api_key=None)[source]

Initialize self. See help(type(self)) for accurate signature.

get(*args, **kwds)[source]
post(*args, **kwds)[source]
delete(*args, **kwds)[source]
patch(*args, **kwds)[source]
put(*args, **kwds)[source]
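
A minimal sketch of using the interactor from a functional test, assuming test_case is an existing test case object carrying the Galaxy URL and credentials the interactor expects, and that paths are interpreted relative to the Galaxy API root:

from galaxy_test.base.api import ApiTestInteractor

def list_histories(test_case):
    interactor = ApiTestInteractor(test_case)
    # get/post/delete/patch/put mirror HTTP verbs against the Galaxy API.
    response = interactor.get("histories")
    response.raise_for_status()
    return response.json()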

galaxy_test.base.api_asserts module

Utility methods for making assertions about Galaxy API responses, etc…

galaxy_test.base.api_asserts.assert_status_code_is(response: requests.models.Response, expected_status_code: int)[source]

Assert that the supplied response has the expected status code.

galaxy_test.base.api_asserts.assert_status_code_is_ok(response: requests.models.Response)[source]

Assert that the supplied response is okay.

The simpler alternative response.raise_for_status() may generally be preferable.

galaxy_test.base.api_asserts.assert_has_keys(response: dict, *keys: str)[source]

Assert that the supplied response (dict) has the supplied keys.

galaxy_test.base.api_asserts.assert_not_has_keys(response: dict, *keys: str)[source]

Assert that the supplied response (dict) does not have the supplied keys.

galaxy_test.base.api_asserts.assert_error_code_is(response: Union[requests.models.Response, dict], error_code: int)[source]

Assert that the supplied response has the supplied Galaxy error code.

Galaxy error codes can be imported from galaxy.exceptions.error_codes to test against.

from galaxy.exceptions import error_codes
assert_error_code_is(response, error_codes.USER_REQUEST_MISSING_PARAMETER)
galaxy_test.base.api_asserts.assert_object_id_error(response: requests.models.Response)[source]
galaxy_test.base.api_asserts.assert_error_message_contains(response: Union[requests.models.Response, dict], expected_contains: str)[source]
galaxy_test.base.api_asserts.assert_has_key(response: dict, *keys: str)

Assert that the supplied response (dict) has the supplied keys.
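
A minimal sketch combining these helpers to validate a typical JSON response; the checked keys are illustrative only:

from galaxy_test.base import api_asserts

def check_show_history_response(response):
    # Fail with a descriptive message rather than a bare status mismatch or KeyError.
    api_asserts.assert_status_code_is(response, 200)
    history = response.json()
    api_asserts.assert_has_keys(history, "id", "name", "state")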

galaxy_test.base.api_util module

galaxy_test.base.api_util.get_master_api_key()[source]

Test master API key to use for functional tests. This key should be configured as a master API key and should be able to create additional users and keys.

galaxy_test.base.api_util.get_user_api_key()[source]

Test user API key to use for functional tests. If set, it should drive API-based testing; if not set, the master API key should be used to create a new user and API key for the tests.
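
A minimal sketch of combining the two helpers (whether get_user_api_key returns None when no user key is configured is an assumption here):

from galaxy_test.base import api_util

user_key = api_util.get_user_api_key()
master_key = api_util.get_master_api_key()
# Prefer a preconfigured user key; otherwise fall back to the master key,
# which can create a fresh user and API key for the tests.
api_key = user_key or master_key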

galaxy_test.base.constants module

Just constants useful for testing across test types.

galaxy_test.base.env module

Base utilities for working with Galaxy test environments.

galaxy_test.base.env.setup_keep_outdir()[source]
galaxy_test.base.env.target_url_parts()[source]
galaxy_test.base.env.get_ip_address(ifname)[source]
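
A minimal sketch of how a test harness might consume this module; that target_url_parts() returns the host, port, and full URL of the Galaxy server under test is an assumption here:

from galaxy_test.base import env

host, port, url = env.target_url_parts()
print("Running functional tests against %s (host=%s, port=%s)" % (url, host, port))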

galaxy_test.base.instrument module

Utilities to help instrument tool tests.

Includes a structured data nose plugin that allows storing arbitrary structured data on a per-test-case basis. It is used by tool tests to store inputs, output problems, job tests, etc., but could easily be used by other test types in a different way.

galaxy_test.base.instrument.register_job_data(data)[source]
galaxy_test.base.instrument.fetch_job_data()[source]
class galaxy_test.base.instrument.StructuredTestDataPlugin[source]

Bases: nose.plugins.base.Plugin

name = 'structureddata'
options(parser, env)[source]

Register commandline options.

Implement this method for normal options behavior with protection from OptionConflictErrors. If you override this method and want the default --with-$name option to be registered, be sure to call super().

configure(options, conf)[source]

Configure the plugin and system, based on selected options.

The base plugin class sets the plugin to enabled if the enable option for the plugin (self.enableOpt) is true.

finalize(result)[source]
addError(test, *args, **kwds)
addFailure(test, *args, **kwds)
addSuccess(test, *args, **kwds)
report(stream)[source]
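
A minimal sketch of recording per-test data for the plugin to pick up; the shape of the dictionary is up to the caller and the keys below are illustrative only:

from galaxy_test.base import instrument

instrument.register_job_data({
    "tool_id": "cat1",
    "status": "success",
    "inputs": {"input1": "test dataset 1"},
})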

galaxy_test.base.interactor module

class galaxy_test.base.interactor.TestCaseGalaxyInteractor(functional_test_case, test_user=None, api_key=None)[source]

Bases: galaxy.tool_util.verify.interactor.GalaxyInteractorApi

__init__(functional_test_case, test_user=None, api_key=None)[source]

Initialize self. See help(type(self)) for accurate signature.

galaxy_test.base.nose_util module

Utilities for dealing with nose.

There was some duplication between the Galaxy, Tool Shed, and Install/Test code; this module tries to reduce that.

galaxy_test.base.nose_util.run(test_config, plugins=None)[source]

galaxy_test.base.populators module

galaxy_test.base.populators.flakey(method)[source]
galaxy_test.base.populators.skip_without_tool(tool_id)[source]

Decorate an API test method as requiring a specific tool.

Have test framework skip the test case if the tool is unavailable.

galaxy_test.base.populators.skip_without_datatype(extension)[source]

Decorate an API test method as requiring a specific datatype.

Have test framework skip the test case if the datatype is unavailable.

galaxy_test.base.populators.is_site_up(url)[source]
galaxy_test.base.populators.skip_if_site_down(url)[source]
galaxy_test.base.populators.skip_if_toolshed_down(method)
galaxy_test.base.populators.skip_if_github_down(method)
galaxy_test.base.populators.summarize_instance_history_on_error(method)[source]
galaxy_test.base.populators.uses_test_history(**test_history_kwd)[source]

require_new and cancel_executions can be overridden via keyword arguments to the decorator.
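
A minimal sketch combining the decorators above on an API test method, assuming uses_test_history injects the created history as a history_id keyword argument:

from galaxy_test.base.populators import skip_without_tool, uses_test_history

class ToolRunTests:  # in practice this would derive from an API test case class

    @skip_without_tool("cat1")
    @uses_test_history(require_new=True)
    def test_cat1_runs(self, history_id):
        # Skipped entirely if the cat1 tool is unavailable on the target Galaxy;
        # otherwise history_id refers to a history managed by the decorator.
        ...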

class galaxy_test.base.populators.TestsDatasets[source]

Bases: object

class galaxy_test.base.populators.BaseDatasetPopulator[source]

Bases: object

Abstract description of API operations optimized for testing Galaxy - implementations must implement _get, _post and _delete.

new_dataset(history_id, content=None, wait=False, **kwds)[source]
new_dataset_request(history_id, content=None, wait=False, **kwds)[source]
fetch(payload, assert_ok=True, timeout=60, wait=None)[source]
wait_for_tool_run(history_id, run_response, timeout=60, assert_ok=True)[source]
check_run(run_response)[source]
wait_for_history(history_id, assert_ok=False, timeout=60)[source]
wait_for_history_jobs(history_id, assert_ok=False, timeout=60)[source]
wait_for_job(job_id, assert_ok=False, timeout=60)[source]
get_job_details(job_id, full=False)[source]
cancel_history_jobs(history_id, wait=True)[source]
history_jobs(history_id)[source]
active_history_jobs(history_id)[source]
cancel_job(job_id)[source]
delete_history(history_id)[source]
delete_dataset(history_id, content_id)[source]
create_tool_from_path(tool_path)[source]
create_tool(representation, tool_directory=None)[source]
list_dynamic_tools()[source]
show_dynamic_tool(uuid)[source]
deactivate_dynamic_tool(uuid)[source]
test_history(**kwds)[source]
new_history(**kwds)[source]
upload_payload(history_id, content=None, **kwds)[source]
get_remote_files(target='ftp')[source]
run_tool_payload(tool_id, inputs, history_id, **kwds)[source]
run_tool(tool_id, inputs, history_id, assert_ok=True, **kwds)[source]
tools_post(payload, url='tools')[source]
get_history_dataset_content(history_id, wait=True, filename=None, type='text', raw=False, **kwds)[source]
get_history_dataset_details(history_id, **kwds)[source]
get_history_dataset_details_raw(history_id, dataset_id)[source]
get_history_dataset_extra_files(history_id, **kwds)[source]
get_history_collection_details(history_id, **kwds)[source]
run_collection_creates_list(history_id, hdca_id)[source]
run_exit_code_from_file(history_id, hdca_id)[source]
ds_entry(history_content)[source]
dataset_storage_info(dataset_id)[source]
get_roles()[source]
user_email()[source]
user_id()[source]
user_private_role_id()[source]
create_role(user_ids, description=None)[source]
create_quota(quota_payload)[source]
get_quotas()[source]
make_private(history_id, dataset_id)[source]
validate_dataset(history_id, dataset_id)[source]
validate_dataset_and_wait(history_id, dataset_id)[source]
setup_history_for_export_testing(history_name)[source]
prepare_export(history_id, data)[source]
export_url(history_id, data, check_download=True)[source]
get_export_url(export_url)[source]
import_history(import_data)[source]
import_history_and_wait_for_name(import_data, history_name)[source]
rename_history(history_id, new_name)[source]
get_histories()[source]
wait_on_history_length(history_id, wait_on_history_length)[source]
history_length(history_id)[source]
reimport_history(history_id, history_name, wait_on_history_length, export_kwds, url, api_key)[source]
get_random_name(prefix=None, suffix=None, len=10)[source]
class galaxy_test.base.populators.DatasetPopulator(galaxy_interactor)[source]

Bases: galaxy_test.base.populators.BaseDatasetPopulator

__init__(galaxy_interactor)[source]

Initialize self. See help(type(self)) for accurate signature.

wait_for_dataset(history_id, dataset_id, assert_ok=False, timeout=60)[source]
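
A minimal sketch of a typical populator round trip, assuming hid=1 identifies the freshly uploaded dataset in the new history:

from galaxy_test.base.populators import DatasetPopulator

def upload_and_read(galaxy_interactor):
    dataset_populator = DatasetPopulator(galaxy_interactor)
    history_id = dataset_populator.new_history()
    dataset_populator.new_dataset(history_id, content="1\t2\t3", wait=True)
    return dataset_populator.get_history_dataset_content(history_id, hid=1)
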
class galaxy_test.base.populators.BaseWorkflowPopulator[source]

Bases: object

load_workflow(name, content='{\n "a_galaxy_workflow": "true", \n "annotation": "simple workflow",\n "format-version": "0.1", \n "name": "TestWorkflow1", \n "steps": {\n "0": {\n "annotation": "input1 description", \n "id": 0, \n "input_connections": {}, \n "inputs": [\n {\n "description": "input1 description", \n "name": "WorkflowInput1"\n }\n ], \n "name": "Input dataset", \n "outputs": [], \n "position": {\n "left": 199.55555772781372, \n "top": 200.66666460037231\n }, \n "tool_errors": null, \n "tool_id": null, \n "tool_state": "{\\"name\\": \\"WorkflowInput1\\"}", \n "tool_version": null, \n "type": "data_input", \n "user_outputs": []\n }, \n "1": {\n "annotation": "", \n "id": 1, \n "input_connections": {}, \n "inputs": [\n {\n "description": "", \n "name": "WorkflowInput2"\n }\n ], \n "name": "Input dataset", \n "outputs": [], \n "position": {\n "left": 206.22221422195435, \n "top": 327.33335161209106\n }, \n "tool_errors": null, \n "tool_id": null, \n "tool_state": "{\\"name\\": \\"WorkflowInput2\\"}", \n "tool_version": null, \n "type": "data_input", \n "user_outputs": []\n }, \n "2": {\n "annotation": "", \n "id": 2, \n "input_connections": {\n "input1": {\n "id": 0, \n "output_name": "output"\n }, \n "queries_0|input2": {\n "id": 1, \n "output_name": "output"\n }\n }, \n "inputs": [], \n "name": "Concatenate datasets", \n "outputs": [\n {\n "name": "out_file1", \n "type": "input"\n }\n ], \n "position": {\n "left": 419.33335876464844, \n "top": 200.44446563720703\n }, \n "post_job_actions": {}, \n "tool_errors": null, \n "tool_id": "cat1", \n "tool_state": "{\\"__page__\\": 0, \\"__rerun_remap_job_id__\\": null, \\"input1\\": \\"null\\", \\"queries\\": \\"[{\\\\\\"input2\\\\\\": null, \\\\\\"__index__\\\\\\": 0}]\\"}", \n "tool_version": "1.0.0", \n "type": "tool", \n "user_outputs": []\n }\n }\n}\n', add_pja=False)[source]
load_random_x2_workflow(name)[source]
load_workflow_from_resource(name, filename=None)[source]
simple_workflow(name, **create_kwds)[source]
import_workflow_from_path(from_path)[source]
create_workflow(workflow, **create_kwds)[source]
create_workflow_response(workflow, **create_kwds)[source]
upload_yaml_workflow(has_yaml, **kwds)[source]
wait_for_invocation(workflow_id, invocation_id, timeout=60, assert_ok=True)[source]
history_invocations(history_id)[source]
wait_for_history_workflows(history_id, assert_ok=True, timeout=60, expected_invocation_count=None)[source]
wait_for_workflow(workflow_id, invocation_id, history_id, assert_ok=True, timeout=60)[source]

Wait for a workflow invocation to be fully scheduled and then for the history to complete.

get_invocation(invocation_id)[source]
get_biocompute_object(invocation_id)[source]
validate_biocompute_object(bco, expected_schema_version='https://w3id.org/ieee/ieee-2791-schema/2791object.json')[source]
invoke_workflow_raw(workflow_id, request)[source]
invoke_workflow(history_id, workflow_id, inputs=None, request=None, assert_ok=True)[source]
workflow_report_json(workflow_id, invocation_id)[source]
download_workflow(workflow_id, style=None, history_id=None)[source]
update_workflow(workflow_id, workflow_object)[source]
refactor_workflow(workflow_id, actions, dry_run=None, style=None)[source]
export_for_update(workflow_id)[source]
run_workflow(has_workflow, test_data=None, history_id=None, wait=True, source_type=None, jobs_descriptions=None, expected_response=200, assert_ok=True, client_convert=None, round_trip_format_conversion=False, raw_yaml=False)[source]

High-level wrapper around the workflow API, etc., to invoke Format 2 workflows (see the sketch after WorkflowPopulator below).

dump_workflow(workflow_id, style=None)[source]
class galaxy_test.base.populators.RunJobsSummary(history_id, workflow_id, invocation_id, inputs, jobs, invocation, workflow_request)

Bases: tuple

property history_id

Alias for field number 0

property inputs

Alias for field number 3

property invocation

Alias for field number 5

property invocation_id

Alias for field number 2

property jobs

Alias for field number 4

property workflow_id

Alias for field number 1

property workflow_request

Alias for field number 6

class galaxy_test.base.populators.WorkflowPopulator(galaxy_interactor)[source]

Bases: galaxy_test.base.populators.BaseWorkflowPopulator, gxformat2.interface.ImporterGalaxyInterface

__init__(galaxy_interactor)[source]

Initialize self. See help(type(self)) for accurate signature.

import_workflow(workflow, **kwds)[source]

Import a workflow via POST /api/workflows or comparable interface into Galaxy.

import_tool(tool)[source]

Import a tool into Galaxy via the API or a comparable interface.
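
A minimal sketch of invoking a Format 2 workflow through run_workflow, assuming test_data may map input labels to raw dataset content and that the call returns the RunJobsSummary described above:

from galaxy_test.base.populators import DatasetPopulator, WorkflowPopulator

WORKFLOW_YAML = """
class: GalaxyWorkflow
inputs:
  input1: data
steps:
  first_cat:
    tool_id: cat1
    in:
      input1: input1
"""

def run_simple_workflow(galaxy_interactor):
    dataset_populator = DatasetPopulator(galaxy_interactor)
    workflow_populator = WorkflowPopulator(galaxy_interactor)
    history_id = dataset_populator.new_history()
    summary = workflow_populator.run_workflow(
        WORKFLOW_YAML,
        test_data={"input1": "hello world"},
        history_id=history_id,
    )
    return summary.invocation_id, summary.jobs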

class galaxy_test.base.populators.LibraryPopulator(galaxy_interactor)[source]

Bases: object

__init__(galaxy_interactor)[source]

Initialize self. See help(type(self)) for accurate signature.

get_libraries()[source]
new_private_library(name)[source]
new_library(name)[source]
set_permissions(library_id, role_id=None)[source]
user_email()[source]
user_private_role_id()[source]
create_dataset_request(library, **kwds)[source]
new_library_dataset(name, **create_dataset_kwds)[source]
wait_on_library_dataset(library, dataset)[source]
raw_library_contents_create(library_id, payload, files=None)[source]
show_ldda(library_id, library_dataset_id)[source]
new_library_dataset_in_private_library(library_name='private_dataset', wait=True)[source]
get_library_contents_with_path(library_id, path)[source]
setup_fetch_to_folder(test_name)[source]
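
A minimal sketch of creating a private library with a single dataset, assuming new_library_dataset_in_private_library returns the library and its dataset:

from galaxy_test.base.populators import LibraryPopulator

def make_private_library(galaxy_interactor):
    library_populator = LibraryPopulator(galaxy_interactor)
    library, dataset = library_populator.new_library_dataset_in_private_library(
        library_name="private_dataset", wait=True
    )
    return library, dataset
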
class galaxy_test.base.populators.BaseDatasetCollectionPopulator[source]

Bases: object

create_list_from_pairs(history_id, pairs, name='Dataset Collection from pairs')[source]
nested_collection_identifiers(history_id, collection_type)[source]
create_nested_collection(history_id, collection_type, name=None, collection=None, element_identifiers=None)[source]

Create a nested collection, either from collection or using collection_type.

create_list_of_pairs_in_history(history_id, **kwds)[source]
create_list_of_list_in_history(history_id, **kwds)[source]
create_pair_in_history(history_id, **kwds)[source]
create_list_in_history(history_id, **kwds)[source]
upload_collection(history_id, collection_type, elements, **kwds)[source]
create_list_payload(history_id, **kwds)[source]
create_pair_payload(history_id, **kwds)[source]
wait_for_fetched_collection(fetch_response)[source]
pair_identifiers(history_id, contents=None)[source]
list_identifiers(history_id, contents=None)[source]
wait_for_dataset_collection(create_payload, assert_ok=False, timeout=60)[source]
class galaxy_test.base.populators.DatasetCollectionPopulator(galaxy_interactor)[source]

Bases: galaxy_test.base.populators.BaseDatasetCollectionPopulator

__init__(galaxy_interactor)[source]

Initialize self. See help(type(self)) for accurate signature.
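
A minimal sketch of building a paired collection in a fresh history; that the contents keyword supplies raw element content is an assumption about create_pair_in_history:

from galaxy_test.base.populators import DatasetCollectionPopulator, DatasetPopulator

def make_pair(galaxy_interactor):
    dataset_populator = DatasetPopulator(galaxy_interactor)
    collection_populator = DatasetCollectionPopulator(galaxy_interactor)
    history_id = dataset_populator.new_history()
    create_response = collection_populator.create_pair_in_history(
        history_id, contents=["forward content", "reverse content"]
    )
    return create_response.json()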

galaxy_test.base.populators.load_data_dict(history_id, test_data, dataset_populator, dataset_collection_populator)[source]

Load a dictionary as inputs to a workflow (test data focused).

galaxy_test.base.populators.stage_inputs(galaxy_interactor, history_id, job, use_path_paste=True, use_fetch_api=True, to_posix_lines=True)[source]

Alternative to load_data_dict that uses production-style workflow inputs.

galaxy_test.base.populators.stage_rules_example(galaxy_interactor, history_id, example)[source]

Wrapper around stage_inputs for staging collections defined by the rules spec DSL.

galaxy_test.base.populators.wait_on_state(state_func, desc='state', skip_states=None, ok_states=None, assert_ok=False, timeout=60)[source]
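
A minimal sketch of polling an arbitrary API object with wait_on_state, assuming state_func returns a response whose JSON body carries a "state" field and that galaxy_interactor exposes a get() method like ApiTestInteractor above:

from galaxy_test.base.populators import wait_on_state

def wait_for_dataset_ok(galaxy_interactor, history_id, dataset_id):
    def state_func():
        return galaxy_interactor.get("histories/%s/contents/%s" % (history_id, dataset_id))

    return wait_on_state(state_func, desc="dataset state", assert_ok=True, timeout=60)
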
class galaxy_test.base.populators.GiPostGetMixin[source]

Bases: object

Mixin for adapting Galaxy testing populator helpers to bioblend.

class galaxy_test.base.populators.GiDatasetPopulator(gi)[source]

Bases: galaxy_test.base.populators.BaseDatasetPopulator, galaxy_test.base.populators.GiPostGetMixin

Implementation of BaseDatasetPopulator backed by bioblend.

__init__(gi)[source]

Construct a dataset populator from a bioblend GalaxyInstance.

class galaxy_test.base.populators.GiDatasetCollectionPopulator(gi)[source]

Bases: galaxy_test.base.populators.BaseDatasetCollectionPopulator, galaxy_test.base.populators.GiPostGetMixin

Implementation of BaseDatasetCollectionPopulator backed by bioblend.

__init__(gi)[source]

Construct a dataset collection populator from a bioblend GalaxyInstance.

class galaxy_test.base.populators.GiWorkflowPopulator(gi)[source]

Bases: galaxy_test.base.populators.BaseWorkflowPopulator, galaxy_test.base.populators.GiPostGetMixin

Implementation of BaseWorkflowPopulator backed by bioblend.

__init__(gi)[source]

Construct a workflow populator from a bioblend GalaxyInstance.
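
A minimal sketch of driving the populator helpers through bioblend instead of the functional test interactor; the URL and key below are placeholders:

from bioblend.galaxy import GalaxyInstance
from galaxy_test.base.populators import GiDatasetPopulator

gi = GalaxyInstance("https://galaxy.example.org", key="<api key>")
dataset_populator = GiDatasetPopulator(gi)
history_id = dataset_populator.new_history()
dataset_populator.new_dataset(history_id, content="hello bioblend", wait=True)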

galaxy_test.base.populators.wait_on(function, desc, timeout=60)[source]

galaxy_test.base.rules_test_data module

galaxy_test.base.rules_test_data.check_example_1(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_2(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_3(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_4(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_5(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_6(hdca, dataset_populator)[source]

galaxy_test.base.ssh_util module

galaxy_test.base.ssh_util.generate_ssh_keys()[source]

Returns a named tuple with the private and public keys and their paths.
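
A minimal sketch; the field names on the returned named tuple are assumptions (key material plus the files it was written to):

from galaxy_test.base.ssh_util import generate_ssh_keys

ssh_keys = generate_ssh_keys()
print(ssh_keys.public_key)        # assumed field name
print(ssh_keys.private_key_file)  # assumed field name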

galaxy_test.base.testcase module

class galaxy_test.base.testcase.FunctionalTestCase(methodName='runTest')[source]

Bases: unittest.case.TestCase

Base class for tests targeting actual Galaxy servers.

Subclasses should override galaxy_driver_class if a Galaxy server needs to be launched to run the test; otherwise this base class assumes a server is already running.

galaxy_driver_class: Optional[type] = None
setUp()[source]

Hook method for setting up the test fixture before exercising it.

classmethod setUpClass()[source]

Configure and start Galaxy for a test.

classmethod tearDownClass()[source]

Shutdown Galaxy server and cleanup temp directory.

get_filename(filename)[source]
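
A minimal sketch of subclassing FunctionalTestCase against an externally managed server; the test-data file name is hypothetical:

from galaxy_test.base.testcase import FunctionalTestCase

class ExampleFunctionalTestCase(FunctionalTestCase):
    # galaxy_driver_class stays None, so the base class assumes a Galaxy
    # server is already running and reachable via the test environment.

    def test_can_resolve_test_data(self):
        # get_filename resolves a name inside the configured test-data directory.
        path = self.get_filename("1.bed")
        assert path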

galaxy_test.base.workflow_fixtures module