Warning
This document is for an old release of Galaxy.
galaxy_test.base package¶
Submodules¶
galaxy_test.base.api module¶
- class galaxy_test.base.api.UsesApiTestCaseMixin[source]¶
Bases:
object
- property galaxy_interactor: ApiTestInteractor¶
- class galaxy_test.base.api.ApiTestInteractor(test_case, api_key=None)[source]¶
Bases:
TestCaseGalaxyInteractor
Specialized variant of the API interactor (originally developed for tool functional tests) for testing the API generally.
- cookies: Optional[RequestsCookieJar]¶
galaxy_test.base.api_asserts module¶
Utility methods for making assertions about Galaxy API responses, etc.
- galaxy_test.base.api_asserts.assert_status_code_is(response: Response, expected_status_code: int)[source]¶
Assert that the supplied response has the expected status code.
- galaxy_test.base.api_asserts.assert_status_code_is_ok(response: Response)[source]¶
Assert that the supplied response is okay.
The easier alternative
response.raise_for_status()
might be preferable generally.
- galaxy_test.base.api_asserts.assert_has_keys(response: dict, *keys: str)[source]¶
Assert that the supplied response (dict) has the supplied keys.
- galaxy_test.base.api_asserts.assert_not_has_keys(response: dict, *keys: str)[source]¶
Assert that the supplied response (dict) does not have the supplied keys.
- galaxy_test.base.api_asserts.assert_error_code_is(response: Union[Response, dict], error_code: Union[int, ErrorCode])[source]¶
Assert that the supplied response has the supplied Galaxy error code.
Galaxy error codes can be imported from
galaxy.exceptions.error_codes
to test against.
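The assertion helpers above follow a simple pattern: compare a response attribute against an expectation and raise AssertionError with a descriptive message on mismatch. A minimal self-contained sketch of that pattern (FakeResponse and the _sketch suffix are stand-ins for illustration, not part of the real module):

```python
class FakeResponse:
    """Stand-in for requests.Response, for illustration only."""

    def __init__(self, status_code, body=""):
        self.status_code = status_code
        self.text = body


def assert_status_code_is_sketch(response, expected_status_code):
    # Mirror the documented contract: raise AssertionError with a
    # helpful message when the status code differs.
    if response.status_code != expected_status_code:
        raise AssertionError(
            f"Request returned status {response.status_code} "
            f"(expected {expected_status_code}): {response.text}"
        )


assert_status_code_is_sketch(FakeResponse(200), 200)  # passes silently
```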
galaxy_test.base.api_util module¶
- galaxy_test.base.api_util.get_admin_api_key() str [source]¶
Test admin API key to use for functional tests.
This key should be configured as an admin API key and should be able to create additional users and keys.
galaxy_test.base.constants module¶
Just constants useful for testing across test types.
galaxy_test.base.env module¶
Base utilities for working with Galaxy test environments.
galaxy_test.base.instrument module¶
galaxy_test.base.interactor module¶
- class galaxy_test.base.interactor.TestCaseGalaxyInteractor(functional_test_case, test_user=None, api_key=None)[source]¶
Bases:
GalaxyInteractorApi
- cookies: Optional[RequestsCookieJar]¶
galaxy_test.base.populators module¶
Abstractions used by the Galaxy testing frameworks for interacting with the Galaxy API.
These abstractions are geared toward testing use cases and populating fixtures. For a more general framework for working with the Galaxy API, check out bioblend.
The populators are broken into different categories of data one might want to populate
and work with (datasets, histories, libraries, and workflows). Within each populator
type, abstract classes describe high-level functionality that depends on abstract
HTTP verb executions (e.g. methods for executing GET, POST, DELETE). The abstract
classes are galaxy_test.base.populators.BaseDatasetPopulator,
galaxy_test.base.populators.BaseWorkflowPopulator, and
galaxy_test.base.populators.BaseDatasetCollectionPopulator.
There are a few different concrete ways to supply these low-level verb executions.
For instance, galaxy_test.base.populators.DatasetPopulator implements the abstract
galaxy_test.base.populators.BaseDatasetPopulator by leveraging a Galaxy interactor,
galaxy.tool_util.interactor.GalaxyInteractorApi. It is non-intuitive
that the Galaxy testing framework uses the tool testing code inside Galaxy’s code
base for a lot of heavy lifting. This is due to the API testing framework organically
growing from the tool testing framework that predated it, and the tool testing
framework then being extracted for re-use in Planemo, etc.
The other two concrete implementations of the populators are much more
direct and intuitive. galaxy_test.base.populators.GiDatasetPopulator, et al.
are populators built on Bioblend gi objects to build URLs and describe
API keys. galaxy_test.selenium.framework.SeleniumSessionDatasetPopulator,
et al. are populators built on Selenium sessions to leverage Galaxy cookies
for auth, for instance.
All three of these implementations are now effectively light wrappers around requests. Not leveraging requests directly is a bit ugly, and this ugliness again stems from these populators organically growing from a framework that originally didn’t use requests at all.
API tests and Selenium tests routinely use requests directly, and that is totally fine; requests usage should just be filtered through the verb abstractions if that functionality is later added to populators to be shared across tests or across testing frameworks.
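The layering described above can be sketched in a few lines: high-level helpers are written once against abstract HTTP verbs, and concrete subclasses supply the verb implementations. All names here are illustrative stand-ins, not the actual Galaxy classes:

```python
from abc import ABC, abstractmethod


class BasePopulatorSketch(ABC):
    """Abstract populator: declares the verbs, implements the helpers."""

    @abstractmethod
    def _get(self, route): ...

    @abstractmethod
    def _post(self, route, data): ...

    def get_history(self, history_id):
        # High-level helpers are written once against the abstract verbs,
        # so every concrete backend (requests, bioblend, Selenium session)
        # gets them for free.
        return self._get(f"histories/{history_id}")


class DictBackedPopulator(BasePopulatorSketch):
    """Concrete implementation backed by an in-memory dict, for demo only."""

    def __init__(self):
        self.store = {"histories/abc123": {"id": "abc123", "state": "ok"}}

    def _get(self, route):
        return self.store[route]

    def _post(self, route, data):
        self.store[route] = data
        return data


pop = DictBackedPopulator()
print(pop.get_history("abc123"))  # {'id': 'abc123', 'state': 'ok'}
```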
- galaxy_test.base.populators.skip_without_tool(tool_id)[source]¶
Decorate an API test method as requiring a specific tool.
Have the test framework skip the test case if the tool is unavailable.
- galaxy_test.base.populators.skip_without_datatype(extension)[source]¶
Decorate an API test method as requiring a specific datatype.
Have the test framework skip the test case if the datatype is unavailable.
- galaxy_test.base.populators.conformance_tests_gen(directory, filename='conformance_tests.yaml')[source]¶
- class galaxy_test.base.populators.CwlToolRun(dataset_populator, history_id, run_response)[source]¶
Bases:
CwlRun
- property job_id¶
- class galaxy_test.base.populators.CwlWorkflowRun(dataset_populator, workflow_populator, history_id, workflow_id, invocation_id)[source]¶
Bases:
CwlRun
- class galaxy_test.base.populators.BasePopulator[source]¶
Bases:
object
- galaxy_interactor: ApiTestInteractor¶
- class galaxy_test.base.populators.BaseDatasetPopulator[source]¶
Bases:
BasePopulator
Abstract description of API operations optimized for testing Galaxy - implementations must implement _get, _post and _delete.
- new_dataset(history_id: str, content=None, wait: bool = False, fetch_data=True, to_posix_lines=True, auto_decompress=True, **kwds) Dict[str, Any] [source]¶
Create a new history dataset instance (HDA).
- Returns
a dictionary describing the new HDA
- new_dataset_request(history_id: str, content=None, wait: bool = False, fetch_data=True, **kwds) Response [source]¶
Lower-level dataset creation that returns the upload tool response object.
- fetch(payload: dict, assert_ok: bool = True, timeout: Union[int, float] = 60, wait: Optional[bool] = None)[source]¶
- fetch_hdas(history_id: str, items: List[Dict[str, Any]], wait: bool = True) List[Dict[str, Any]] [source]¶
- create_from_store(store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None, model_store_format: Optional[str] = None) Dict[str, Any] [source]¶
- create_from_store_async(store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None, model_store_format: Optional[str] = None) Dict[str, Any] [source]¶
- create_contents_from_store(history_id: str, store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None) List[Dict[str, Any]] [source]¶
- download_contents_to_store(history_id: str, history_content: Dict[str, Any], extension='.tgz') str [source]¶
- wait_for_tool_run(history_id: str, run_response: Response, timeout: Union[int, float] = 60, assert_ok: bool = True)[source]¶
- wait_for_history(history_id: str, assert_ok: bool = False, timeout: Union[int, float] = 60) str [source]¶
- wait_for_history_jobs(history_id: str, assert_ok: bool = False, timeout: Union[int, float] = 60)[source]¶
- wait_for_jobs(jobs: Union[List[dict], List[str]], assert_ok: bool = False, timeout: Union[int, float] = 60, ok_states=None)[source]¶
- wait_for_job(job_id: str, assert_ok: bool = False, timeout: Union[int, float] = 60, ok_states=None)[source]¶
- compute_hash(dataset_id: str, hash_function: Optional[str] = 'MD5', extra_files_path: Optional[str] = None, wait: bool = True) Response [source]¶
- delete_dataset(history_id: str, content_id: str, purge: bool = False, stop_job: bool = False, wait_for_purge: bool = False, use_query_params: bool = False) Response [source]¶
- fetch_payload(history_id: str, content: str, auto_decompress: bool = False, file_type: str = 'txt', dbkey: str = '?', name: str = 'Test_Dataset', **kwds) dict [source]¶
- get_history_dataset_content(history_id: str, wait=True, filename=None, type='text', to_ext=None, raw=False, **kwds)[source]¶
- assert_download_request_ok(download_request_response: Response) UUID [source]¶
Assert response is valid and okay and extract storage request ID.
- reimport_history(history_id, history_name, wait_on_history_length, export_kwds, task_based=False)[source]¶
- get_random_name(prefix: Optional[str] = None, suffix: Optional[str] = None, len: int = 10) str [source]¶
- wait_for_dataset(history_id: str, dataset_id: str, assert_ok: bool = False, timeout: Union[int, float] = 60) str [source]¶
- new_page(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: Optional[str] = None) Dict[str, Any] [source]¶
- new_page_raw(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: Optional[str] = None) Response [source]¶
- new_page_payload(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: Optional[str] = None) Dict[str, str] [source]¶
- export_history_to_uri_async(history_id: str, target_uri: str, model_store_format: str = 'tgz', include_files: bool = True)[source]¶
- download_history_to_store(history_id: str, extension: str = 'tgz', serve_file: bool = False)[source]¶
- galaxy_interactor: ApiTestInteractor¶
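Typical test usage of a dataset populator looks like the following. Since no live Galaxy server is available here, FakeDatasetPopulator is an in-memory stand-in that mimics only the documented new_dataset contract (it returns a dictionary describing the new HDA); the real class issues API requests through its interactor:

```python
class FakeDatasetPopulator:
    """In-memory stand-in mimicking the documented new_dataset contract."""

    def __init__(self):
        self._counter = 0

    def new_dataset(self, history_id, content=None, wait=False, **kwds):
        # Returns a dictionary describing the new HDA, as documented.
        # With wait=True the real populator blocks until the upload job
        # finishes; here we just mark the state accordingly.
        self._counter += 1
        return {
            "id": f"hda{self._counter}",
            "history_id": history_id,
            "state": "ok" if wait else "queued",
        }


populator = FakeDatasetPopulator()
hda = populator.new_dataset("hist1", content="1\t2\t3", wait=True)
print(hda["state"])  # ok
```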
- class galaxy_test.base.populators.GalaxyInteractorHttpMixin[source]¶
Bases:
object
- galaxy_interactor: ApiTestInteractor¶
- class galaxy_test.base.populators.DatasetPopulator(galaxy_interactor: ApiTestInteractor)[source]¶
Bases:
GalaxyInteractorHttpMixin, BaseDatasetPopulator
- __init__(galaxy_interactor: ApiTestInteractor) None [source]¶
- galaxy_interactor: ApiTestInteractor¶
- class galaxy_test.base.populators.BaseWorkflowPopulator[source]¶
Bases:
BasePopulator
- dataset_populator: BaseDatasetPopulator¶
- dataset_collection_populator: BaseDatasetCollectionPopulator¶
- load_workflow(name: str, content: str = '{\n "a_galaxy_workflow": "true", \n "annotation": "simple workflow",\n "format-version": "0.1", \n "name": "TestWorkflow1", \n "steps": {\n "0": {\n "annotation": "input1 description", \n "id": 0, \n "input_connections": {}, \n "inputs": [\n {\n "description": "input1 description", \n "name": "WorkflowInput1"\n }\n ], \n "name": "Input dataset", \n "outputs": [], \n "position": {\n "left": 199.55555772781372, \n "top": 200.66666460037231\n }, \n "tool_errors": null, \n "tool_id": null, \n "tool_state": "{\\"name\\": \\"WorkflowInput1\\"}", \n "tool_version": null, \n "type": "data_input", \n "user_outputs": []\n }, \n "1": {\n "annotation": "", \n "id": 1, \n "input_connections": {}, \n "inputs": [\n {\n "description": "", \n "name": "WorkflowInput2"\n }\n ], \n "name": "Input dataset", \n "outputs": [], \n "position": {\n "left": 206.22221422195435, \n "top": 327.33335161209106\n }, \n "tool_errors": null, \n "tool_id": null, \n "tool_state": "{\\"name\\": \\"WorkflowInput2\\"}", \n "tool_version": null, \n "type": "data_input", \n "user_outputs": []\n }, \n "2": {\n "annotation": "", \n "id": 2, \n "input_connections": {\n "input1": {\n "id": 0, \n "output_name": "output"\n }, \n "queries_0|input2": {\n "id": 1, \n "output_name": "output"\n }\n }, \n "inputs": [], \n "name": "Concatenate datasets", \n "outputs": [\n {\n "name": "out_file1", \n "type": "input"\n }\n ], \n "position": {\n "left": 419.33335876464844, \n "top": 200.44446563720703\n }, \n "post_job_actions": {}, \n "tool_errors": null, \n "tool_id": "cat1", \n "tool_state": "{\\"__page__\\": 0, \\"__rerun_remap_job_id__\\": null, \\"input1\\": \\"null\\", \\"queries\\": \\"[{\\\\\\"input2\\\\\\": null, \\\\\\"__index__\\\\\\": 0}]\\"}", \n "tool_version": "1.0.0", \n "type": "tool", \n "user_outputs": []\n }\n }\n}\n', add_pja=False) dict [source]¶
- wait_for_invocation(workflow_id: str, invocation_id: str, timeout: Union[int, float] = 60, assert_ok: bool = True) str [source]¶
- wait_for_history_workflows(history_id: str, assert_ok: bool = True, timeout: Union[int, float] = 60, expected_invocation_count: Optional[int] = None) None [source]¶
- wait_for_workflow(workflow_id: str, invocation_id: str, history_id: str, assert_ok: bool = True, timeout: Union[int, float] = 60) None [source]¶
Wait for a workflow invocation to completely schedule and then history to be complete.
- create_invocation_from_store_raw(history_id: str, store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None, model_store_format: Optional[str] = None) Response [source]¶
- create_invocation_from_store(history_id: str, store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None, model_store_format: Optional[str] = None) Response [source]¶
- validate_biocompute_object(bco, expected_schema_version='https://w3id.org/ieee/ieee-2791-schema/2791object.json')[source]¶
- invoke_workflow(workflow_id: str, history_id: Optional[str] = None, inputs: Optional[dict] = None, request: Optional[dict] = None, inputs_by: str = 'step_index') Response [source]¶
- invoke_workflow_and_assert_ok(workflow_id: str, history_id: Optional[str] = None, inputs: Optional[dict] = None, request: Optional[dict] = None, inputs_by: str = 'step_index') str [source]¶
- invoke_workflow_and_wait(workflow_id: str, history_id: Optional[str] = None, inputs: Optional[dict] = None, request: Optional[dict] = None) Response [source]¶
- download_workflow(workflow_id: str, style: Optional[str] = None, history_id: Optional[str] = None) dict [source]¶
- refactor_workflow(workflow_id: str, actions: list, dry_run: Optional[bool] = None, style: Optional[str] = None) Response [source]¶
- run_workflow(has_workflow: Union[str, PathLike, dict], test_data: Optional[Union[str, dict]] = None, history_id: Optional[str] = None, wait: bool = True, source_type: Optional[str] = None, jobs_descriptions=None, expected_response: int = 200, assert_ok: bool = True, client_convert: Optional[bool] = None, round_trip_format_conversion: bool = False, invocations: int = 1, raw_yaml: bool = False)[source]¶
High-level wrapper around workflow API, etc. to invoke format 2 workflows.
- setup_workflow_run(workflow: Optional[Dict[str, Any]] = None, inputs_by: str = 'step_id', history_id: Optional[str] = None, workflow_id: Optional[str] = None) Tuple[Dict[str, Any], str, str] [source]¶
- wait_for_invocation_and_jobs(history_id: str, workflow_id: str, invocation_id: str, assert_ok: bool = True) None [source]¶
- index(show_shared: Optional[bool] = None, show_published: Optional[bool] = None, sort_by: Optional[str] = None, sort_desc: Optional[bool] = None, limit: Optional[int] = None, offset: Optional[int] = None, search: Optional[str] = None, skip_step_counts: Optional[bool] = None)[source]¶
- index_ids(show_shared: Optional[bool] = None, show_published: Optional[bool] = None, sort_by: Optional[str] = None, sort_desc: Optional[bool] = None, limit: Optional[int] = None, offset: Optional[int] = None, search: Optional[str] = None)[source]¶
- class galaxy_test.base.populators.RunJobsSummary(history_id, workflow_id, invocation_id, inputs, jobs, invocation, workflow_request)[source]¶
Bases:
tuple
- property history_id¶
Alias for field number 0
- property workflow_id¶
Alias for field number 1
- property invocation_id¶
Alias for field number 2
- property inputs¶
Alias for field number 3
- property jobs¶
Alias for field number 4
- property invocation¶
Alias for field number 5
- property workflow_request¶
Alias for field number 6
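The field-number aliases above are exactly what a named tuple provides: each property maps to a tuple position, so both attribute and positional access work. A minimal equivalent for illustration (RunJobsSummarySketch is not the real class):

```python
from collections import namedtuple

RunJobsSummarySketch = namedtuple(
    "RunJobsSummarySketch",
    [
        "history_id",       # field number 0
        "workflow_id",      # field number 1
        "invocation_id",    # field number 2
        "inputs",           # field number 3
        "jobs",             # field number 4
        "invocation",       # field number 5
        "workflow_request", # field number 6
    ],
)

summary = RunJobsSummarySketch("h1", "w1", "i1", {}, [], {}, {})
print(summary.history_id)  # h1
print(summary[2])          # i1 (positional access also works)
```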
- class galaxy_test.base.populators.WorkflowPopulator(galaxy_interactor)[source]¶
Bases:
GalaxyInteractorHttpMixin, BaseWorkflowPopulator, ImporterGalaxyInterface
- galaxy_interactor: ApiTestInteractor¶
- import_workflow(workflow, **kwds) Dict[str, Any] [source]¶
Import a workflow via POST /api/workflows or comparable interface into Galaxy.
- class galaxy_test.base.populators.CwlPopulator(dataset_populator: DatasetPopulator, workflow_populator: WorkflowPopulator)[source]¶
Bases:
object
- __init__(dataset_populator: DatasetPopulator, workflow_populator: WorkflowPopulator)[source]¶
- run_cwl_job(artifact: str, job_path: Optional[str] = None, job: Optional[Dict] = None, test_data_directory: Optional[str] = None, history_id: Optional[str] = None, assert_ok: bool = True) CwlRun [source]¶
- Parameters
artifact – CWL tool id, or (absolute or relative) path to a CWL tool or workflow file
- class galaxy_test.base.populators.LibraryPopulator(galaxy_interactor)[source]¶
Bases:
object
- create_from_store(store_dict: Optional[Dict[str, Any]] = None, store_path: Optional[str] = None) List[Dict[str, Any]] [source]¶
- class galaxy_test.base.populators.BaseDatasetCollectionPopulator[source]¶
Bases:
object
- dataset_populator: BaseDatasetPopulator¶
- class galaxy_test.base.populators.DatasetCollectionPopulator(galaxy_interactor: ApiTestInteractor)[source]¶
Bases:
BaseDatasetCollectionPopulator
- __init__(galaxy_interactor: ApiTestInteractor)[source]¶
- dataset_populator: BaseDatasetPopulator¶
- galaxy_test.base.populators.load_data_dict(history_id: str, test_data: Dict[str, Any], dataset_populator: BaseDatasetPopulator, dataset_collection_populator: BaseDatasetCollectionPopulator) Tuple[Dict[str, Any], Dict[str, Any], bool] [source]¶
Load a dictionary as inputs to a workflow (test data focused).
- galaxy_test.base.populators.stage_inputs(galaxy_interactor: ApiTestInteractor, history_id: str, job: Dict[str, Any], use_path_paste: bool = True, use_fetch_api: bool = True, to_posix_lines: bool = True, tool_or_workflow: typing_extensions.Literal[tool, workflow] = 'workflow', job_dir: Optional[str] = None) Tuple[Dict[str, Any], List[Dict[str, Any]]] [source]¶
Alternative to load_data_dict that uses production-style workflow inputs.
- galaxy_test.base.populators.stage_rules_example(galaxy_interactor: ApiTestInteractor, history_id: str, example: Dict[str, Any]) Dict[str, Any] [source]¶
Wrapper around stage_inputs for staging collections defined by rules spec DSL.
- galaxy_test.base.populators.wait_on_state(state_func: Callable, desc: str = 'state', skip_states=None, ok_states=None, assert_ok: bool = False, timeout: Union[int, float] = 60) str [source]¶
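The polling pattern wait_on_state implements can be sketched in a self-contained way: repeatedly call a state function until the state leaves the "still working" set, then assert it landed in an ok state. Default state names, timeout, and poll interval here are assumptions for illustration:

```python
import time


def wait_on_state_sketch(
    state_func,
    ok_states=("ok",),
    skip_states=("queued", "running"),
    timeout=60.0,
    poll_interval=0.1,
):
    """Poll state_func until a terminal state is reached or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = state_func()
        if state not in skip_states:
            # Terminal state reached; fail loudly if it is not an ok one.
            if state not in ok_states:
                raise AssertionError(
                    f"Terminal state {state!r} not in ok states {ok_states}"
                )
            return state
        time.sleep(poll_interval)
    raise TimeoutError(f"Still in {skip_states} after {timeout}s")


states = iter(["queued", "running", "ok"])
print(wait_on_state_sketch(lambda: next(states)))  # ok
```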
- class galaxy_test.base.populators.GiHttpMixin[source]¶
Bases:
object
Mixin for adapting Galaxy testing populator helpers to bioblend.
- class galaxy_test.base.populators.GiDatasetPopulator(gi)[source]¶
Bases:
GiHttpMixin, BaseDatasetPopulator
Implementation of BaseDatasetPopulator backed by bioblend.
- class galaxy_test.base.populators.GiDatasetCollectionPopulator(gi)[source]¶
Bases:
GiHttpMixin, BaseDatasetCollectionPopulator
Implementation of BaseDatasetCollectionPopulator backed by bioblend.
- class galaxy_test.base.populators.GiWorkflowPopulator(gi)[source]¶
Bases:
GiHttpMixin, BaseWorkflowPopulator
Implementation of BaseWorkflowPopulator backed by bioblend.
galaxy_test.base.rules_test_data module¶
galaxy_test.base.testcase module¶
- class galaxy_test.base.testcase.FunctionalTestCase[source]¶
Bases:
TestCase
Base class for tests targeting actual Galaxy servers.
Subclasses should override galaxy_driver_class if a Galaxy server needs to be launched to run the test; this base class assumes a server is already running.
- test_data_resolver: TestDataResolver¶