galaxy_test.base package

Submodules

galaxy_test.base.api module

galaxy_test.base.api.celery_config()[source]
class galaxy_test.base.api.UsesCeleryTasks[source]

Bases: object

classmethod handle_galaxy_config_kwds(config: Dict[str, Any]) None[source]
celery_worker_parameters()[source]
celery_parameters()[source]
class galaxy_test.base.api.HasAnonymousGalaxyInteractor(*args, **kwargs)[source]

Bases: Protocol

property anonymous_galaxy_interactor: ApiTestInteractor

Return an optionally anonymous galaxy interactor.

__init__(*args, **kwargs)
class galaxy_test.base.api.UsesApiTestCaseMixin[source]

Bases: object

url: str
tearDown()[source]
property anonymous_galaxy_interactor: ApiTestInteractor

Return an optionally anonymous galaxy interactor.

Lighter requirements for use with API requests that may not require an API key.

property galaxy_interactor: ApiTestInteractor
class galaxy_test.base.api.ApiTestInteractor(test_case, api_key=None)[source]

Bases: TestCaseGalaxyInteractor

Specialized variant of the API interactor (originally developed for tool functional tests) for testing the API generally.

__init__(test_case, api_key=None)[source]
cookies: RequestsCookieJar | None
get(*args, **kwds)[source]
head(*args, **kwds)[source]
post(*args, **kwds)[source]
delete(*args, **kwds)[source]
patch(*args, **kwds)[source]
put(*args, **kwds)[source]
api_key: str | None
keep_outputs_dir: str | None
class galaxy_test.base.api.AnonymousGalaxyInteractor(test_case)[source]

Bases: ApiTestInteractor

__init__(test_case)[source]
api_key: str | None
cookies: RequestsCookieJar | None
keep_outputs_dir: str | None
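
A minimal sketch of exercising the interactor's HTTP verb helpers from a test case that mixes in UsesApiTestCaseMixin (the route used is illustrative; any Galaxy API route works the same way):

    def test_version_endpoint(self):
        interactor = self.galaxy_interactor  # an ApiTestInteractor
        response = interactor.get("version")  # GET <galaxy url>/api/version
        response.raise_for_status()
        assert "version_major" in response.json()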

galaxy_test.base.api_asserts module

Utility methods for making assertions about Galaxy API responses, etc.

galaxy_test.base.api_asserts.assert_status_code_is(response: Response, expected_status_code: int, failure_message: str | None = None)[source]

Assert that the supplied response has the expected status code.

galaxy_test.base.api_asserts.assert_status_code_is_ok(response: Response, failure_message: str | None = None)[source]

Assert that the supplied response is okay.

This is an alternative to response.raise_for_status() with a more detailed error message.

galaxy_test.base.api_asserts.assert_status_code_is_not_ok(response: Response, failure_message: str | None = None)[source]

Assert that the supplied response is not okay.

galaxy_test.base.api_asserts.assert_has_keys(response: dict, *keys: str)[source]

Assert that the supplied response (dict) has the supplied keys.

galaxy_test.base.api_asserts.assert_not_has_keys(response: dict, *keys: str)[source]

Assert that the supplied response (dict) does not have the supplied keys.

galaxy_test.base.api_asserts.assert_error_code_is(response: Response | dict, error_code: int | ErrorCode)[source]

Assert that the supplied response has the supplied Galaxy error code.

Galaxy error codes can be imported from galaxy.exceptions.error_codes to test against.

galaxy_test.base.api_asserts.assert_object_id_error(response: Response)[source]
galaxy_test.base.api_asserts.assert_error_message_contains(response: Response | dict, expected_contains: str)[source]
galaxy_test.base.api_asserts.assert_has_key(response: dict, *keys: str)

Assert that the supplied response (dict) has the supplied keys.
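
A minimal sketch combining these helpers in a test (the routes and expected keys are illustrative assumptions):

    from galaxy_test.base import api_asserts

    response = interactor.get("histories")
    api_asserts.assert_status_code_is_ok(response)
    first_history = response.json()[0]
    api_asserts.assert_has_keys(first_history, "id", "name")

    bad_response = interactor.get("histories/not-a-real-id")
    api_asserts.assert_status_code_is_not_ok(bad_response)
    api_asserts.assert_object_id_error(bad_response)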

galaxy_test.base.api_util module

galaxy_test.base.api_util.get_admin_api_key() str[source]

Test admin API key to use for functional tests.

This key should be configured as an admin API key and should be able to create additional users and keys.

galaxy_test.base.api_util.get_user_api_key() str | None[source]

Test user API key to use for functional tests.

If set, this key should drive API-based testing; if not set, an admin API key will be used to create a new user and API key for the tests.

galaxy_test.base.api_util.baseauth_headers(username: str, password: str) Dict[str, str][source]
galaxy_test.base.api_util.random_name(prefix: str | None = None, suffix: str | None = None, len: int = 10) str[source]
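
A short sketch of these utilities together (the username, password, and prefix are placeholders):

    from galaxy_test.base.api_util import (
        baseauth_headers,
        get_admin_api_key,
        get_user_api_key,
        random_name,
    )

    admin_key = get_admin_api_key()  # admin key used to drive functional tests
    user_key = get_user_api_key()    # may be None if not configured
    headers = baseauth_headers("user@example.org", "secret")  # HTTP basic auth headers
    name = random_name(prefix="gxtest")  # random, collision-resistant test name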

galaxy_test.base.constants module

Just constants useful for testing across test types.

galaxy_test.base.env module

Base utilities for working with Galaxy test environments.

galaxy_test.base.env.setup_keep_outdir() str[source]
galaxy_test.base.env.target_url_parts() Tuple[str, str | None, str][source]
galaxy_test.base.env.get_ip_address(ifname: str) str[source]
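
A minimal sketch, assuming the tuple unpacks as (host, port, url) per the return annotation above:

    from galaxy_test.base.env import setup_keep_outdir, target_url_parts

    host, port, url = target_url_parts()
    keep_outdir = setup_keep_outdir()
    print(f"running tests against {url} (host={host}, port={port})")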

galaxy_test.base.instrument module

galaxy_test.base.interactor module

class galaxy_test.base.interactor.TestCaseGalaxyInteractor(functional_test_case, test_user=None, api_key=None)[source]

Bases: GalaxyInteractorApi

__init__(functional_test_case, test_user=None, api_key=None)[source]
api_key: str | None
cookies: RequestsCookieJar | None
keep_outputs_dir: str | None

galaxy_test.base.populators module

Abstractions used by the Galaxy testing frameworks for interacting with the Galaxy API.

These abstractions are geared toward testing use cases and populating fixtures. For a more general framework for working with the Galaxy API, check out bioblend.

The populators are broken into different categories of data one might want to populate and work with (datasets, histories, libraries, and workflows). Within each populator type, abstract classes describe high-level functionality that depends on abstract HTTP verb executions (e.g. methods for executing GET, POST, and DELETE). The abstract classes are galaxy_test.base.populators.BaseDatasetPopulator, galaxy_test.base.populators.BaseWorkflowPopulator, and galaxy_test.base.populators.BaseDatasetCollectionPopulator.

There are a few different concrete ways to supply these low-level verb executions. For instance, galaxy_test.base.populators.DatasetPopulator implements the abstract galaxy_test.base.populators.BaseDatasetPopulator by leveraging a Galaxy interactor (galaxy.tool_util.interactor.GalaxyInteractorApi). It may seem non-intuitive that the Galaxy testing framework uses the tool testing code inside Galaxy's code base for much of its heavy lifting; this is because the API testing framework grew organically from the tool testing framework that predated it, and the tool testing framework was later extracted for re-use in Planemo and elsewhere.

The other two concrete implementations of the populators are more direct and intuitive. galaxy_test.base.populators.GiDatasetPopulator, et al. are populators built on bioblend gi objects, which supply URLs and API keys. galaxy_test.selenium.framework.SeleniumSessionDatasetPopulator, et al. are populators built on Selenium sessions, leveraging Galaxy cookies for authentication, for instance.

All three of these implementations are now effectively light wrappers around requests. Not leveraging requests directly is a bit ugly, and this ugliness again stems from these implementations growing organically out of a framework that originally did not use requests at all.

API tests and Selenium tests routinely use requests directly, and that is totally fine; requests should only be filtered through the verb abstractions when that functionality is added to populators to be shared across tests or across testing frameworks.
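
A minimal sketch of typical populator usage from an API test case, where UsesApiTestCaseMixin supplies self.galaxy_interactor (content and assertion are illustrative):

    dataset_populator = DatasetPopulator(self.galaxy_interactor)
    history_id = dataset_populator.new_history()
    hda = dataset_populator.new_dataset(history_id, content="1\t2\t3", wait=True)
    # to_posix_lines=True (the default) appends a trailing newline on upload
    assert dataset_populator.get_history_dataset_content(history_id) == "1\t2\t3\n"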

galaxy_test.base.populators.flakey(method)[source]
galaxy_test.base.populators.get_tool_ids(interactor: AnonymousGalaxyInteractor)[source]
galaxy_test.base.populators.skip_without_tool(tool_id: str)[source]

Decorate an API test method as requiring a specific tool.

Have the test framework skip the test case if the tool is unavailable.
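
Usage sketch (the tool id is illustrative):

    @skip_without_tool("cat1")
    def test_concatenation(self):
        ...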

galaxy_test.base.populators.skip_without_asgi(method)[source]
galaxy_test.base.populators.skip_without_datatype(extension: str)[source]

Decorate an API test method as requiring a specific datatype.

Have the test framework skip the test case if the datatype is unavailable.

galaxy_test.base.populators.skip_without_visualization_plugin(plugin_name: str)[source]
galaxy_test.base.populators.summarize_instance_history_on_error(method)[source]
galaxy_test.base.populators.check_missing_tool(check)[source]
galaxy_test.base.populators.conformance_tests_gen(directory, filename='conformance_tests.yaml')[source]
class galaxy_test.base.populators.CwlRun(dataset_populator, history_id)[source]

Bases: object

__init__(dataset_populator, history_id)[source]
get_output_as_object(output_name, download_folder=None)[source]
abstract wait()[source]

Wait for the completion of the job(s) generated by this run.

class galaxy_test.base.populators.CwlToolRun(dataset_populator, history_id, run_response)[source]

Bases: CwlRun

__init__(dataset_populator, history_id, run_response)[source]
property job_id
wait()[source]

Wait for the completion of the job(s) generated by this run.

class galaxy_test.base.populators.CwlWorkflowRun(dataset_populator, workflow_populator, history_id, workflow_id, invocation_id)[source]

Bases: CwlRun

__init__(dataset_populator, workflow_populator, history_id, workflow_id, invocation_id)[source]
wait()[source]

Wait for the completion of the job(s) generated by this run.

class galaxy_test.base.populators.BasePopulator[source]

Bases: object

galaxy_interactor: ApiTestInteractor
class galaxy_test.base.populators.BaseDatasetPopulator[source]

Bases: BasePopulator

Abstract description of API operations optimized for testing Galaxy; implementations must implement _get, _post, and _delete. A usage sketch follows this class summary.

new_dataset(history_id: str, content=None, wait: bool = False, fetch_data=True, to_posix_lines=True, auto_decompress=True, **kwds) Dict[str, Any][source]

Create a new history dataset instance (HDA).

Returns:

a dictionary describing the new HDA

new_dataset_request(history_id: str, content=None, wait: bool = False, fetch_data=True, **kwds) Response[source]

Lower-level dataset creation that returns the upload tool response object.

new_bam_dataset(history_id: str, test_data_resolver)[source]
fetch(payload: dict, assert_ok: bool = True, timeout: int | float = 60, wait: bool | None = None)[source]
fetch_hdas(history_id: str, items: List[Dict[str, Any]], wait: bool = True) List[Dict[str, Any]][source]
fetch_hda(history_id: str, item: Dict[str, Any], wait: bool = True) Dict[str, Any][source]
create_deferred_hda(history_id, uri: str, ext: str | None = None) Dict[str, Any][source]
export_dataset_to_remote_file(history_id: str, content: str, name: str, target_uri: str)[source]
tag_dataset(history_id, hda_id, tags, raise_on_error=True)[source]
create_from_store_raw(payload: Dict[str, Any]) Response[source]
create_from_store_raw_async(payload: Dict[str, Any]) Response[source]
create_from_store(store_dict: Dict[str, Any] | None = None, store_path: str | None = None, model_store_format: str | None = None) Dict[str, Any][source]
create_from_store_async(store_dict: Dict[str, Any] | None = None, store_path: str | None = None, model_store_format: str | None = None) Dict[str, Any][source]
create_contents_from_store_raw(history_id: str, payload: Dict[str, Any]) Response[source]
create_contents_from_store(history_id: str, store_dict: Dict[str, Any] | None = None, store_path: str | None = None) List[Dict[str, Any]][source]
download_contents_to_store(history_id: str, history_content: Dict[str, Any], extension='.tgz') str[source]
reupload_contents(history_content: Dict[str, Any])[source]
wait_for_tool_run(history_id: str, run_response: Response, timeout: int | float = 60, assert_ok: bool = True)[source]
check_run(run_response: Response) dict[source]
wait_for_history(history_id: str, assert_ok: bool = False, timeout: int | float = 60) str[source]
wait_for_history_jobs(history_id: str, assert_ok: bool = False, timeout: int | float = 60)[source]
wait_for_jobs(jobs: List[dict] | List[str], assert_ok: bool = False, timeout: int | float = 60, ok_states=None)[source]
wait_for_job(job_id: str, assert_ok: bool = False, timeout: int | float = 60, ok_states=None)[source]
get_job_details(job_id: str, full: bool = False) Response[source]
job_outputs(job_id: str) List[Dict[str, Any]][source]
compute_hash(dataset_id: str, hash_function: str | None = 'MD5', extra_files_path: str | None = None, wait: bool = True) Response[source]
cancel_history_jobs(history_id: str, wait=True) None[source]
history_jobs(history_id: str) List[Dict[str, Any]][source]
history_jobs_for_tool(history_id: str, tool_id: str) List[Dict[str, Any]][source]
invocation_jobs(invocation_id: str) List[Dict[str, Any]][source]
active_history_jobs(history_id: str) list[source]
cancel_job(job_id: str) Response[source]
delete_history(history_id: str) None[source]
delete_dataset(history_id: str, content_id: str, purge: bool = False, stop_job: bool = False, wait_for_purge: bool = False, use_query_params: bool = False) Response[source]
wait_for_purge(history_id, content_id)[source]
create_tool_landing(payload: CreateToolLandingRequestPayload) ToolLandingRequest[source]
create_workflow_landing(payload: CreateWorkflowLandingRequestPayload) WorkflowLandingRequest[source]
claim_tool_landing(uuid: UUID) ToolLandingRequest[source]
claim_workflow_landing(uuid: UUID) WorkflowLandingRequest[source]
use_workflow_landing(uuid: UUID) WorkflowLandingRequest[source]
create_tool_from_path(tool_path: str) Dict[str, Any][source]
create_tool(representation, tool_directory: str | None = None) Dict[str, Any][source]
list_dynamic_tools() list[source]
show_dynamic_tool(uuid) dict[source]
deactivate_dynamic_tool(uuid) dict[source]
test_history_for(method) Generator[str, None, None][source]
test_history(require_new: bool = True, name: str | None = None) Generator[str, None, None][source]
new_history(name='API Test History', **kwds) str[source]
copy_history(history_id, name='API Test Copied History', **kwds) Response[source]
fetch_payload(history_id: str, content: str, auto_decompress: bool = False, file_type: str = 'txt', dbkey: str = '?', name: str = 'Test_Dataset', **kwds) dict[source]
upload_payload(history_id: str, content: str | None = None, **kwds) dict[source]
get_remote_files(target: str = 'ftp') dict[source]
run_tool_payload(tool_id: str | None, inputs: dict, history_id: str, **kwds) dict[source]
build_tool_state(tool_id: str, history_id: str)[source]
run_tool_raw(tool_id: str | None, inputs: dict, history_id: str, **kwds) Response[source]
run_tool(tool_id: str, inputs: dict, history_id: str, **kwds)[source]
tools_post(payload: dict, url='tools') Response[source]
describe_tool_execution(tool_id: str) DescribeToolExecution[source]
materialize_dataset_instance(history_id: str, id: str, source: str = 'hda')[source]
get_history_dataset_content(history_id: str, wait=True, filename=None, type='text', to_ext=None, raw=False, **kwds)[source]
display_chunk(dataset_id: str, offset: int = 0, ck_size: int | None = None) Dict[str, Any][source]
get_history_dataset_source_transform_actions(history_id: str, **kwd) Set[str][source]
get_history_dataset_details(history_id: str, keys: str | None = None, **kwds) Dict[str, Any][source]
get_history_dataset_details_raw(history_id: str, dataset_id: str, keys: str | None = None) Response[source]
get_history_dataset_extra_files(history_id: str, **kwds) list[source]
get_history_collection_details(history_id: str, **kwds) dict[source]
run_collection_creates_list(history_id: str, hdca_id: str) Response[source]
new_error_dataset(history_id: str) str[source]
report_job_error_raw(job_id: str, dataset_id: str, message: str = '', email: str | None = None) Response[source]
report_job_error(job_id: str, dataset_id: str, message: str = '', email: str | None = None) Response[source]
run_detect_errors(history_id: str, exit_code: int, stdout: str = '', stderr: str = '') dict[source]
run_exit_code_from_file(history_id: str, hdca_id: str) dict[source]
get_history_contents(history_id: str, data=None) List[Dict[str, Any]][source]
ds_entry(history_content: dict) dict[source]
dataset_storage_info(dataset_id: str) Dict[str, Any][source]
dataset_storage_info_raw(dataset_id: str) Response[source]
get_roles() list[source]
get_configuration(admin=False) Dict[str, Any][source]
user_email() str[source]
user_id() str[source]
user_private_role_id() str[source]
get_usage() List[Dict[str, Any]][source]
get_usage_for(label: str | None) Dict[str, Any][source]
update_user(properties: Dict[str, Any]) Dict[str, Any][source]
set_user_preferred_object_store_id(store_id: str | None) None[source]
update_user_raw(properties: Dict[str, Any]) Response[source]
total_disk_usage() float[source]
update_object_store_id(dataset_id: str, object_store_id: str)[source]
create_role(user_ids: list, description: str | None = None) dict[source]
create_quota(quota_payload: dict) dict[source]
get_quotas() list[source]
make_private(history_id: str, dataset_id: str) dict[source]
make_dataset_public_raw(history_id: str, dataset_id: str) Response[source]
update_permissions_raw(history_id: str, dataset_id: str, payload: dict) Response[source]
make_public(history_id: str) dict[source]
validate_dataset(history_id: str, dataset_id: str) Dict[str, Any][source]
validate_dataset_and_wait(history_id, dataset_id) str | None[source]
setup_history_for_export_testing(history_name)[source]
prepare_export(history_id, data)[source]
export_url(history_id: str, data, check_download: bool = True) str[source]
get_export_url(export_url) Response[source]
import_history(import_data)[source]
wait_for_history_with_name(history_name: str, desc: str) Dict[str, Any][source]
import_history_and_wait_for_name(import_data, history_name)[source]
history_names() Dict[str, Dict][source]
rename_history(history_id: str, new_name: str)[source]
update_history(history_id: str, payload: Dict[str, Any]) Response[source]
get_histories()[source]
wait_on_history_length(history_id: str, wait_on_history_length: int)[source]
wait_on_download(download_request_response: Response) Response[source]
assert_download_request_ok(download_request_response: Response) UUID[source]

Assert the response is valid and okay, and extract the storage request ID.

wait_for_download_ready(storage_request_id: UUID)[source]
wait_on_task(async_task_response: Response)[source]
wait_on_task_id(task_id: str)[source]
wait_on_download_request(storage_request_id: UUID) Response[source]
history_length(history_id)[source]
reimport_history(history_id, history_name, wait_on_history_length, export_kwds, task_based=False)[source]
get_random_name(prefix: str | None = None, suffix: str | None = None, len: int = 10) str[source]
wait_for_dataset(history_id: str, dataset_id: str, assert_ok: bool = False, timeout: int | float = 60) str[source]
create_object_store_raw(payload: Dict[str, Any]) Response[source]
create_object_store(payload: Dict[str, Any]) Dict[str, Any][source]
upgrade_object_store_raw(id: str, payload: Dict[str, Any]) Response[source]
upgrade_object_store(id: str, payload: Dict[str, Any]) Dict[str, Any][source]
update_object_store_raw(id: str, payload: Dict[str, Any]) Response
update_object_store(id: str, payload: Dict[str, Any]) Dict[str, Any]
selectable_object_stores() List[Dict[str, Any]][source]
selectable_object_store_ids() List[str][source]
new_page(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: str | None = None) Dict[str, Any][source]
new_page_raw(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: str | None = None) Response[source]
new_page_payload(slug: str = 'mypage', title: str = 'MY PAGE', content_format: str = 'html', content: str | None = None) Dict[str, str][source]
export_history_to_uri_async(history_id: str, target_uri: str, model_store_format: str = 'tgz', include_files: bool = True)[source]
import_history_from_uri_async(target_uri: str, model_store_format: str)[source]
download_history_to_store(history_id: str, extension: str = 'tgz', serve_file: bool = False)[source]
get_history_export_tasks(history_id: str)[source]
make_page_public(page_id: str) Dict[str, Any][source]
wait_for_export_task_on_record(export_record)[source]
archive_history(history_id: str, export_record_id: str | None = None, purge_history: bool | None = False) Response[source]
restore_archived_history(history_id: str, force: bool | None = None) Response[source]
get_archived_histories(query: str | None = None) List[Dict[str, Any]][source]
galaxy_interactor: ApiTestInteractor
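
A sketch of a common end-to-end pattern using the methods above (the tool id and inputs are illustrative):

    with dataset_populator.test_history() as history_id:
        hda = dataset_populator.new_dataset(history_id, content="hello", wait=True)
        inputs = {"input1": dataset_populator.ds_entry(hda)}
        dataset_populator.run_tool("cat1", inputs, history_id)
        dataset_populator.wait_for_history(history_id, assert_ok=True)
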
class galaxy_test.base.populators.GalaxyInteractorHttpMixin[source]

Bases: object

galaxy_interactor: ApiTestInteractor
class galaxy_test.base.populators.DatasetPopulator(galaxy_interactor: ApiTestInteractor)[source]

Bases: GalaxyInteractorHttpMixin, BaseDatasetPopulator

__init__(galaxy_interactor: ApiTestInteractor) None[source]
galaxy_interactor: ApiTestInteractor
class galaxy_test.base.populators.BaseWorkflowPopulator[source]

Bases: BasePopulator

dataset_populator: BaseDatasetPopulator
dataset_collection_populator: BaseDatasetCollectionPopulator
load_workflow(name: str, content: str = '{\n    "a_galaxy_workflow": "true", \n    "annotation": "simple workflow",\n    "format-version": "0.1", \n    "name": "TestWorkflow1", \n    "steps": {\n        "0": {\n            "annotation": "input1 description", \n            "id": 0, \n            "input_connections": {}, \n            "inputs": [\n                {\n                    "description": "input1 description", \n                    "name": "WorkflowInput1"\n                }\n            ], \n            "name": "Input dataset", \n            "outputs": [], \n            "position": {\n                "left": 199.55555772781372, \n                "top": 200.66666460037231\n            }, \n            "tool_errors": null, \n            "tool_id": null, \n            "tool_state": "{\\"name\\": \\"WorkflowInput1\\"}", \n            "tool_version": null, \n            "type": "data_input", \n            "user_outputs": []\n        }, \n        "1": {\n            "annotation": "", \n            "id": 1, \n            "input_connections": {}, \n            "inputs": [\n                {\n                    "description": "", \n                    "name": "WorkflowInput2"\n                }\n            ], \n            "name": "Input dataset", \n            "outputs": [], \n            "position": {\n                "left": 206.22221422195435, \n                "top": 327.33335161209106\n            }, \n            "tool_errors": null, \n            "tool_id": null, \n            "tool_state": "{\\"name\\": \\"WorkflowInput2\\"}", \n            "tool_version": null, \n            "type": "data_input", \n            "user_outputs": []\n        }, \n        "2": {\n            "annotation": "", \n            "id": 2, \n            "input_connections": {\n                "input1": {\n                    "id": 0, \n                    "output_name": "output"\n                }, \n                "queries_0|input2": {\n                    "id": 1, \n                    "output_name": "output"\n                }\n            }, \n            "inputs": [], \n            "name": "Concatenate datasets", \n            "outputs": [\n                {\n                    "name": "out_file1", \n                    "type": "input"\n                }\n            ], \n            "position": {\n                "left": 419.33335876464844, \n                "top": 200.44446563720703\n            }, \n            "post_job_actions": {}, \n            "tool_errors": null, \n            "tool_id": "cat1", \n            "tool_state": "{\\"__page__\\": 0, \\"__rerun_remap_job_id__\\": null, \\"input1\\": \\"null\\", \\"queries\\": \\"[{\\\\\\"input2\\\\\\": null, \\\\\\"__index__\\\\\\": 0}]\\"}", \n            "tool_version": "1.0.0", \n            "type": "tool", \n            "user_outputs": []\n        }\n    }\n}\n', add_pja=False) dict[source]
load_random_x2_workflow(name: str) dict[source]
load_workflow_from_resource(name: str, filename: str | None = None) dict[source]
simple_workflow(name: str, **create_kwds) str[source]
import_workflow_from_path_raw(from_path: str, object_id: str | None = None) Response[source]
import_workflow_from_path(from_path: str, object_id: str | None = None) str[source]
create_workflow(workflow: Dict[str, Any], **create_kwds) str[source]
create_workflow_response(workflow: Dict[str, Any], **create_kwds) Response[source]
upload_yaml_workflow(yaml_content: str | PathLike | dict, **kwds) str[source]
wait_for_invocation(workflow_id: str | None, invocation_id: str, timeout: int | float = 60, assert_ok: bool = True) str[source]
workflow_invocations(workflow_id: str, include_nested_invocations=True) List[Dict[str, Any]][source]
cancel_invocation(invocation_id: str)[source]
history_invocations(history_id: str, include_nested_invocations: bool = True) List[Dict[str, Any]][source]
wait_for_history_workflows(history_id: str, assert_ok: bool = True, timeout: int | float = 60, expected_invocation_count: int | None = None) None[source]
wait_for_workflow(workflow_id: str | None, invocation_id: str, history_id: str, assert_ok: bool = True, timeout: int | float = 60) None[source]

Wait for a workflow invocation to be fully scheduled and then for the history to complete.

get_invocation(invocation_id, step_details=False)[source]
download_invocation_to_store(invocation_id, include_files=False, extension='tgz')[source]
download_invocation_to_uri(invocation_id, target_uri, extension='tgz')[source]
create_invocation_from_store_raw(history_id: str, store_dict: Dict[str, Any] | None = None, store_path: str | None = None, model_store_format: str | None = None) Response[source]
create_invocation_from_store(history_id: str, store_dict: Dict[str, Any] | None = None, store_path: str | None = None, model_store_format: str | None = None) Response[source]
validate_biocompute_object(bco, expected_schema_version='https://w3id.org/ieee/ieee-2791-schema/2791object.json')[source]
get_ro_crate(invocation_id, include_files=False)[source]
validate_invocation_crate_directory(crate_directory)[source]
invoke_workflow_raw(workflow_id: str, request: dict, assert_ok: bool = False) Response[source]
invoke_workflow(workflow_id: str, history_id: str | None = None, inputs: dict | None = None, request: dict | None = None, inputs_by: str = 'step_index') Response[source]
invoke_workflow_and_assert_ok(workflow_id: str, history_id: str | None = None, inputs: dict | None = None, request: dict | None = None, inputs_by: str = 'step_index') str[source]
invoke_workflow_and_wait(workflow_id: str, history_id: str | None = None, inputs: dict | None = None, request: dict | None = None, assert_ok: bool = True) Response[source]
workflow_report_json(workflow_id: str, invocation_id: str) dict[source]
workflow_report_pdf(workflow_id: str, invocation_id: str) Response[source]
download_workflow(workflow_id: str, style: str | None = None, history_id: str | None = None) dict[source]
invocation_to_request(invocation_id: str)[source]
set_tags(workflow_id: str, tags: List[str]) None[source]
update_workflow(workflow_id: str, workflow_object: dict) Response[source]
refactor_workflow(workflow_id: str, actions: list, dry_run: bool | None = None, style: str | None = None) Response[source]
export_for_update(workflow_id)[source]
run_workflow(has_workflow: str | PathLike | dict, test_data: str | dict | None = None, history_id: str | None = None, wait: bool = True, source_type: str | None = None, jobs_descriptions=None, expected_response: int = 200, assert_ok: bool = True, client_convert: bool | None = None, extra_invocation_kwds: Dict[str, Any] | None = None, round_trip_format_conversion: bool = False, invocations: int = 1, raw_yaml: bool = False)[source]

High-level wrapper around the workflow API, etc. to invoke Format 2 workflows; see the sketch after this class summary.

rerun(run_jobs_summary: RunJobsSummary, wait: bool = True, assert_ok: bool = True) RunJobsSummary[source]
dump_workflow(workflow_id, style=None)[source]
workflow_inputs(workflow_id: str) Dict[str, Dict[str, Any]][source]
build_ds_map(workflow_id: str, label_map: Dict[str, Any]) str[source]
setup_workflow_run(workflow: Dict[str, Any] | None = None, inputs_by: str = 'step_id', history_id: str | None = None, workflow_id: str | None = None) Tuple[Dict[str, Any], str, str][source]
get_invocation_jobs(invocation_id: str) List[Dict[str, Any]][source]
wait_for_invocation_and_jobs(history_id: str, workflow_id: str, invocation_id: str, assert_ok: bool = True) None[source]
index(show_shared: bool | None = None, show_published: bool | None = None, sort_by: str | None = None, sort_desc: bool | None = None, limit: int | None = None, offset: int | None = None, search: str | None = None, skip_step_counts: bool | None = None)[source]
index_ids(show_shared: bool | None = None, show_published: bool | None = None, sort_by: str | None = None, sort_desc: bool | None = None, limit: int | None = None, offset: int | None = None, search: str | None = None)[source]
share_with_user(workflow_id: str, user_id_or_email: str)[source]
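
A minimal run_workflow sketch with an inline Format 2 (gxformat2) workflow; the workflow content and test data are illustrative:

    workflow_yaml = """
    class: GalaxyWorkflow
    inputs:
      input1: data
    steps:
      first_cat:
        tool_id: cat1
        in:
          input1: input1
    """
    summary = workflow_populator.run_workflow(
        workflow_yaml,
        test_data={"input1": "hello world"},
        history_id=history_id,
    )
    print(summary.invocation_id, len(summary.jobs))
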
class galaxy_test.base.populators.RunJobsSummary(history_id, workflow_id, invocation_id, inputs, jobs, invocation, workflow_request)[source]

Bases: tuple

history_id: str

Alias for field number 0

workflow_id: str

Alias for field number 1

invocation_id: str

Alias for field number 2

inputs: dict

Alias for field number 3

jobs: list

Alias for field number 4

invocation: dict

Alias for field number 5

workflow_request: dict

Alias for field number 6

jobs_for_tool(tool_id)[source]
class galaxy_test.base.populators.WorkflowPopulator(galaxy_interactor)[source]

Bases: GalaxyInteractorHttpMixin, BaseWorkflowPopulator, ImporterGalaxyInterface

__init__(galaxy_interactor)[source]
galaxy_interactor: ApiTestInteractor
import_workflow(workflow, **kwds) Dict[str, Any][source]

Import a workflow via POST /api/workflows or comparable interface into Galaxy.

import_tool(tool) Dict[str, Any][source]

Import a tool via POST /api/dynamic_tools or comparable interface into Galaxy.

build_module(step_type: str, content_id: str | None = None, inputs: Dict[str, Any] | None = None)[source]
scaling_workflow_yaml(**kwd)[source]
make_public(workflow_id: str) dict[source]
class galaxy_test.base.populators.CwlPopulator(dataset_populator: DatasetPopulator, workflow_populator: WorkflowPopulator)[source]

Bases: object

__init__(dataset_populator: DatasetPopulator, workflow_populator: WorkflowPopulator)[source]
get_conformance_test(version: str, doc: str)[source]
run_cwl_job(artifact: str, job_path: str | None = None, job: Dict | None = None, test_data_directory: str | None = None, history_id: str | None = None, assert_ok: bool = True) CwlRun[source]
Parameters:

artifact – CWL tool id, or (absolute or relative) path to a CWL tool or workflow file

run_conformance_test(version: str, doc: str)[source]
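
A hypothetical sketch of running a CWL tool through the populator (the paths and output name are placeholders):

    run = cwl_populator.run_cwl_job(
        "path/to/cat-tool.cwl",
        job={"file1": {"class": "File", "path": "path/to/whale.txt"}},
    )
    run.wait()
    output = run.get_output_as_object("output")
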
class galaxy_test.base.populators.LibraryPopulator(galaxy_interactor)[source]

Bases: object

__init__(galaxy_interactor)[source]
get_libraries()[source]
new_private_library(name)[source]
create_from_store_raw(payload: Dict[str, Any]) Response[source]
create_from_store(store_dict: Dict[str, Any] | None = None, store_path: str | None = None) List[Dict[str, Any]][source]
new_library(name)[source]
fetch_single_url_to_folder(file_type='auto', assert_ok=True)[source]
get_permissions(library_id, scope: str | None = 'current', is_library_access: bool | None = False, page: int | None = 1, page_limit: int | None = 1000, q: str | None = None, admin: bool | None = True)[source]
set_permissions(library_id, role_id=None)[source]

Legacy way of setting permissions.

set_permissions_with_action(library_id, role_id=None, action=None)[source]
set_access_permission(library_id, role_id, action=None)[source]
set_add_permission(library_id, role_id, action=None)[source]
set_manage_permission(library_id, role_id, action=None)[source]
set_modify_permission(library_id, role_id, action=None)[source]
user_email()[source]
user_private_role_id()[source]
create_dataset_request(library, **kwds)[source]
new_library_dataset(name, **create_dataset_kwds)[source]
wait_on_library_dataset(library_id, dataset_id)[source]
raw_library_contents_create(library_id, payload, files=None)[source]
show_ld_raw(library_id: str, library_dataset_id: str) Response[source]
show_ld(library_id: str, library_dataset_id: str) Dict[str, Any][source]
show_ldda(ldda_id)[source]
new_library_dataset_in_private_library(library_name='private_dataset', wait=True)[source]
get_library_contents(library_id: str) List[Dict[str, Any]][source]
get_library_contents_with_path(library_id: str, path: str) Dict[str, Any][source]
setup_fetch_to_folder(test_name)[source]
class galaxy_test.base.populators.BaseDatasetCollectionPopulator[source]

Bases: object

dataset_populator: BaseDatasetPopulator
create_list_from_pairs(history_id, pairs, name='Dataset Collection from pairs')[source]
nested_collection_identifiers(history_id: str, collection_type)[source]
create_nested_collection(history_id, collection_type, name=None, collection=None, element_identifiers=None)[source]

Create a nested collection, either from collection or using collection_type.

example_list_of_pairs(history_id: str) str[source]
create_list_of_pairs_in_history(history_id, **kwds)[source]
create_list_of_list_in_history(history_id: str, **kwds)[source]
create_pair_in_history(history_id: str, wait: bool = False, **kwds)[source]
create_list_in_history(history_id: str, wait: bool = False, **kwds)[source]
upload_collection(history_id: str, collection_type, elements, wait: bool = False, **kwds)[source]
create_list_payload(history_id: str, **kwds)[source]
create_pair_payload(history_id: str, **kwds)[source]
wait_for_fetched_collection(fetch_response: Dict[str, Any] | Response)[source]
pair_identifiers(history_id: str, contents=None, wait: bool = False)[source]
list_identifiers(history_id: str, contents=None)[source]
wait_for_dataset_collection(create_payload: dict, assert_ok: bool = False, timeout: int | float = 60) None[source]
class galaxy_test.base.populators.DatasetCollectionPopulator(galaxy_interactor: ApiTestInteractor)[source]

Bases: BaseDatasetCollectionPopulator

__init__(galaxy_interactor: ApiTestInteractor)[source]
dataset_populator: BaseDatasetPopulator
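
A sketch of creating a simple list collection (passing contents through **kwds is an assumption based on common usage):

    collection_populator = DatasetCollectionPopulator(self.galaxy_interactor)
    create_response = collection_populator.create_list_in_history(
        history_id, contents=["a", "b", "c"], wait=True
    )
    hdca = collection_populator.wait_for_fetched_collection(create_response.json())
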
galaxy_test.base.populators.load_data_dict(history_id: str, test_data: Dict[str, Any], dataset_populator: BaseDatasetPopulator, dataset_collection_populator: BaseDatasetCollectionPopulator) Tuple[Dict[str, Any], Dict[str, Any], bool][source]

Load a dictionary as inputs to a workflow (test data focused).

galaxy_test.base.populators.stage_inputs(galaxy_interactor: ApiTestInteractor, history_id: str, job: Dict[str, Any], use_path_paste: bool = True, use_fetch_api: bool = True, to_posix_lines: bool = True, tool_or_workflow: typing_extensions.Literal['tool', 'workflow'] = 'workflow', job_dir: str | None = None) Tuple[Dict[str, Any], List[Dict[str, Any]]][source]

Alternative to load_data_dict that uses production-style workflow inputs.
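
A hypothetical sketch of staging a job's inputs the way production workflow runs would (the job dictionary is illustrative):

    job = {"input1": {"class": "File", "path": "1.fasta"}}
    workflow_inputs, datasets = stage_inputs(galaxy_interactor, history_id, job)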

galaxy_test.base.populators.stage_rules_example(galaxy_interactor: ApiTestInteractor, history_id: str, example: Dict[str, Any]) Dict[str, Any][source]

Wrapper around stage_inputs for staging collections defined by rules spec DSL.

galaxy_test.base.populators.wait_on_state(state_func: Callable, desc: str = 'state', skip_states=None, ok_states=None, assert_ok: bool = False, timeout: int | float = 60) str[source]
class galaxy_test.base.populators.DescribeToolExecutionOutput(dataset_populator: BaseDatasetPopulator, history_id: str, hda_id: str)[source]

Bases: object

__init__(dataset_populator: BaseDatasetPopulator, history_id: str, hda_id: str)[source]
property details: Dict[str, Any]
property contents: str
with_contents(expected_contents: str) Self[source]
with_contents_stripped(expected_contents: str) Self[source]
containing(expected_contents: str) Self[source]
with_file_ext(expected_ext: str) Self[source]
property json: Any
with_json(expected_json: Any) Self[source]
assert_contains(expected_contents: str) Self[source]
assert_has_contents(expected_contents: str) Self[source]
class galaxy_test.base.populators.DescribeToolExecutionOutputCollection(dataset_populator: BaseDatasetPopulator, history_id: str, hdca_id: str)[source]

Bases: object

__init__(dataset_populator: BaseDatasetPopulator, history_id: str, hdca_id: str)[source]
property details: Dict[str, Any]
property elements: List[Dict[str, Any]]
with_n_elements(n: int) Self[source]
with_element_dict(index: str | int) Dict[str, Any][source]
with_dataset_element(index: str | int) DescribeToolExecutionOutput[source]
named(expected_name: str) Self[source]
assert_has_dataset_element(index: str | int) DescribeToolExecutionOutput[source]
class galaxy_test.base.populators.DescribeJob(dataset_populator: BaseDatasetPopulator, history_id: str, job_id: str)[source]

Bases: object

__init__(dataset_populator: BaseDatasetPopulator, history_id: str, job_id: str)[source]
property final_details: Dict[str, Any]
property final_state: str
with_final_state(expected_state: str) Self[source]
property with_single_output: DescribeToolExecutionOutput
with_output(output: str | int) DescribeToolExecutionOutput[source]
assert_has_output(output: str | int) DescribeToolExecutionOutput[source]
property assert_has_single_output: DescribeToolExecutionOutput
class galaxy_test.base.populators.DescribeFailure(response: Response)[source]

Bases: object

__init__(response: Response)[source]
with_status_code(code: int) Self[source]
with_error_containing(message: str) Self[source]
class galaxy_test.base.populators.RequiredTool(dataset_populator: BaseDatasetPopulator, tool_id: str, default_history_id: str | None)[source]

Bases: object

__init__(dataset_populator: BaseDatasetPopulator, tool_id: str, default_history_id: str | None)[source]
property execute: DescribeToolExecution
class galaxy_test.base.populators.DescribeToolInputs(input_format: str)[source]

Bases: object

__init__(input_format: str)[source]
any(inputs: Dict[str, Any]) Self[source]
flat(inputs: Dict[str, Any]) Self[source]
nested(inputs: Dict[str, Any]) Self[source]
property when: Self
class galaxy_test.base.populators.DescribeToolExecution(dataset_populator: BaseDatasetPopulator, tool_id: str)[source]

Bases: object

__init__(dataset_populator: BaseDatasetPopulator, tool_id: str)[source]
in_history(has_history_id: str | TargetHistory) Self[source]
with_inputs(inputs: DescribeToolInputs | Dict[str, Any]) Self[source]
with_nested_inputs(inputs: Dict[str, Any]) Self[source]
assert_has_n_jobs(n: int) Self[source]
assert_creates_n_implicit_collections(n: int) Self[source]
assert_creates_implicit_collection(index: str | int) DescribeToolExecutionOutputCollection[source]
property assert_has_single_job: DescribeJob
assert_has_job(job_index: int = 0) DescribeJob[source]
property that_fails: DescribeFailure
property assert_fails: DescribeFailure
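
A sketch of the fluent execution-description API defined by the classes above (the tool id, inputs, and expected contents are illustrative):

    execution = dataset_populator.describe_tool_execution("cat1")
    execution.in_history(history_id).with_inputs({"input1": {"src": "hda", "id": hda_id}})
    execution.assert_has_single_job.with_single_output.with_contents_stripped("hello")
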
class galaxy_test.base.populators.GiHttpMixin[source]

Bases: object

Mixin for adapting Galaxy testing populator helpers to bioblend.

class galaxy_test.base.populators.GiDatasetPopulator(gi)[source]

Bases: GiHttpMixin, BaseDatasetPopulator

Implementation of BaseDatasetPopulator backed by bioblend.

__init__(gi)[source]

Construct a dataset populator from a bioblend GalaxyInstance.

class galaxy_test.base.populators.GiDatasetCollectionPopulator(gi)[source]

Bases: GiHttpMixin, BaseDatasetCollectionPopulator

Implementation of BaseDatasetCollectionPopulator backed by bioblend.

__init__(gi)[source]

Construct a dataset collection populator from a bioblend GalaxyInstance.

class galaxy_test.base.populators.GiWorkflowPopulator(gi)[source]

Bases: GiHttpMixin, BaseWorkflowPopulator

Implementation of BaseWorkflowPopulator backed by bioblend.

__init__(gi)[source]

Construct a workflow populator from a bioblend GalaxyInstance.
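
A sketch of wiring the bioblend-backed populators (the URL and key are placeholders):

    import bioblend.galaxy

    gi = bioblend.galaxy.GalaxyInstance("https://galaxy.example.org", key="<api key>")
    dataset_populator = GiDatasetPopulator(gi)
    history_id = dataset_populator.new_history(name="bioblend-backed test history")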

class galaxy_test.base.populators.TargetHistory(dataset_populator: DatasetPopulator, dataset_collection_populator: DatasetCollectionPopulator, history_id: str)[source]

Bases: object

__init__(dataset_populator: DatasetPopulator, dataset_collection_populator: DatasetCollectionPopulator, history_id: str)[source]
property id: str
with_dataset(content: str, named: str | None = None) HasSrcDict[source]
with_pair(contents: List[str] | None = None) HasSrcDict[source]
with_list(contents: List[str] | List[Tuple[str, str]] | None = None) HasSrcDict[source]
with_example_list_of_pairs() HasSrcDict[source]
execute(tool_id: str) DescribeToolExecution[source]
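
A sketch of the TargetHistory convenience wrapper (the tool id and content are illustrative):

    history = TargetHistory(dataset_populator, dataset_collection_populator, history_id)
    hello = history.with_dataset("hello world")
    history.execute("cat1").with_inputs({"input1": hello.src_dict}).assert_has_single_job
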
class galaxy_test.base.populators.SrcDict[source]

Bases: TypedDict

src: str
id: str
class galaxy_test.base.populators.HasSrcDict(src_type: str, api_object: str | Dict[str, Any])[source]

Bases: object

__init__(src_type: str, api_object: str | Dict[str, Any])[source]
api_object: str | Dict[str, Any]
property id: str
property src_dict: SrcDict
property to_dict
galaxy_test.base.populators.wait_on(function: Callable, desc: str, timeout: int | float = 60)[source]
galaxy_test.base.populators.wait_on_assertion(function: Callable, desc: str, timeout: int | float = 60)[source]

galaxy_test.base.rules_test_data module

galaxy_test.base.rules_test_data.check_example_1(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_2(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_3(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_4(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_5(hdca, dataset_populator)[source]
galaxy_test.base.rules_test_data.check_example_6(hdca, dataset_populator)[source]

galaxy_test.base.testcase module

galaxy_test.base.testcase.host_port_and_url(test_driver: Any | None) Tuple[str, str | None, str][source]
class galaxy_test.base.testcase.FunctionalTestCase[source]

Bases: TestCase

Base class for tests targeting actual Galaxy servers.

Subclasses should override galaxy_driver_class if a Galaxy server needs to be launched to run the test; otherwise this base class assumes a server is already running.

galaxy_driver_class: type | None = None
host: str
port: str | None
url: str
keepOutdir: str
test_data_resolver: TestDataResolver
setUp() None[source]
classmethod setUpClass()[source]

Configure and start Galaxy for a test.

classmethod tearDownClass()[source]

Shut down the Galaxy server and clean up the temp directory.

get_filename(filename: str) str[source]
pytestmark = [Mark(name='usefixtures', args=('embedded_driver',), kwargs={})]
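
A hypothetical sketch of a subclass that launches its own server; the GalaxyTestDriver import from galaxy_test.driver.driver_util is an assumption, and tests against an already-running server can leave galaxy_driver_class as None:

    from galaxy_test.base.testcase import FunctionalTestCase
    from galaxy_test.driver.driver_util import GalaxyTestDriver

    class MyFunctionalTestCase(FunctionalTestCase):
        # assumption: a driver class that launches an embedded Galaxy for the test
        galaxy_driver_class = GalaxyTestDriver

        def test_server_responds(self):
            assert self.url.startswith("http")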

galaxy_test.base.uses_shed module

galaxy_test.base.workflow_fixtures module