
galaxy.objectstore package

The objectstore package provides an abstraction for storing blobs of data for use in Galaxy.

All providers ensure that data can be accessed on the filesystem for running tools.

class galaxy.objectstore.ObjectStore(config, **kwargs)[source]

Bases: object

ObjectStore abstract interface.

FIELD DESCRIPTIONS (these apply to all the methods in this class):

Parameters:
  • obj (StorableObject) – A Galaxy object with an assigned database ID accessible via the .id attribute.
  • base_dir (string) – A key in self.extra_dirs corresponding to the base directory in which this object should be created, or None to specify the default directory.
  • dir_only (boolean) – If True, check only the path where the file identified by obj should be located, not the dataset itself. This option applies to the extra_dir argument as well.
  • extra_dir (string) – Append extra_dir to the directory structure where the dataset identified by obj should be located. (e.g., 000/extra_dir/obj.id). Valid values include ‘job_work’ (defaulting to config.jobs_directory = ‘$GALAXY_ROOT/database/jobs_directory’); ‘temp’ (defaulting to config.new_file_path = ‘$GALAXY_ROOT/database/tmp’).
  • extra_dir_at_root (boolean) – Applicable only if extra_dir is set. If True, the extra_dir argument is placed at root of the created directory structure rather than at the end (e.g., extra_dir/000/obj.id vs. 000/extra_dir/obj.id)
  • alt_name (string) – Use this name as the alternative name for the created dataset rather than the default.
  • obj_dir (boolean) – Append a subdirectory named with the object’s ID (e.g. 000/obj.id)
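The interaction of these path arguments can be illustrated with a small sketch. This is a hypothetical helper, not Galaxy's actual code; the zero-padded `000`-style hashed prefix and the `dataset_<id>.dat` file name follow the example paths given above.

```python
import os

def construct_path(obj_id, base="/galaxy/database/files", extra_dir=None,
                   extra_dir_at_root=False, obj_dir=False, alt_name=None,
                   dir_only=False):
    """Illustrative approximation of how ObjectStore arguments map to paths."""
    # Datasets are sharded into directories of 1000 via a zero-padded prefix.
    rel = "%03d" % (obj_id // 1000)
    if extra_dir is not None:
        # extra_dir goes either before or after the hashed prefix.
        rel = os.path.join(extra_dir, rel) if extra_dir_at_root else os.path.join(rel, extra_dir)
    if obj_dir:
        # Append a subdirectory named after the object's ID.
        rel = os.path.join(rel, str(obj_id))
    path = os.path.join(base, rel)
    if dir_only:
        return path
    # alt_name overrides the default dataset_<id>.dat file name.
    return os.path.join(path, alt_name or "dataset_%d.dat" % obj_id)
```

For example, `construct_path(1)` yields `/galaxy/database/files/000/dataset_1.dat`, matching the DiskObjectStore doctest below, while `extra_dir_at_root=True` moves the extra directory in front of the hashed prefix.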
__init__(config, **kwargs)[source]
Parameters:config (object) –

An object, most likely populated from galaxy/config.ini, having the following attributes:

  • object_store_check_old_style (only used by the DiskObjectStore subclass)
  • jobs_directory – Each job is given a unique empty directory as its current working directory. This option defines in what parent directory those directories will be created.
  • new_file_path – Used to set the ‘temp’ extra_dir.
shutdown()[source]

Close any connections for this ObjectStore.

exists(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None)[source]

Return True if the object identified by obj exists, False otherwise.

file_ready(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Check if a file corresponding to a dataset is ready to be used.

Return True if so, False otherwise.

create(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Mark the object (obj) as existing in the store, but with no content.

This method will create a proper directory structure for the file if the directory does not already exist.

empty(obj, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Test if the object identified by obj has content.

If the object does not exist, raise ObjectNotFound.

size(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Return size of the object identified by obj.

If the object does not exist, return 0.

delete(obj, entire_dir=False, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Delete the object identified by obj.

Parameters:entire_dir (boolean) – If True, delete the entire directory pointed to by extra_dir. For safety reasons, this option applies only in conjunction with the extra_dir or obj_dir options.
get_data(obj, start=0, count=-1, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Fetch count bytes of data offset by start bytes using obj.id.

If the object does not exist, raise ObjectNotFound.

Parameters:
  • start (int) – Set the position to start reading the dataset file
  • count (int) – Read at most count bytes from the dataset
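The start/count semantics mirror an ordinary seek-and-read on a file. A minimal sketch against a local path (a hypothetical helper, not Galaxy code):

```python
def read_chunk(path, start=0, count=-1):
    """Read at most `count` bytes beginning at offset `start`.

    A negative count reads to the end of the file, matching the
    documented default of count=-1.
    """
    with open(path, "rb") as fh:
        fh.seek(start)
        return fh.read() if count < 0 else fh.read(count)
```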
get_filename(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Get the expected filename with absolute path for object with id obj.id.

This can be used to access the contents of the object.

update_from_file(obj, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False, file_name=None, create=False)[source]

Inform the store that the file associated with obj.id has been updated.

If file_name is provided, update from that file instead of the default. If the object does not exist, raise ObjectNotFound.

Parameters:
  • file_name (string) – Use file pointed to by file_name as the source for updating the dataset identified by obj
  • create (boolean) – If True and the default dataset does not exist, create it first.
get_object_url(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Return the URL for direct access if supported, otherwise return None.

Note: be careful not to bypass dataset security with this.

get_store_usage_percent()[source]

Return the percentage indicating how full the store is.

classmethod parse_xml(clazz, config_xml)[source]

Parse an XML description of a configuration for this object store.

Return a configuration dictionary (such as would correspond to the YAML configuration) for the object store.

classmethod from_xml(clazz, config, config_xml, **kwd)[source]
to_dict()[source]
class galaxy.objectstore.DiskObjectStore(config, config_dict)[source]

Bases: galaxy.objectstore.ObjectStore

Standard Galaxy object store.

Stores objects in files under a specific directory on disk.

>>> from galaxy.util.bunch import Bunch
>>> import tempfile
>>> file_path=tempfile.mkdtemp()
>>> obj = Bunch(id=1)
>>> s = DiskObjectStore(Bunch(umask=0o077, jobs_directory=file_path, new_file_path=file_path, object_store_check_old_style=False), dict(files_dir=file_path))
>>> s.create(obj)
>>> s.exists(obj)
True
>>> assert s.get_filename(obj) == file_path + '/000/dataset_1.dat'
store_type = 'disk'
__init__(config, config_dict)[source]
Parameters:
  • config (object) –

    An object, most likely populated from galaxy/config.ini, having the same attributes needed by ObjectStore plus:

    • file_path – Default directory to store objects to disk in.
    • umask – the permission bits for newly created files.
  • file_path (str) – Override for the config.file_path value.
  • extra_dirs (dict) – Keys are string, values are directory paths.
classmethod parse_xml(clazz, config_xml)[source]
to_dict()[source]
exists(obj, **kwargs)[source]

Override ObjectStore’s stub and check on disk.

create(obj, **kwargs)[source]

Override ObjectStore’s stub by creating any files and folders on disk.

empty(obj, **kwargs)[source]

Override ObjectStore’s stub by checking file size on disk.

size(obj, **kwargs)[source]

Override ObjectStore’s stub by returning the file size on disk.

Returns 0 if the object does not exist yet or another error occurs.

delete(obj, entire_dir=False, **kwargs)[source]

Override ObjectStore’s stub; delete the file or folder on disk.

get_data(obj, start=0, count=-1, **kwargs)[source]

Override ObjectStore’s stub; retrieve data directly from disk.

get_filename(obj, **kwargs)[source]

Override ObjectStore’s stub.

If object_store_check_old_style is set to True in config then the root path is checked first.

update_from_file(obj, file_name=None, create=False, **kwargs)[source]

create parameter is not used in this implementation.

get_object_url(obj, **kwargs)[source]

Override ObjectStore’s stub.

Returns None, we have no URLs.

get_store_usage_percent()[source]

Override ObjectStore’s stub by returning the percentage of storage used.

class galaxy.objectstore.NestedObjectStore(config, config_xml=None)[source]

Bases: galaxy.objectstore.ObjectStore

Base for ObjectStores that use other ObjectStores.

Example: DistributedObjectStore, HierarchicalObjectStore

__init__(config, config_xml=None)[source]

Extend ObjectStore’s constructor.

shutdown()[source]

Shut down each backend.

exists(obj, **kwargs)[source]

Determine if the obj exists in any of the backends.

file_ready(obj, **kwargs)[source]

Determine if the file for obj is ready to be used by any of the backends.

create(obj, **kwargs)[source]

Create a backing file in a random backend.

empty(obj, **kwargs)[source]

For the first backend that has this obj, determine if it is empty.

size(obj, **kwargs)[source]

For the first backend that has this obj, return its size.

delete(obj, **kwargs)[source]

For the first backend that has this obj, delete it.

get_data(obj, **kwargs)[source]

For the first backend that has this obj, get data from it.

get_filename(obj, **kwargs)[source]

For the first backend that has this obj, get its filename.

update_from_file(obj, **kwargs)[source]

For the first backend that has this obj, update it from the given file.

get_object_url(obj, **kwargs)[source]

For the first backend that has this obj, get its URL.

class galaxy.objectstore.DistributedObjectStore(config, config_dict, fsmon=False)[source]

Bases: galaxy.objectstore.NestedObjectStore

ObjectStore that defers to a list of backends.

When getting objects, the first store in which the object exists is used. When creating objects, they are created in a randomly selected store, but with weighting.
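The weighted random selection used when creating objects can be sketched with the standard library. The backend IDs and weights below are made-up examples, not Galaxy's actual selection code:

```python
import random

def pick_backend(weighted_backends, rng=random):
    """Pick a backend ID with probability proportional to its weight.

    `weighted_backends` maps backend id -> integer weight, analogous to
    the weight attribute in a distributed object store configuration.
    """
    ids = list(weighted_backends)
    weights = [weighted_backends[i] for i in ids]
    return rng.choices(ids, weights=weights, k=1)[0]
```

With weights `{"files1": 3, "files2": 1}`, roughly three quarters of new objects land in `files1`.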

store_type = 'distributed'
__init__(config, config_dict, fsmon=False)[source]
Parameters:
  • config (object) –

    An object, most likely populated from galaxy/config.ini, having the same attributes needed by NestedObjectStore plus:

    • distributed_object_store_config_file
  • fsmon (bool) – If True, monitor the file system for free space, removing backends when they get too full.
classmethod parse_xml(clazz, config_xml, legacy=False)[source]
classmethod from_xml(clazz, config, config_xml, fsmon=False)[source]
to_dict()[source]
shutdown()[source]

Shut down. Kill the free space monitor if there is one.

create(obj, **kwargs)[source]

The only method in which obj.object_store_id may be None.

class galaxy.objectstore.HierarchicalObjectStore(config, config_dict, fsmon=False)[source]

Bases: galaxy.objectstore.NestedObjectStore

ObjectStore that defers to a list of backends.

When getting objects, the first store where the object exists is used. When creating objects, only the first store is used.
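The lookup strategy shared by the nested stores can be sketched as a first-match scan over the ordered backends. The stand-in backend objects here are hypothetical; only the scan order mirrors the documented behavior:

```python
def first_backend_with(obj_id, backends):
    """Return the first backend (in configured order) containing obj_id.

    `backends` is an ordered list of objects exposing an exists(obj_id)
    method, mirroring how NestedObjectStore subclasses scan children.
    """
    for backend in backends:
        if backend.exists(obj_id):
            return backend
    return None
```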

store_type = 'hierarchical'
__init__(config, config_dict, fsmon=False)[source]

The default constructor. Extends NestedObjectStore.

classmethod parse_xml(clazz, config_xml)[source]
to_dict()[source]
exists(obj, **kwargs)[source]

Check all child object stores.

create(obj, **kwargs)[source]

Call the primary object store.

galaxy.objectstore.type_to_object_store_class(store, fsmon=False)[source]
galaxy.objectstore.build_object_store_from_config(config, fsmon=False, config_xml=None, config_dict=None)[source]

Invoke the appropriate object store.

Will use the object_store_config_file attribute of the config object to configure a new object store from the specified XML file.

Or you can specify the object store type in the object_store attribute of the config object. Currently ‘disk’, ‘s3’, ‘swift’, ‘distributed’, ‘hierarchical’, ‘irods’, and ‘pulsar’ are supported values.
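The documented precedence can be sketched as follows. The attribute names come from the text above; the fallback to 'disk' when neither attribute is set is an assumption for illustration, not confirmed behavior:

```python
def resolve_store_type(config):
    """Decide which object store type to build from a config object.

    An object_store_config_file attribute wins; otherwise the
    object_store attribute names the type. The 'disk' default here is
    an assumption, not documented behavior.
    """
    if getattr(config, "object_store_config_file", None):
        return "from_config_file"
    return getattr(config, "object_store", "disk")
```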

galaxy.objectstore.local_extra_dirs(func)[source]

Non-local plugin decorator using local directories for the extra_dirs (job_work and temp).

galaxy.objectstore.convert_bytes(bytes)[source]

A helper function used for pretty printing disk usage.
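Such a helper typically divides by powers of 1024 until the value falls under the next unit. This sketch is an assumption about the behavior and output format, not the actual implementation:

```python
def convert_bytes(n):
    """Pretty-print a byte count using binary units (illustrative sketch)."""
    units = ["B", "KB", "MB", "GB", "TB"]
    size = float(n)
    for unit in units:
        # Stop once the value fits in this unit, or at the largest unit.
        if size < 1024.0 or unit == units[-1]:
            return "%.1f %s" % (size, unit)
        size /= 1024.0
```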

galaxy.objectstore.config_to_dict(config)[source]

Dict-ify the portion of a config object consumed by the ObjectStore class and its subclasses.

class galaxy.objectstore.ObjectStorePopulator(app)[source]

Bases: object

Small helper for interacting with the object store and making sure all datasets from a job end up with the same object_store_id.

__init__(app)[source]
set_object_store_id(data)[source]

Submodules

galaxy.objectstore.azure_blob module

Object Store plugin for the Microsoft Azure Block Blob Storage system

galaxy.objectstore.azure_blob.parse_config_xml(config_xml)[source]
class galaxy.objectstore.azure_blob.AzureBlobObjectStore(config, config_dict)[source]

Bases: galaxy.objectstore.ObjectStore

Object store that stores objects as blobs in an Azure Blob Container. A local cache exists that is used as an intermediate location for files between Galaxy and Azure.

store_type = 'azure_blob'
__init__(config, config_dict)[source]
to_dict()[source]
classmethod parse_xml(clazz, config_xml)[source]
exists(obj, **kwargs)[source]
file_ready(obj, **kwargs)[source]

A helper method that checks if a file corresponding to a dataset is ready and available to be used. Return True if so, False otherwise.

create(obj, **kwargs)[source]
empty(obj, **kwargs)[source]
size(obj, **kwargs)[source]
delete(obj, entire_dir=False, **kwargs)[source]
get_data(obj, start=0, count=-1, **kwargs)[source]
get_filename(obj, **kwargs)[source]
update_from_file(obj, file_name=None, create=False, **kwargs)[source]
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]

galaxy.objectstore.cloud module

Object Store plugin for Cloud storage.

class galaxy.objectstore.cloud.Cloud(config, config_dict)[source]

Bases: galaxy.objectstore.ObjectStore, galaxy.objectstore.s3.CloudConfigMixin

Object store that stores objects as items in cloud storage. A local cache exists that is used as an intermediate location for files between Galaxy and the cloud storage.

store_type = 'cloud'
__init__(config, config_dict)[source]
classmethod parse_xml(clazz, config_xml)[source]
to_dict()[source]
file_ready(obj, **kwargs)[source]

A helper method that checks if a file corresponding to a dataset is ready and available to be used. Return True if so, False otherwise.

exists(obj, **kwargs)[source]
create(obj, **kwargs)[source]
empty(obj, **kwargs)[source]
size(obj, **kwargs)[source]
delete(obj, entire_dir=False, **kwargs)[source]
get_data(obj, start=0, count=-1, **kwargs)[source]
get_filename(obj, **kwargs)[source]
update_from_file(obj, file_name=None, create=False, **kwargs)[source]
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]

galaxy.objectstore.pithos module

galaxy.objectstore.pithos.parse_config_xml(config_xml)[source]

Parse and validate config_xml, returning a dict for convenience.

Parameters:config_xml (xml.etree.ElementTree.Element) – root of the XML subtree
Returns:(dict) according to syntax
Raises:various XML parse errors

class galaxy.objectstore.pithos.PithosObjectStore(config, config_dict)[source]

Bases: galaxy.objectstore.ObjectStore

Object store that stores objects as items in a Pithos+ container. Cache is ignored for the time being.

store_type = 'pithos'
__init__(config, config_dict)[source]
classmethod parse_xml(clazz, config_xml)[source]
to_dict()[source]
exists(obj, **kwargs)[source]

Check if the file exists; fix it if the file is in cache but not on Pithos+.

Returns:whether the file exists remotely or in cache

create(obj, **kwargs)[source]

Touch a file (i.e. create it empty) if it does not exist.

empty(obj, **kwargs)[source]
Returns:whether the object has content
Raises:ObjectNotFound
size(obj, **kwargs)[source]
Returns:The size of the object, or 0 if it does not exist (sorry, not our fault: the ObjectStore interface is like that sometimes)
delete(obj, **kwargs)[source]

Delete the object.

Returns:whether the object was deleted

get_data(obj, start=0, count=-1, **kwargs)[source]

Fetch (e.g. download) data.

Parameters:
  • start – offset at which to start reading
  • count – fetch at most this many bytes; fetch all if negative

get_filename(obj, **kwargs)[source]

Get the expected filename with absolute path

update_from_file(obj, **kwargs)[source]

Update the store when a file is updated

get_object_url(obj, **kwargs)[source]
Returns:URL for direct access, None if no object
get_store_usage_percent()[source]
Returns:percentage indicating how full the store is

galaxy.objectstore.pulsar module

class galaxy.objectstore.pulsar.PulsarObjectStore(config, config_xml)[source]

Bases: galaxy.objectstore.ObjectStore

Object store implementation that delegates to a remote Pulsar server.

This may be more aspirational than practical for now. Ideally, Galaxy would evolve to the point that a handler thread could be set up that does not attempt to access the disk files returned by this object store, instead passing them along to Pulsar unmodified. That change, along with this implementation and Pulsar job destinations, would then allow Galaxy to fully manage jobs on remote servers with completely different mount points.

This implementation should be considered beta and may be dropped from Galaxy at some future point or significantly modified.

__init__(config, config_xml)[source]
exists(obj, **kwds)[source]
file_ready(obj, **kwds)[source]
create(obj, **kwds)[source]
empty(obj, **kwds)[source]
size(obj, **kwds)[source]
delete(obj, **kwds)[source]
get_data(obj, **kwds)[source]
get_filename(obj, **kwds)[source]
update_from_file(obj, **kwds)[source]
get_store_usage_percent()[source]
get_object_url(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None)[source]
shutdown()[source]

galaxy.objectstore.rods module

Object Store plugin for the Integrated Rule-Oriented Data Store (iRODS)

The module is named rods to avoid conflicting with the PyRods module, irods

class galaxy.objectstore.rods.IRODSObjectStore(config, file_path=None, extra_dirs=None)[source]

Bases: galaxy.objectstore.DiskObjectStore

Galaxy object store based on iRODS

__init__(config, file_path=None, extra_dirs=None)[source]
exists(*args, **kwargs)
create(*args, **kwargs)
empty(*args, **kwargs)
size(obj, **kwargs)[source]
delete(*args, **kwargs)
get_data(*args, **kwargs)
get_filename(*args, **kwargs)
update_from_file(*args, **kwargs)
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]
galaxy.objectstore.rods.rods_connect()[source]

A basic iRODS connection mechanism that connects using the current iRODS environment

galaxy.objectstore.s3 module

Object Store plugin for the Amazon Simple Storage Service (S3)

galaxy.objectstore.s3.parse_config_xml(config_xml)[source]
class galaxy.objectstore.s3.CloudConfigMixin[source]

Bases: object

class galaxy.objectstore.s3.S3ObjectStore(config, config_dict)[source]

Bases: galaxy.objectstore.ObjectStore, galaxy.objectstore.s3.CloudConfigMixin

Object store that stores objects as items in an AWS S3 bucket. A local cache exists that is used as an intermediate location for files between Galaxy and S3.

store_type = 's3'
__init__(config, config_dict)[source]
classmethod parse_xml(clazz, config_xml)[source]
to_dict()[source]
file_ready(obj, **kwargs)[source]

A helper method that checks if a file corresponding to a dataset is ready and available to be used. Return True if so, False otherwise.

exists(obj, **kwargs)[source]
create(obj, **kwargs)[source]
empty(obj, **kwargs)[source]
size(obj, **kwargs)[source]
delete(obj, entire_dir=False, **kwargs)[source]
get_data(obj, start=0, count=-1, **kwargs)[source]
get_filename(obj, **kwargs)[source]
update_from_file(obj, file_name=None, create=False, **kwargs)[source]
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]
class galaxy.objectstore.s3.SwiftObjectStore(config, config_dict)[source]

Bases: galaxy.objectstore.s3.S3ObjectStore

Object store that stores objects as items in a Swift bucket. A local cache exists that is used as an intermediate location for files between Galaxy and Swift.

store_type = 'swift'

galaxy.objectstore.s3_multipart_upload module

Split large files into multiple pieces for upload to S3. This parallelizes the task over available cores using multiprocessing. Code mostly taken from CloudBioLinux.

galaxy.objectstore.s3_multipart_upload.mp_from_ids(s3server, mp_id, mp_keyname, mp_bucketname)[source]

Get the multipart upload from the bucket and multipart IDs.

This allows us to reconstitute a connection to the upload from within multiprocessing functions.

galaxy.objectstore.s3_multipart_upload.transfer_part(s3server, mp_id, mp_keyname, mp_bucketname, i, part)[source]

Transfer a part of a multipart upload. Designed to be run in parallel.

galaxy.objectstore.s3_multipart_upload.multipart_upload(s3server, bucket, s3_key_name, tarball, mb_size)[source]

Upload large files using Amazon’s multipart upload functionality.
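The splitting step can be sketched by computing part boundaries of mb_size megabytes each. This is a hypothetical helper for illustration; the real module also handles the parallel transfer via multiprocessing and boto:

```python
def part_ranges(total_bytes, mb_size):
    """Yield (part_number, offset, length) tuples covering total_bytes.

    Part numbers start at 1, matching S3's multipart upload convention;
    the final part may be shorter than mb_size megabytes.
    """
    chunk = mb_size * 1024 * 1024
    part = 1
    offset = 0
    while offset < total_bytes:
        length = min(chunk, total_bytes - offset)
        yield (part, offset, length)
        part += 1
        offset += length
```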