galaxy.objectstore package

The objectstore package provides an abstraction for storing blobs of data for use in Galaxy.

All providers ensure that data can be accessed on the filesystem for running tools.

class galaxy.objectstore.ObjectStore(config, **kwargs)[source]

Bases: object

ObjectStore abstract interface.

FIELD DESCRIPTIONS (these apply to all the methods in this class):

Parameters:
  • obj (StorableObject) – A Galaxy object with an assigned database ID accessible via the .id attribute.
  • base_dir (string) – A key in self.extra_dirs corresponding to the base directory in which this object should be created, or None to specify the default directory.
  • dir_only (boolean) – If True, check only the path where the file identified by obj should be located, not the dataset itself. This option applies to the extra_dir argument as well.
  • extra_dir (string) – Append extra_dir to the directory structure where the dataset identified by obj should be located. (e.g., 000/extra_dir/obj.id). Valid values include ‘job_work’ (defaulting to config.jobs_directory = ‘$GALAXY_ROOT/database/jobs_directory’); ‘temp’ (defaulting to config.new_file_path = ‘$GALAXY_ROOT/database/tmp’).
  • extra_dir_at_root (boolean) – Applicable only if extra_dir is set. If True, the extra_dir argument is placed at root of the created directory structure rather than at the end (e.g., extra_dir/000/obj.id vs. 000/extra_dir/obj.id)
  • alt_name (string) – Use this name as the alternative name for the created dataset rather than the default.
  • obj_dir (boolean) – Append a subdirectory named with the object’s ID (e.g. 000/obj.id).
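
The sketch below illustrates how these layout options combine, using the DiskObjectStore documented later on this page. The commented path suffixes are the layouts expected from the descriptions above, not guaranteed values, and may vary between releases.

import tempfile
from galaxy.util.bunch import Bunch
from galaxy.objectstore import DiskObjectStore

file_path = tempfile.mkdtemp()
config = Bunch(umask=0o077, jobs_directory=file_path, new_file_path=file_path,
               object_store_check_old_style=False)
store = DiskObjectStore(config, file_path=file_path)
obj = Bunch(id=1)

store.create(obj)
store.get_filename(obj)                               # .../000/dataset_1.dat
store.get_filename(obj, extra_dir='extra')            # .../000/extra/dataset_1.dat
store.get_filename(obj, extra_dir='extra',
                   extra_dir_at_root=True)            # .../extra/000/dataset_1.dat
store.get_filename(obj, alt_name='alt.dat')           # .../000/alt.dat
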
__init__(config, **kwargs)[source]
Parameters: config (object) –

An object, most likely populated from galaxy/config.ini, having the following attributes:

  • object_store_check_old_style (only used by the DiskObjectStore subclass)
  • jobs_directory – Each job is given a unique empty directory as its current working directory. This option defines in what parent directory those directories will be created.
  • new_file_path – Used to set the ‘temp’ extra_dir.
shutdown()[source]

Close any connections for this ObjectStore.

exists(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None)[source]

Return True if the object identified by obj exists, False otherwise.

file_ready(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Check if a file corresponding to a dataset is ready to be used.

Return True if so, False otherwise.

create(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Mark the object (obj) as existing in the store, but with no content.

This method will create a proper directory structure for the file if the directory does not already exist.

empty(obj, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Test if the object identified by obj has content.

If the object does not exist raises ObjectNotFound.

size(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Return size of the object identified by obj.

If the object does not exist, return 0.

delete(obj, entire_dir=False, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Delete the object identified by obj.

Parameters: entire_dir (boolean) – If True, delete the entire directory pointed to by extra_dir. For safety reasons, this option applies only in conjunction with the extra_dir or obj_dir options.
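
A rough sketch of the create / empty / size / delete lifecycle, continuing the DiskObjectStore example shown under the field descriptions above. The values in the comments are expectations based on those descriptions, not guarantees.

store.create(obj)     # mark the object as existing, with no content
store.exists(obj)     # True
store.empty(obj)      # True: the backing file has no content yet
store.size(obj)       # 0
store.delete(obj)     # remove the backing file
store.exists(obj)     # False
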
get_data(obj, start=0, count=-1, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Fetch count bytes of data offset by start bytes using obj.id.

If the object does not exist raises ObjectNotFound.

Parameters:
  • start (int) – Set the position to start reading the dataset file
  • count (int) – Read at most count bytes from the dataset
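
A small sketch of a ranged read, again continuing the DiskObjectStore example above; the literal content is arbitrary.

store.create(obj)     # ensure the backing file exists
with open(store.get_filename(obj), 'w') as out:
    out.write('0123456789abcdef')
store.get_data(obj, start=4, count=6)    # expected to return '456789'
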
get_filename(obj, base_dir=None, dir_only=False, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Get the expected filename with absolute path for object with id obj.id.

This can be used to access the contents of the object.

update_from_file(obj, base_dir=None, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False, file_name=None, create=False)[source]

Inform the store that the file associated with obj.id has been updated.

If file_name is provided, update from that file instead of the default. If the object does not exist raises ObjectNotFound.

Parameters:
  • file_name (string) – Use the file pointed to by file_name as the source for updating the dataset identified by obj.
  • create (boolean) – If True and the default dataset does not exist, create it first.
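
A sketch of updating a dataset from an external file, continuing the same DiskObjectStore example; the source file name is arbitrary.

import os

src = os.path.join(file_path, 'source.txt')
with open(src, 'w') as out:
    out.write('new content')
store.create(obj)                             # ensure the target dataset exists
store.update_from_file(obj, file_name=src)    # copy the source over the dataset
store.size(obj)                               # expected to equal len('new content')
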
get_object_url(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None, obj_dir=False)[source]

Return the URL for direct access if supported, otherwise return None.

Note: be careful not to bypass dataset security with this.

get_store_usage_percent()[source]

Return the percentage indicating how full the store is.

class galaxy.objectstore.DiskObjectStore(config, config_xml=None, file_path=None, extra_dirs=None)[source]

Bases: galaxy.objectstore.ObjectStore

Standard Galaxy object store.

Stores objects in files under a specific directory on disk.

>>> from galaxy.util.bunch import Bunch
>>> import tempfile
>>> file_path=tempfile.mkdtemp()
>>> obj = Bunch(id=1)
>>> s = DiskObjectStore(Bunch(umask=0o077, jobs_directory=file_path, new_file_path=file_path, object_store_check_old_style=False), file_path=file_path)
>>> s.create(obj)
>>> s.exists(obj)
True
>>> assert s.get_filename(obj) == file_path + '/000/dataset_1.dat'
__init__(config, config_xml=None, file_path=None, extra_dirs=None)[source]
Parameters:
  • config (object) –

    An object, most likely populated from galaxy/config.ini, having the same attributes needed by ObjectStore plus:

    • file_path – Default directory in which to store objects on disk.
    • umask – The permission bits for newly created files.
  • file_path (str) – Override for the config.file_path value.
  • extra_dirs (dict) – Keys are strings; values are directory paths.
exists(obj, **kwargs)[source]

Override ObjectStore’s stub and check on disk.

create(obj, **kwargs)[source]

Override ObjectStore’s stub by creating any files and folders on disk.

empty(obj, **kwargs)[source]

Override ObjectStore’s stub by checking file size on disk.

size(obj, **kwargs)[source]

Override ObjectStore’s stub by returning the file size on disk.

Returns 0 if the object does not exist yet or if an error occurs.

delete(obj, entire_dir=False, **kwargs)[source]

Override ObjectStore’s stub; delete the file or folder on disk.

get_data(obj, start=0, count=-1, **kwargs)[source]

Override ObjectStore’s stub; retrieve data directly from disk.

get_filename(obj, **kwargs)[source]

Override ObjectStore’s stub.

If object_store_check_old_style is set to True in config then the root path is checked first.

update_from_file(obj, file_name=None, create=False, **kwargs)[source]

The create parameter is not used in this implementation.

get_object_url(obj, **kwargs)[source]

Override ObjectStore’s stub.

Returns None; this store does not provide URLs.

get_store_usage_percent()[source]

Override ObjectStore’s stub by returning the percentage of storage used.

class galaxy.objectstore.NestedObjectStore(config, config_xml=None)[source]

Bases: galaxy.objectstore.ObjectStore

Base for ObjectStores that use other ObjectStores.

Example: DistributedObjectStore, HierarchicalObjectStore

__init__(config, config_xml=None)[source]

Extend ObjectStore’s constructor.

shutdown()[source]

Shut down each backend.

exists(obj, **kwargs)[source]

Determine if the obj exists in any of the backends.

file_ready(obj, **kwargs)[source]

Determine if the file for obj is ready to be used by any of the backends.

create(obj, **kwargs)[source]

Create a backing file in a random backend.

empty(obj, **kwargs)[source]

For the first backend that has this obj, determine if it is empty.

size(obj, **kwargs)[source]

For the first backend that has this obj, return its size.

delete(obj, **kwargs)[source]

For the first backend that has this obj, delete it.

get_data(obj, **kwargs)[source]

For the first backend that has this obj, get data from it.

get_filename(obj, **kwargs)[source]

For the first backend that has this obj, get its filename.

update_from_file(obj, **kwargs)[source]

For the first backend that has this obj, update it from the given file.

get_object_url(obj, **kwargs)[source]

For the first backend that has this obj, get its URL.

class galaxy.objectstore.DistributedObjectStore(config, config_xml=None, fsmon=False)[source]

Bases: galaxy.objectstore.NestedObjectStore

ObjectStore that defers to a list of backends.

When getting objects the first store where the object exists is used. When creating objects they are created in a store selected randomly, but with weighting.

__init__(config, config_xml=None, fsmon=False)[source]
Parameters:
  • config (object) –

    An object, most likely populated from galaxy/config.ini, having the same attributes needed by NestedObjectStore plus:

    • distributed_object_store_config_file
  • fsmon (bool) – If True, monitor the file system for free space, removing backends when they get too full.
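
A rough sketch of configuring a DistributedObjectStore from Python. The XML layout follows the distributed_object_store_conf.xml.sample shipped with Galaxy; both the backend schema and the config attributes used here are assumptions that may differ between releases.

import os
import tempfile
from galaxy.util.bunch import Bunch
from galaxy.objectstore import DistributedObjectStore

root = tempfile.mkdtemp()
for sub in ('files1', 'files2', 'tmp', 'job_work'):
    os.makedirs(os.path.join(root, sub))

conf = os.path.join(root, 'distributed_object_store_conf.xml')
with open(conf, 'w') as out:
    out.write("""<?xml version="1.0"?>
<backends>
    <backend id="files1" type="disk" weight="2">
        <files_dir path="{root}/files1"/>
        <extra_dir type="temp" path="{root}/tmp"/>
        <extra_dir type="job_work" path="{root}/job_work"/>
    </backend>
    <backend id="files2" type="disk" weight="1">
        <files_dir path="{root}/files2"/>
        <extra_dir type="temp" path="{root}/tmp"/>
        <extra_dir type="job_work" path="{root}/job_work"/>
    </backend>
</backends>
""".format(root=root))

config = Bunch(umask=0o077, jobs_directory=root, new_file_path=root,
               object_store_check_old_style=False,
               distributed_object_store_config_file=conf)
store = DistributedObjectStore(config)
store.create(Bunch(id=1, object_store_id=None))    # lands in a weighted-random backend
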
shutdown()[source]

Shut down. Kill the free space monitor if there is one.

create(obj, **kwargs)[source]

The only method in which obj.object_store_id may be None.

class galaxy.objectstore.HierarchicalObjectStore(config, config_xml=None, fsmon=False)[source]

Bases: galaxy.objectstore.NestedObjectStore

ObjectStore that defers to a list of backends.

When getting objects the first store where the object exists is used. When creating objects only the first store is used.

__init__(config, config_xml=None, fsmon=False)[source]

The default constructor. Extends NestedObjectStore.

exists(obj, **kwargs)[source]

Check all child object stores.

create(obj, **kwargs)[source]

Call the primary object store.

galaxy.objectstore.build_object_store_from_config(config, fsmon=False, config_xml=None)[source]

Instantiate and return the appropriate object store.

Will use the object_store_config_file attribute of the config object to configure a new object store from the specified XML file.

Alternatively, you can specify the object store type in the object_store attribute of the config object. Currently ‘disk’, ‘s3’, ‘swift’, ‘distributed’, ‘hierarchical’, ‘irods’, and ‘pulsar’ are supported values.
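
For the simple disk case, a minimal sketch follows; the attributes set on the Bunch are an assumption about what the config object must provide, and a real deployment passes the full Galaxy config instead.

import tempfile
from galaxy.util.bunch import Bunch
from galaxy.objectstore import build_object_store_from_config

path = tempfile.mkdtemp()
config = Bunch(
    object_store='disk',             # selects DiskObjectStore
    object_store_config_file='/nonexistent/object_store_conf.xml',  # no XML config in use
    object_store_check_old_style=False,
    file_path=path,
    umask=0o077,
    jobs_directory=path,
    new_file_path=path,
)
store = build_object_store_from_config(config)
print(type(store).__name__)          # expected: DiskObjectStore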

galaxy.objectstore.local_extra_dirs(func)[source]

Non-local plugin decorator using local directories for the extra_dirs (job_work and temp).

galaxy.objectstore.convert_bytes(bytes)[source]

A helper function used for pretty printing disk usage.

Submodules

galaxy.objectstore.azure_blob module

Object Store plugin for the Microsoft Azure Block Blob Storage system

class galaxy.objectstore.azure_blob.AzureBlobObjectStore(config, config_xml)[source]

Bases: galaxy.objectstore.ObjectStore

Object store that stores objects as blobs in an Azure Blob Container. A local cache exists that is used as an intermediate location for files between Galaxy and Azure.

__init__(config, config_xml)[source]
exists(obj, **kwargs)[source]
file_ready(obj, **kwargs)[source]

A helper method that checks if a file corresponding to a dataset is ready and available to be used. Return True if so, False otherwise.

create(obj, **kwargs)[source]
empty(obj, **kwargs)[source]
size(obj, **kwargs)[source]
delete(obj, entire_dir=False, **kwargs)[source]
get_data(obj, start=0, count=-1, **kwargs)[source]
get_filename(obj, **kwargs)[source]
update_from_file(obj, file_name=None, create=False, **kwargs)[source]
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]

galaxy.objectstore.pulsar module

class galaxy.objectstore.pulsar.PulsarObjectStore(config, config_xml)[source]

Bases: galaxy.objectstore.ObjectStore

Object store implementation that delegates to a remote Pulsar server.

This may be more aspirational than practical for now. It would be good to get Galaxy to a point where a handler thread could be set up that does not attempt to access the disk files returned by this object store, instead passing them along to Pulsar unmodified. That change, along with this implementation and Pulsar job destinations, would then allow Galaxy to fully manage jobs on remote servers with completely different mount points.

This implementation should be considered beta and may be dropped from Galaxy at some future point or significantly modified.

__init__(config, config_xml)[source]
exists(obj, **kwds)[source]
file_ready(obj, **kwds)[source]
create(obj, **kwds)[source]
empty(obj, **kwds)[source]
size(obj, **kwds)[source]
delete(obj, **kwds)[source]
get_data(obj, **kwds)[source]
get_filename(obj, **kwds)[source]
update_from_file(obj, **kwds)[source]
get_store_usage_percent()[source]
get_object_url(obj, extra_dir=None, extra_dir_at_root=False, alt_name=None)[source]
shutdown()[source]

galaxy.objectstore.rods module

Object Store plugin for the Integrated Rule-Oriented Data Store (iRODS)

The module is named rods to avoid conflicting with the PyRods module, irods.

class galaxy.objectstore.rods.IRODSObjectStore(config, file_path=None, extra_dirs=None)[source]

Bases: galaxy.objectstore.DiskObjectStore

Galaxy object store based on iRODS

__init__(config, file_path=None, extra_dirs=None)[source]
exists(*args, **kwargs)
create(*args, **kwargs)
empty(*args, **kwargs)
size(obj, **kwargs)[source]
delete(*args, **kwargs)
get_data(*args, **kwargs)
get_filename(*args, **kwargs)
update_from_file(*args, **kwargs)
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]
galaxy.objectstore.rods.rods_connect()[source]

A basic iRODS connection mechanism that connects using the current iRODS environment

galaxy.objectstore.s3 module

Object Store plugin for the Amazon Simple Storage Service (S3)

class galaxy.objectstore.s3.S3ObjectStore(config, config_xml)[source]

Bases: galaxy.objectstore.ObjectStore

Object store that stores objects as items in an AWS S3 bucket. A local cache exists that is used as an intermediate location for files between Galaxy and S3.

__init__(config, config_xml)[source]
file_ready(obj, **kwargs)[source]

A helper method that checks if a file corresponding to a dataset is ready and available to be used. Return True if so, False otherwise.

exists(obj, **kwargs)[source]
create(obj, **kwargs)[source]
empty(obj, **kwargs)[source]
size(obj, **kwargs)[source]
delete(obj, entire_dir=False, **kwargs)[source]
get_data(obj, start=0, count=-1, **kwargs)[source]
get_filename(obj, **kwargs)[source]
update_from_file(obj, file_name=None, create=False, **kwargs)[source]
get_object_url(obj, **kwargs)[source]
get_store_usage_percent()[source]
class galaxy.objectstore.s3.SwiftObjectStore(config, config_xml)[source]

Bases: galaxy.objectstore.s3.S3ObjectStore

Object store that stores objects as items in a Swift bucket. A local cache exists that is used as an intermediate location for files between Galaxy and Swift.

galaxy.objectstore.s3_multipart_upload module

Split a large file into multiple pieces for upload to S3. This parallelizes the task over available cores using multiprocessing. Code mostly taken from CloudBioLinux.

galaxy.objectstore.s3_multipart_upload.map_wrap(f)[source]
galaxy.objectstore.s3_multipart_upload.mp_from_ids(s3server, mp_id, mp_keyname, mp_bucketname)[source]

Get the multipart upload from the bucket and multipart IDs.

This allows us to reconstitute a connection to the upload from within multiprocessing functions.

galaxy.objectstore.s3_multipart_upload.transfer_part(args)[source]

Transfer a part of a multipart upload. Designed to be run in parallel.

galaxy.objectstore.s3_multipart_upload.multipart_upload(s3server, bucket, s3_key_name, tarball, mb_size)[source]

Upload large files using Amazon’s multipart upload functionality.

galaxy.objectstore.s3_multipart_upload.multimap(*args, **kwds)[source]

Provide multiprocessing imap like function.

The context manager handles setting up the pool, working around interrupt issues, and terminating the pool on completion.