
ClearML Data

In machine learning, you are very likely dealing with a gargantuan amount of data that you need to organize into datasets, which you then need to be able to share, reproduce, and track.

ClearML Data Management solves two important challenges:

  • Accessibility - Making data easily accessible from every machine.
  • Versioning - Linking data and experiments for better traceability.

We believe data is not code. It should not be stored in a git tree, because progress on datasets is not always linear. Moreover, it can be difficult and inefficient to find the commit associated with a certain version of a dataset in a git tree.

A clearml-data dataset is a collection of files, stored on a central storage location (S3 / GS / Azure / network storage). Datasets can be set up to inherit from other datasets, so data lineages can be created, and users can track when and how their data changes.
Dataset changes are stored using differential storage, meaning a version only stores the change-set relative to its parent datasets.

Local copies of datasets are always cached, so the same data never needs to be downloaded twice. When a dataset is pulled, it will automatically pull all parent datasets and merge them into one output folder for you to work with.

ClearML-data offers two interfaces:

  • clearml-data - CLI utility for creating, uploading, and managing datasets.
  • clearml.Dataset - A Python interface for creating, retrieving, managing, and using datasets.

Creating a Dataset

Using the clearml-data CLI, users can create datasets using the following commands:

clearml-data create --project dataset_example --name initial_version
clearml-data add --files data_folder

The commands will do the following:

  1. Start a Data Processing Task called "initial_version" in the "dataset_example" project.

  2. Return a unique ID for the dataset.

  3. Add all the files from the "data_folder" folder to the dataset; they are uploaded (by default, to the ClearML server) when the dataset is uploaded or finalized.

note

clearml-data is stateful and remembers the last created dataset, so there's no need to specify a dataset ID unless you want to work on another dataset.

Using a Dataset

Now, in our Python code, we can access and use the created dataset from anywhere:

from clearml import Dataset
local_path = Dataset.get(dataset_id='dataset_id_from_previous_command').get_local_copy()

We have all our files in the same folder structure under local_path. It's that simple!

The next step is to set the dataset_id as a parameter for our code and voilà! We can now train on any dataset we have in the system.
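
One common pattern (a minimal sketch, with hypothetical project, task, and parameter names) is to register dataset_id as a configuration parameter with task.connect(), so every experiment records which dataset it used and the value can be overridden from the UI:

from clearml import Dataset, Task
# Hypothetical project and task names
task = Task.init(project_name='dataset_example', task_name='train_model')
# Default dataset ID; can be overridden from the ClearML UI
params = {'dataset_id': 'dataset_id_from_previous_command'}
task.connect(params)
local_path = Dataset.get(dataset_id=params['dataset_id']).get_local_copy()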

Setup

clearml-data comes built into our clearml Python package! Just check out the getting started guide for more info!

Usage

CLI

It's possible to manage datasets (create / modify / upload / delete) with the clearml-data command line tool.

Creating a Dataset

clearml-data create --project <project_name> --name <dataset_name> --parents <existing_dataset_id>

Creates a new dataset.

Parameters

| Name | Description |
|------|-------------|
| name | Dataset's name |
| project | Dataset's project |
| parents | IDs of the dataset's parents. The dataset inherits all of its parents' content. Multiple parents can be entered, but they are merged in the order they were entered |
| tags | Dataset user tags. The dataset can be labeled, which can be useful for organizing datasets |

important

clearml-data works in a stateful mode so once a new dataset is created, the following commands do not require the --id flag.
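
For example, creating a second version on top of an existing one (a sketch; the project, name, and tag values are illustrative, and the parent ID placeholder refers to a previously created dataset):

clearml-data create --project dataset_example --name second_version --parents <existing_dataset_id> --tags images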


Add Files to Dataset

clearml-data add --id <dataset_id> --files <filenames/folders_to_add>

It's possible to add individual files or complete folders.

Parameters

| Name | Description |
|------|-------------|
| id | Dataset's ID. Default: previously created / accessed dataset |
| files | Files / folders to add. Wildcard selection is supported, for example: ~/data/*.jpg ~/data/json |
| dataset-folder | Dataset base folder to add the files to in the dataset. Default: dataset root |
| non-recursive | Disable recursive scan of files |
| verbose | Verbose reporting |
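
For example, adding all JPEG files from a local folder into an images folder inside the dataset (paths are illustrative):

clearml-data add --files ~/data/*.jpg --dataset-folder images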

Remove Files From Dataset

clearml-data remove --id <dataset_id_to_remove_from> --files <filenames/folders_to_remove>

Parameters

| Name | Description |
|------|-------------|
| id | Dataset's ID. Default: previously created / accessed dataset |
| files | Files / folders to remove (wildcard selection is supported, for example: ~/data/*.jpg ~/data/json). Notice: the file path is the path within the dataset, not the local path |
| non-recursive | Disable recursive scan of files |
| verbose | Verbose reporting |
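
For example, removing all JSON files stored under the dataset's json folder (the path is within the dataset, not local):

clearml-data remove --files json/*.json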

Finalize Dataset

clearml-data close --id <dataset_id>

Finalizes the dataset and makes it ready to be consumed. It automatically uploads all files that were not previously uploaded. Once a dataset is finalized, it can no longer be modified.

Parameters

| Name | Description |
|------|-------------|
| id | Dataset's ID. Default: previously created / accessed dataset |
| storage | Remote storage to use for the dataset files. Default: files_server |
| disable-upload | Disable automatic upload when closing the dataset |
| verbose | Verbose reporting |
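
For example, closing the current dataset and uploading its files to an S3 bucket instead of the default file server (the bucket name is illustrative):

clearml-data close --storage s3://my_bucket/datasets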

Upload Dataset Content

clearml-data upload [--id <dataset_id>] [--storage <upload_destination>]

Uploads added files to ClearML Server by default. It's possible to specify a different storage medium by entering an upload destination, such as s3://bucket, gs://, azure://, /mnt/shared/.

Parameters

| Name | Description |
|------|-------------|
| id | Dataset's ID. Default: previously created / accessed dataset |
| storage | Remote storage to use for the dataset files. Default: files_server |
| verbose | Verbose reporting |
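
For example, uploading the current dataset's pending files to a Google Storage bucket (the bucket name is illustrative):

clearml-data upload --storage gs://my_bucket/datasets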

Sync Local Folder

clearml-data sync [--id <dataset_id>] --folder <folder_location> [--parents '<parent_id>']

This option syncs a folder's content with ClearML. It is useful when a user has a single point of truth (i.e. a folder) that is updated from time to time.

When an update should be reflected in ClearML, users can call clearml-data sync, which creates a new dataset, scans the folder, and records the changes (file additions, modifications, and removals) in ClearML.

This command also uploads the data and finalizes the dataset automatically.

Parameters

| Name | Description |
|------|-------------|
| id | Dataset's ID. Default: previously created / accessed dataset |
| folder | Local folder to sync. Wildcard selection is supported, for example: ~/data/*.jpg ~/data/json |
| storage | Remote storage to use for the dataset files. Default: files_server |
| parents | IDs of the dataset's parents (i.e. merge all parents). All modifications made to the folder since the parents were synced will be reflected in the dataset |
| project | If creating a new dataset, specify the dataset's project name |
| name | If creating a new dataset, specify the dataset's name |
| tags | Dataset user tags |
| skip-close | Do not auto close the dataset after syncing folders |
| verbose | Verbose reporting |
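
For example, creating, populating, and finalizing a new dataset version from a local folder in a single command (names and the path are illustrative):

clearml-data sync --project dataset_example --name synced_version --folder ~/data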

List Dataset Content

clearml-data list [--id <dataset_id>]

Parameters

| Name | Description |
|------|-------------|
| id | Dataset ID whose contents will be shown (alternatively, use the project / name combination). Default: previously accessed dataset |
| project | Specify the dataset's project name (if used instead of ID, the dataset name is also required) |
| name | Specify the dataset's name (if used instead of ID, the dataset project is also required) |
| filter | Filter files based on folder / wildcard. Multiple filters are supported. Example: folder/date_*.json folder/sub-folder |
| modified | Only list file changes (add / remove / modify) introduced in this version |
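
For example, listing only the JSON files of a given dataset version (the filter pattern is illustrative):

clearml-data list --id <dataset_id> --filter folder/*.json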

Delete a Dataset

clearml-data delete [--id <dataset_id_to_delete>]

Deletes an entire dataset from ClearML. This can also be used to delete a newly created dataset.

This does not work on datasets with children.

Parameters

| Name | Description |
|------|-------------|
| id | ID of the dataset to be deleted. Default: previously created / accessed dataset that hasn't been finalized yet |
| force | Force dataset deletion even if other dataset versions depend on it |
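
For example, forcing the deletion of a dataset that other dataset versions depend on:

clearml-data delete --id <dataset_id_to_delete> --force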

Search for a Dataset

clearml-data search [--name <name>] [--project <project_name>] [--tags <tag>]

Lists all datasets in the system that match the search request.

Datasets can be searched by project, name, ID, and tags.

Parameters

| Name | Description |
|------|-------------|
| ids | A list of dataset IDs |
| project | The project name of the datasets |
| name | A dataset name or a partial name to filter datasets by |
| tags | A list of dataset user tags |
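
For example, finding all datasets in a given project that carry a certain tag (the project and tag values are illustrative):

clearml-data search --project dataset_example --tags images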

Python API

All API commands should be imported with:
from clearml import Dataset

Dataset.get(dataset_id=DS_ID).get_local_copy()

Returns a path to the dataset in the local cache, and downloads it if it is not already in the cache.

Parameters

| Name | Description |
|------|-------------|
| use_soft_links | If True, use soft links. Default: False on Windows, True on POSIX systems |
| raise_on_error | If True, raise an exception if dataset merging failed on any file |
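
For example (a sketch with a placeholder dataset ID), fetching a cached copy and failing fast if any file cannot be merged:

from clearml import Dataset
# Downloads to the local cache on first call; subsequent calls reuse the cached copy
local_path = Dataset.get(dataset_id='<dataset_id>').get_local_copy(raise_on_error=True)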

Dataset.get(dataset_id=DS_ID).get_mutable_local_copy()

Downloads the dataset to a specific folder (non-cached). If the folder already has contents, specify whether to overwrite its contents with the dataset contents.

Parameters

| Name | Description |
|------|-------------|
| target_folder | Local target folder for the writable copy of the dataset |
| overwrite | If True, recursively delete the contents of the target folder before creating a copy of the dataset. If False (default) and the target folder contains files, raise an exception or return None |
| raise_on_error | If True, raise an exception if dataset merging failed on any file |
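
For example (the dataset ID is a placeholder and the target folder is illustrative):

from clearml import Dataset
# Copies the dataset into ./working_dataset, replacing any existing contents
local_path = Dataset.get(dataset_id='<dataset_id>').get_mutable_local_copy(target_folder='./working_dataset', overwrite=True)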

Dataset.create()

Creates a new dataset.

Parent datasets can be specified, and the new dataset inherits all of its parents' content. Multiple dataset parents can be listed. Merging of parent datasets is done based on the list's order, where each parent can override overlapping files in the previous parent dataset.

Parameters

| Name | Description |
|------|-------------|
| dataset_name | Name of the new dataset |
| dataset_project | The project containing the dataset. If not specified, infer the project name from the parent datasets. If there is no parent dataset, this value is required |
| parent_datasets | Expand a parent dataset by adding / removing files |
| use_current_task | If True, the dataset is created on the current Task. Default: False |
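
For example, creating a new version on top of an existing one (the names are illustrative and the parent ID is a placeholder):

from clearml import Dataset
dataset = Dataset.create(dataset_name='second_version', dataset_project='dataset_example', parent_datasets=['<parent_dataset_id>'])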

Dataset.add_files()

Adds files or folders into the current dataset.

Parameters

| Name | Description |
|------|-------------|
| path | Add a folder / file to the dataset |
| wildcard | Add only a specific set of files based on wildcard matching. Can be a single string or a list of wildcards, for example: ~/data/*.jpg, ~/data/json |
| local_base_folder | Files will be located based on their relative path from local_base_folder |
| dataset_path | Where in the dataset the folder / files should be located |
| recursive | If True, match all wildcard files recursively |
| verbose | If True, print to console the files added / modified |
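
For example, adding only the JPEG files from a local folder and placing them under images inside the dataset (the paths are illustrative; dataset is the object returned by Dataset.create() above):

# Stages the matching files; nothing is uploaded until Dataset.upload() is called
dataset.add_files(path='~/data', wildcard='*.jpg', dataset_path='images')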

Dataset.upload()

Starts file uploading. The function returns when all files have been uploaded.

Parameters

| Name | Description |
|------|-------------|
| show_progress | If True, show the upload progress bar |
| verbose | If True, print a verbose progress report |
| output_url | Target storage for the compressed dataset (default: file server). Examples: s3://bucket/data, gs://bucket/data, azure://bucket/data, /mnt/share/data |
| compression | Compression algorithm for the zipped dataset file (default: ZIP_DEFLATED) |
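
For example, uploading the dataset's files to an S3 bucket instead of the default file server (the bucket name is illustrative):

dataset.upload(output_url='s3://my_bucket/datasets', show_progress=True)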

Dataset.finalize()

Closes the dataset and marks it as Completed. After a dataset has been closed, it can no longer be modified. Before closing a dataset, its files must first be uploaded.

Parameters

| Name | Description |
|------|-------------|
| verbose | If True, print a verbose progress report |
| raise_on_error | If True, raise an exception if dataset finalization failed |
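
Putting the Python API together, a minimal end-to-end sketch (all names and paths are illustrative):

from clearml import Dataset
# Create a new dataset, stage local files, upload them, and mark the version as Completed
dataset = Dataset.create(dataset_name='initial_version', dataset_project='dataset_example')
dataset.add_files(path='~/data')
dataset.upload()
dataset.finalize()
# The finalized dataset can now be retrieved from any machine
local_path = Dataset.get(dataset_id=dataset.id).get_local_copy()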