Pipeline from Tasks

This example demonstrates a simple pipeline in which each step is a ClearML Task.

The pipeline is implemented using the PipelineController class. Steps are added to a PipelineController object, which launches and monitors the steps when executed.

This example incorporates four tasks, each created by its own script: a pipeline controller task and three step tasks that download data, process it, and train a model.

When the controller task is executed, it clones the step tasks, and enqueues the newly cloned tasks for execution. Note that the base tasks from which the steps are cloned are only used as templates and not executed themselves. Also note that for the controller to clone, these base tasks need to exist in the system (as a result of a previous run or using clearml-task).

The controller task itself can be run locally, or, if the controller task has already run at least once and is in the ClearML Server, the controller can be cloned, and the cloned task can be executed remotely.

The sections below describe in more detail what happens in the controller task and in each step task.

The Pipeline Controller

  1. Create the pipeline controller object.

    pipe = PipelineController(
        name='pipeline demo',
        project='examples',  # project/version shown here are illustrative
        version='0.0.1'
    )
  2. Set the default execution queue to be used. All the pipeline steps will be enqueued for execution in this queue.
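A minimal sketch of this call, continuing the controller setup above (the queue name `default` is an assumption; use whichever queue your agents listen on):

```python
# all pipeline steps are enqueued into this queue unless a step specifies
# its own; 'default' is an illustrative queue name
pipe.set_default_execution_queue('default')
```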

  3. Build the pipeline (see PipelineController.add_step method for complete reference):

    The pipeline’s first step uses the pre-existing task pipeline step 1 dataset artifact in the examples project. The step uploads local data and stores it as an artifact.

    pipe.add_step(
        name='stage_data',
        base_task_project='examples',
        base_task_name='pipeline step 1 dataset artifact'
    )

    The second step uses the pre-existing task pipeline step 2 process dataset in the examples project. The second step’s dependency upon the first step’s completion is designated by setting the first step as its parent.

    Custom configuration values specific to this step execution are defined through the parameter_override parameter, where the first step’s artifact is fed into the second step.

    Special pre-execution and post-execution logic is added for this step through the use of pre_execute_callback and post_execute_callback respectively.

    pipe.add_step(
        name='stage_process',
        parents=['stage_data', ],
        base_task_project='examples',
        base_task_name='pipeline step 2 process dataset',
        parameter_override={
            'General/dataset_url': '${stage_data.artifacts.dataset.url}',
            'General/test_size': 0.25
        },
        pre_execute_callback=pre_execute_callback_example,
        post_execute_callback=post_execute_callback_example
    )

    The third step uses the pre-existing task pipeline step 3 train model in the examples project. The step uses Step 2’s artifacts.
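Following the pattern of the earlier steps, this step can be added along these lines (the step name `stage_train` and the override key are illustrative, modeled on the previous step):

```python
pipe.add_step(
    name='stage_train',
    parents=['stage_process', ],
    base_task_project='examples',
    base_task_name='pipeline step 3 train model',
    # feed the processing step's task ID to the training script so it can
    # fetch the X/y artifacts produced in Step 2
    parameter_override={'General/dataset_task_id': '${stage_process.id}'}
)
```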

  4. Run the pipeline.


    The pipeline launches remotely, through the services queue, unless otherwise specified.
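Starting the pipeline is a single call; a sketch, assuming the controller object `pipe` from above:

```python
# enqueue the controller itself (by default into the 'services' queue)
# and monitor the steps remotely
pipe.start()

# for local debugging, the controller logic can instead run in-process:
# pipe.start_locally()
```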

Step 1 - Downloading the Data

The pipeline’s first step does the following:

  1. Download data using StorageManager.get_local_copy

    # simulate local dataset, download one, so we have something local
    # (the remote URL of the example dataset is elided here)
    local_iris_pkl = StorageManager.get_local_copy(remote_url=dataset_url)
  2. Store the data as an artifact named dataset using Task.upload_artifact

    # add and upload local file containing our toy dataset
    task.upload_artifact('dataset', artifact_object=local_iris_pkl)

Step 2 - Processing the Data

The pipeline's second step does the following:

  1. Connect its configuration parameters with the ClearML task:

    args = {
        'dataset_task_id': '',
        'dataset_url': '',
        'random_state': 42,
        'test_size': 0.2,
    }

    # store arguments, later we will be able to change them from outside the code
    task.connect(args)
  2. Download the data created in the previous step (specified through the dataset_url parameter) using StorageManager.get_local_copy

    iris_pickle = StorageManager.get_local_copy(remote_url=args['dataset_url'])
  3. Generate testing and training sets from the data and store them as artifacts.

    task.upload_artifact('X_train', X_train)
    task.upload_artifact('X_test', X_test)
    task.upload_artifact('y_train', y_train)
    task.upload_artifact('y_test', y_test)
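The split is typically produced with scikit-learn's `train_test_split`, driven by the connected `test_size` and `random_state` values; a minimal sketch (loading Iris directly here is illustrative — the actual step reads the pickle downloaded from the previous step's dataset artifact):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# illustrative stand-in for the downloaded iris pickle
X, y = load_iris(return_X_y=True)

# values correspond to the connected args dict above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```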

Step 3 - Training the Network

The pipeline's third step does the following:

  1. Connect its configuration parameters with the ClearML task. This allows the pipeline controller to override the dataset_task_id value as the pipeline is run.

    # Arguments
    args = {
        'dataset_task_id': 'REPLACE_WITH_DATASET_TASK_ID',
    }
    task.connect(args)
  2. Clone the base task and enqueue it using Task.execute_remotely.
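In the step script this amounts to one call; a sketch:

```python
# stop local execution and hand the task over to an agent for remote
# execution (a no-op when the task is already running remotely)
task.execute_remotely()
```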

  3. Access the data created in the previous task.

    dataset_task = Task.get_task(task_id=args['dataset_task_id'])
    X_train = dataset_task.artifacts['X_train'].get()
    X_test = dataset_task.artifacts['X_test'].get()
    y_train = dataset_task.artifacts['y_train'].get()
    y_test = dataset_task.artifacts['y_test'].get()
  4. Train the network and log plots.
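A minimal sketch of such a training step, assuming a scikit-learn logistic-regression model (the actual script's model and plotting code may differ; ClearML automatically captures matplotlib figures):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# illustrative stand-in for the artifacts fetched in the previous step
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# fit the model on the training split and evaluate on the held-out split
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```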

Running the Pipeline

To run the pipeline:

  1. If the pipeline step tasks do not yet exist, run their scripts to create the ClearML tasks.

  2. Run the pipeline controller.


    If you enqueue a Task, make sure an agent is assigned to that queue, so it will execute the Task.


When the experiment is executed, the terminal returns the task ID, and links to the pipeline controller task page and pipeline page.

ClearML Task: created new task id=bc93610688f242ecbbe70f413ff2cf5f
ClearML results page:
ClearML pipeline page:

The pipeline run’s page contains the pipeline’s structure, the execution status of every step, as well as the run’s configuration parameters and output.

Pipeline DAG

To view a run’s complete information, click Full details at the bottom of the Run Info panel, which will open the pipeline’s controller task page.

Click a step to see its summary information.

Pipeline step info


Click DETAILS to view a log of the pipeline controller’s console output.

Pipeline console

Click on a step to view its console output.