A pipeline comprises one or more nodes that are (in many cases) connected to each other to define execution dependencies. A node is an instance of a configurable component that typically implements a single unit of work, which makes it reusable. A unit of work can represent any task, such as loading data, pre-processing data, analyzing data, training a machine learning model, deploying a model for serving, querying a service, or sending an email.
Note though that multiple components might implement the "same" task. For example, one component might load data from a SQL database, whereas another component might download data from S3 storage. Conceptually both components load data, but how they load it is entirely different.
Elyra supports two types of components: generic components and custom components. A pipeline that utilizes only generic components is called a generic pipeline, whereas a pipeline that utilizes generic components and/or custom components is referred to as a runtime-specific pipeline.
Pipelines are assembled using the Visual Pipeline Editor. The editor includes a palette, the canvas, and a properties panel, shown on the left, in the center, and on the right, respectively.
Please review the Best practices for file-based pipeline nodes topic in the User Guide if your pipelines include generic components.
Elyra pipelines support three runtime platforms:

- Local (JupyterLab)
- Kubeflow Pipelines
- Apache Airflow
A generic pipeline comprises only nodes that are implemented using generic components. This Elyra release includes three generic components, which allow for the execution of Jupyter notebooks, Python scripts, and R scripts.
Generic pipelines are portable, meaning they can run locally in JupyterLab, or remotely on Kubeflow Pipelines or Apache Airflow.
A runtime-specific pipeline is permanently associated with a runtime platform, such as Kubeflow Pipelines or Apache Airflow. A runtime-specific pipeline may include nodes that are implemented using generic components or custom components for that runtime.
For illustrative purposes the Elyra component registry includes a couple of example custom components. You can add your own components as outlined in Managing custom components.
Note that it is not possible to convert a generic pipeline to a runtime-specific pipeline, or to convert a runtime-specific pipeline from one runtime platform to another.
The tutorials provide comprehensive step-by-step instructions for creating and running pipelines. To create a pipeline using the editor:
Pipeline properties include:
- An optional description, summarizing the pipeline purpose.
- Properties that apply to every generic pipeline node. In this release the following properties are supported:
- **Object storage path prefix**. Elyra stores pipeline input and output artifacts in a cloud object storage bucket. By default these artifacts are located in the `/<pipeline-instance-name>` path. The example below depicts the artifact location for several pipelines in the `pipeline-examples` bucket:
![artifacts default storage layout on object storage](../images/user_guide/pipelines/node-artifacts-on-object-storage.png)
Configure an object storage path prefix to store artifacts in a pipeline-specific location `/<path-prefix>/<pipeline-instance-name>`:
![artifacts custom storage layout on object storage](../images/user_guide/pipelines/generic-node-artifacts-custom-layout.png)
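For example, with a hypothetical path prefix of `department/team` and a pipeline instance named `my-pipeline` (both names are made up for illustration), the artifacts for a run would be stored under:

```
pipeline-examples/          <- object storage bucket
  department/team/          <- configured object storage path prefix
    my-pipeline/            <- pipeline instance name
      ...                   <- input and output artifacts
```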
- Default values that apply to every pipeline node that is implemented by a [generic component](pipeline-components.html#generic-components). These values can be overridden for each node.
- **Runtime image**
- Identifies the container image used to execute the Jupyter notebook or script. Select an image from the list or [add a new one](runtime-image-conf.md) that meets your requirements.
- The value is ignored when the pipeline is executed locally.
- **Environment variables**
- A list of environment variables to be set in the container that executes the Jupyter notebook or script. Format: `ENV_VAR_NAME=value`. Entries that are empty (`ENV_VAR_NAME=`) or malformed are ignored.
- **Data volumes**
- A list of [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVC) to be mounted into the container that executes the Jupyter notebook or script. Format: `/mnt/path=existing-pvc-name`. Entries that are empty (`/mnt/path=`) or malformed are ignored. Entries with a PVC name considered to be an [invalid Kubernetes resource name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) will raise a validation error after pipeline submission or export.
- The referenced PVCs must exist in the Kubernetes namespace where the generic pipeline nodes are executed.
- Data volumes are not mounted when the pipeline is executed locally.
- **Kubernetes secrets**
- A list of [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) to be accessed as environment variables during Jupyter notebook or script execution. Format: `ENV_VAR=secret-name:secret-key`. Entries that are empty (`ENV_VAR=`) or malformed are ignored. Entries with a secret name considered to be an [invalid Kubernetes resource name](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names) or with [an invalid secret key](https://kubernetes.io/docs/concepts/configuration/secret/#restriction-names-data) will raise a validation error after pipeline submission or export.
- The referenced secrets must exist in the Kubernetes namespace where the generic pipeline nodes are executed.
- Secrets are ignored when the pipeline is executed locally. For remote execution, if an environment variable was assigned both a static value (via the 'Environment Variables' property) and a Kubernetes secret value, the secret's value is used.
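As an illustrative sketch of the entry formats above (the PVC `training-data-pvc`, the secret `ml-secrets`, and all variable names below are hypothetical):

```
# Environment variables: ENV_VAR_NAME=value, one entry per line
MODEL_DIR=/tmp/models

# Data volumes: /mnt/path=existing-pvc-name
/mnt/data=training-data-pvc

# Kubernetes secrets: ENV_VAR=secret-name:secret-key
API_TOKEN=ml-secrets:api-token
```

Remember that the referenced PVC and secret must already exist in the target Kubernetes namespace, and that volumes and secrets are not used for local execution.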
You can also drag and drop Jupyter notebooks, Python scripts, or R scripts from the JupyterLab File Browser onto the canvas.
To configure a node, select **Open Properties** from its context menu. Runtime properties configure a component and govern its execution behavior. Runtime properties are component-specific. For generic components (Jupyter notebooks, Python scripts, and R scripts) the properties are defined as follows:
- **Runtime image**
  - The container image used to execute the Jupyter notebook or script. Example: `TensorFlow 2.0`
- **CPU, GPU, and RAM**
  - Resources that the notebook or script requires for execution.
- **File dependencies**
  - A list of files to be made available to the notebook or script before execution. Wildcard characters `*` and `?` are supported. Example: `dependent-script.py`
- **Environment variables**
  - A list of environment variables to be set during execution, in `ENV_VAR_NAME=value` format. Example: `TOKEN=value`
- **Output files**
  - A list of files generated by the notebook or script to be made available to downstream pipeline nodes. Wildcard characters `*` and `?` are supported. Example: `data/*.csv`
- **Data volumes**
  - A list of Persistent Volume Claims (PVC) to be mounted into the container, in `/mnt/path=existing-pvc-name` format. Entries that are empty (`/mnt/path=`) or malformed are ignored. Entries with a PVC name considered to be an invalid Kubernetes resource name will raise a validation error after pipeline submission or export. The referenced PVCs must exist in the Kubernetes namespace where the node is executed. Example: `/mnt/vol1=data-pvc`
- **Kubernetes secrets**
  - A list of Kubernetes secrets to be accessed as environment variables during execution, in `ENV_VAR=secret-name:secret-key` format. Entries that are empty (`ENV_VAR=`) or malformed are ignored. Entries with a secret name considered to be an invalid Kubernetes resource name or with an invalid secret key will raise a validation error after pipeline submission or export. The referenced secrets must exist in the Kubernetes namespace where the generic pipeline nodes are executed. Example: `ENV_VAR=secret-name:secret-key`
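Because an invalid PVC or secret name only surfaces as a validation error after pipeline submission or export, a quick local check can catch typos earlier. A minimal sketch, assuming the RFC 1123 subdomain rule that Kubernetes applies to most resource names (this is not part of Elyra):

```shell
# Check whether a string is a valid Kubernetes resource name (RFC 1123 subdomain):
# lowercase alphanumerics, '-' or '.', starting and ending with an alphanumeric.
is_valid_k8s_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
}

is_valid_k8s_name "data-pvc"  && echo "data-pvc: valid"
is_valid_k8s_name "Data_PVC"  || echo "Data_PVC: invalid"
```

Note that some resource types impose additional restrictions (for example, a maximum length), so this check is a first-pass filter only.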
Note: You can rename the pipeline file in the JupyterLab File Browser.
Pipelines can be run from the Visual Pipeline Editor and the `elyra-pipeline` command line interface. Before you can run a pipeline on Kubeflow Pipelines or Apache Airflow you must create a runtime configuration. A runtime configuration contains information about the target environment, such as the server URL and credentials.
To run a pipeline from the Visual Pipeline Editor, click **Run Pipeline** in the editor's toolbar.

Elyra does not include a pipeline run monitoring interface; use the monitoring tools of the platform where the pipeline runs.

Output artifacts of generic pipeline nodes are stored in the cloud object storage bucket, as described under pipeline properties above.
Use the `elyra-pipeline run` command to execute a generic pipeline in your local JupyterLab environment:

```
$ elyra-pipeline run elyra-pipelines/a-notebook.pipeline
```
Use the `elyra-pipeline submit` command to run a generic or runtime-specific pipeline remotely on Kubeflow Pipelines or Apache Airflow, specifying a compatible runtime configuration as a parameter:

```
$ elyra-pipeline submit elyra-pipelines/a-kubeflow.pipeline \
    --runtime-config kfp-shared-tekton
```
For Kubeflow Pipelines the `--monitor` option is supported. If specified, the pipeline execution status is monitored for up to `--monitor-timeout` minutes (default: 60) and the `elyra-pipeline submit` command terminates as follows:

- The pipeline run completes successfully before `--monitor-timeout` is exceeded: exit code 0
- The pipeline run is still in progress when `--monitor-timeout` is exceeded: exit code 124 (Note: the pipeline continues running on Kubeflow Pipelines; only monitoring is stopped.)
- The pipeline run fails before `--monitor-timeout` is exceeded: a non-zero exit code other than 124

Note: Refer to the Managing runtime configurations using the Elyra CLI topic in the User Guide for details on how to list and manage runtime configurations. If the specified `--runtime-config` is not compatible with the specified pipeline an error is raised.
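The documented exit codes can be acted on in automation scripts. A minimal sketch (the pipeline file and runtime configuration name are placeholders, and the helper function below is not part of Elyra):

```shell
# Map the exit codes of `elyra-pipeline submit --monitor` to human-readable outcomes.
interpret_exit() {
  case "$1" in
    0)   echo "pipeline run completed successfully" ;;
    124) echo "monitoring timed out; the pipeline is still running on Kubeflow Pipelines" ;;
    *)   echo "pipeline run failed or could not be submitted (exit code $1)" ;;
  esac
}

# Typical usage (requires Elyra and a matching runtime configuration):
# elyra-pipeline submit elyra-pipelines/a-kubeflow.pipeline \
#     --runtime-config kfp-shared-tekton --monitor --monitor-timeout 30
# interpret_exit $?
interpret_exit 124
```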
When you export a pipeline, Elyra only prepares it for later execution; it does not upload it to the Kubeflow Pipelines or Apache Airflow server. Export performs two tasks: it packages dependencies for generic components and uploads them to cloud storage, and it generates pipeline code for the target runtime.
Before you can export a pipeline for Kubeflow Pipelines or Apache Airflow you must create a runtime configuration. A runtime configuration contains information about the target environment, such as the server URL and credentials.
To export a pipeline from the Visual Pipeline Editor:

1. Click **Export Pipeline** in the editor's toolbar.
2. For generic pipelines, select a runtime platform (local, Kubeflow Pipelines, or Apache Airflow) and a runtime configuration for that platform. For runtime-specific pipelines, select a runtime configuration.
3. Select an export format.
Use the `elyra-pipeline export` command to export a pipeline to a runtime-specific format, such as YAML for Kubeflow Pipelines or a Python DAG for Apache Airflow:

```
$ elyra-pipeline export a-notebook.pipeline --runtime-config kfp_dev_env --output /path/to/exported.yaml --overwrite
```
To learn more about supported parameters, run:

```
$ elyra-pipeline export --help
```
Note: Refer to the Managing runtime configurations using the Elyra CLI topic in the User Guide for details on how to list and manage runtime configurations. If the specified `--runtime-config` is not compatible with the specified pipeline an error is raised.
Use the `elyra-pipeline describe` command to display pipeline information, such as description, version, and dependencies:

```
$ elyra-pipeline describe a-notebook.pipeline
```
To learn more about supported parameters, run:

```
$ elyra-pipeline describe --help
```