Add support for 'export' command to elyra-pipeline CLI (#2582)

Patrick Titzler · 3 years ago · commit 0e623deea1

+ 7 - 5
docs/source/user_guide/command-line-interface.md

@@ -164,7 +164,7 @@ Finally, when updating instances using the `update` command, you are not require
 ### Working with pipelines
 
 In Elyra, [a pipeline](pipelines.md) is a representation of a
-workflow that you run locally or remotely on Kubeflow Pipelines or Apache Airflow.
+workflow that you run locally or remotely on Kubeflow Pipelines or Apache Airflow. The `elyra-pipeline` CLI can be used to run, validate, describe, and export pipelines.
 
 #### Getting help
 
@@ -179,9 +179,11 @@ To learn more about a specific command, e.g. `run`, run
 $ elyra-pipeline run --help
 ```
 
-#### Running pipelines
-
 Refer to the topics below for detailed information on how to use `elyra-pipeline` to
 - [Display pipeline information summary](pipelines.html#describing-a-pipeline-from-the-command-line-interface)
- - [Run a pipeline locally](pipelines.html#running-a-pipeline-using-the-command-line)
- - [Submit a pipeline for remote execution](pipelines.html#running-a-pipeline-using-the-command-line)
+ - [Run a pipeline locally](pipelines.html#running-a-pipeline-from-the-command-line-interface)
+ - [Submit a pipeline for remote execution](pipelines.html#running-a-pipeline-from-the-command-line-interface)
+ - [Export a pipeline](pipelines.html#exporting-a-pipeline-from-the-command-line-interface)
+
+
+

+ 42 - 16
docs/source/user_guide/pipelines.md

@@ -119,7 +119,7 @@ Note: You can rename the pipeline file in the JupyterLab _File Browser_.
 
 Pipelines can be run from the Visual Pipeline Editor and the `elyra-pipeline` command line interface. Before you can run a pipeline on Kubeflow Pipelines or Apache Airflow you must create a [`runtime configuration`](runtime-conf.md). A runtime configuration contains information about the target environment, such as server URL and credentials.
 
-**Running a pipeline from the Visual Pipeline Editor**
+#### Running a pipeline from the Visual Pipeline Editor
 
 To run a pipeline from the Visual Pipeline Editor:
 1. Click `Run Pipeline` in the editor's tool bar.
@@ -139,39 +139,32 @@ To run a pipeline from the Visual Pipeline Editor:
    - For local/JupyterLab execution all artifacts are stored in the local file system.
    - For Kubeflow Pipelines and Apache Airflow output artifacts for generic components are stored in the runtime configuration's designated object storage bucket.   
 
-**Running a pipeline from the command line interface**
+#### Running a pipeline from the command line interface
 
-The [`elyra-pipeline` command line interface](https://elyra.readthedocs.io/en/latest/user_guide/command-line-interface.html#working-with-pipelines)
-provides an informative command: `describe` and two execution commands: `run` and `submit`.
-
-Use the `elyra-pipeline describe` command to display pipeline details such as name, version, etc.
-
-```bash
-$ elyra-pipeline describe elyra-pipelines/a-notebook.pipeline
-```
-
-Use the `elyra-pipeline run` command to run a generic pipeline in your JupyterLab environment:
+Use the [`elyra-pipeline`](command-line-interface.html#working-with-pipelines) `run` command to execute a generic pipeline in your JupyterLab environment.
 
 ```bash
 $ elyra-pipeline run elyra-pipelines/a-notebook.pipeline
 ```
 
-Use the `elyra-pipeline submit` command to run a generic or runtime-specific pipeline remotely on Kubeflow Pipelines or Apache Airflow, specifying a compatible runtime configuration as parameter:
+Use the [`elyra-pipeline`](command-line-interface.html#working-with-pipelines) `submit` command to run a generic or runtime-specific pipeline remotely on Kubeflow Pipelines or Apache Airflow, specifying a compatible runtime configuration as a parameter:
 
 ```bash
 $ elyra-pipeline submit elyra-pipelines/a-kubeflow.pipeline \
       --runtime-config kfp-shared-tekton
 ```
 
-Note: Refer to the [Managing runtime configurations using the Elyra CLI](runtime-conf.html#managing-runtime-configurations-using-the-elyra-cli) topic in the _User Guide_ for details on how to list and manage runtime configurations.
+Note: Refer to the [Managing runtime configurations using the Elyra CLI](runtime-conf.html#managing-runtime-configurations-using-the-elyra-cli) topic in the _User Guide_ for details on how to list and manage runtime configurations. If the specified `--runtime-config` is not compatible with the specified pipeline, an error is raised.
 
-### Exporting a pipeline
+### Exporting pipelines
 
 When you export a pipeline, Elyra only prepares it for later execution; it does not upload it to the Kubeflow Pipelines or Apache Airflow server. Export performs two tasks.
 It packages dependencies for generic components and uploads them to cloud storage, and it generates pipeline code for the target runtime. 
 
 Before you can export a pipeline for Kubeflow Pipelines or Apache Airflow you must create a [`runtime configuration`](runtime-conf.md). A runtime configuration contains information about the target environment, such as server URL and credentials.
 
+#### Exporting a pipeline from the Visual Pipeline Editor
+
 To export a pipeline from the Visual Pipeline Editor:
 1. Click `Export Pipeline` in the editor's tool bar.
 
@@ -183,4 +176,37 @@ To export a pipeline from the Visual Pipeline Editor:
    
    ![Configure pipeline export options](../images/user_guide/pipelines/configure-pipeline-export-options.png)
 
-1. Import the exported pipeline file using the Kubeflow Central Dashboard or add it to the Git repository that Apache Airflow is monitoring.   
+1. Import the exported pipeline file using the Kubeflow Central Dashboard or add it to the Git repository that Apache Airflow is monitoring.
+
+
+#### Exporting a pipeline from the command line interface
+
+Use the [`elyra-pipeline`](command-line-interface.html#working-with-pipelines) `export` command to export a pipeline to a runtime-specific format, such as YAML for Kubeflow Pipelines or Python DAG for Apache Airflow.
+
+```bash
+$ elyra-pipeline export a-notebook.pipeline --runtime-config kfp_dev_env --output /path/to/exported.yaml --overwrite
+```
+
+To learn more about supported parameters, run
+```bash
+$ elyra-pipeline export --help
+```
+
+Note: Refer to the [Managing runtime configurations using the Elyra CLI](runtime-conf.html#managing-runtime-configurations-using-the-elyra-cli) topic in the _User Guide_ for details on how to list and manage runtime configurations. If the specified `--runtime-config` is not compatible with the specified pipeline, an error is raised.
+
+### Describing pipelines
+
+#### Describing a pipeline from the command line interface
+
+Use the [`elyra-pipeline`](command-line-interface.html#working-with-pipelines) `describe` command to display pipeline information, such as description, version, and dependencies.
+
+```bash
+$ elyra-pipeline describe a-notebook.pipeline
+```
+
+To learn more about supported parameters, run
+```bash
+$ elyra-pipeline describe --help
+```
+
+
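
The `--output` handling added to the new `export` command (see the `pipeline_app.py` diff below) follows a simple rule: no value means the current directory plus the pipeline file's stem, a value whose suffix matches the export format is treated as a file name, and any other value is treated as a directory. A minimal standalone sketch of that rule, using only `pathlib` (the function name `derive_output_file` is illustrative, not part of Elyra's API):

```python
from pathlib import Path
from typing import Optional


def derive_output_file(pipeline_path: str,
                       export_suffix: str,
                       output: Optional[Path] = None) -> Path:
    """Sketch of the output-path rule used by 'elyra-pipeline export'."""
    if output is None:
        # no --output: current directory, name derived from the pipeline file
        output_path = Path.cwd()
        filename = f"{Path(pipeline_path).stem}{export_suffix}"
    elif output.suffix == export_suffix:
        # --output names a file: split into directory and file name
        output_path = output.parent
        filename = output.name
    else:
        # --output names a directory: derive the file name from the pipeline
        output_path = output
        filename = f"{Path(pipeline_path).stem}{export_suffix}"
    return output_path.resolve() / filename
```

For example, exporting `a-notebook.pipeline` with no `--output` yields `./a-notebook.yaml`, while `--output exports/` yields `exports/a-notebook.yaml`.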

+ 183 - 33
elyra/cli/pipeline_app.py

@@ -19,6 +19,7 @@ from collections import OrderedDict
 import json
 from operator import itemgetter
 import os
+from pathlib import Path
 from typing import Optional
 import warnings
 
@@ -30,11 +31,15 @@ from yaspin import yaspin
 from elyra._version import __version__
 from elyra.metadata.manager import MetadataManager
 from elyra.metadata.schema import SchemaManager
+from elyra.metadata.schemaspaces import Runtimes
 from elyra.pipeline.parser import PipelineParser
 from elyra.pipeline.pipeline_definition import Pipeline
 from elyra.pipeline.pipeline_definition import PipelineDefinition
 from elyra.pipeline.processor import PipelineProcessorManager
 from elyra.pipeline.processor import PipelineProcessorResponse
+from elyra.pipeline.runtime_type import RuntimeProcessorType
+from elyra.pipeline.runtime_type import RuntimeTypeResources
+from elyra.pipeline.runtimes_metadata import RuntimesMetadata
 from elyra.pipeline.validation import PipelineValidationManager
 from elyra.pipeline.validation import ValidationSeverity
 
@@ -45,20 +50,41 @@ SEVERITY = {ValidationSeverity.Error: 'Error',
             ValidationSeverity.Information: 'Information'}
 
 
-def _get_runtime_type(runtime_config: Optional[str]) -> Optional[str]:
-    if not runtime_config or runtime_config == 'local':
+def _get_runtime_config(runtime_config_name: Optional[str]) -> Optional[RuntimesMetadata]:
+    """Fetch runtime configuration for the specified name"""
+    if not runtime_config_name or runtime_config_name == 'local':
         # No runtime configuration was specified or it is local.
         # Cannot use metadata manager to determine the runtime type.
-        return 'local'
+        return None
     try:
-        metadata_manager = MetadataManager(schemaspace='runtimes')
-        metadata = metadata_manager.get(runtime_config)
-        return metadata.schema_name
+        metadata_manager = MetadataManager(schemaspace=Runtimes.RUNTIMES_SCHEMASPACE_NAME)
+        return metadata_manager.get(runtime_config_name)
     except Exception as e:
-        raise click.ClickException(f'Invalid runtime configuration: {runtime_config}\n {e}')
+        raise click.ClickException(f'Invalid runtime configuration: {runtime_config_name}\n {e}')
+
+
+def _get_runtime_type(runtime_config_name: Optional[str]) -> Optional[str]:
+    """Get runtime type for the provided runtime configuration name"""
+    runtime_config = _get_runtime_config(runtime_config_name)
+    if runtime_config:
+        return runtime_config.metadata.get('runtime_type')
+    return None
+
+
+def _get_runtime_schema_name(runtime_config_name: Optional[str]) -> Optional[str]:
+    """Get runtime schema name for the provided runtime configuration name"""
+    if not runtime_config_name or runtime_config_name == 'local':
+        # No runtime configuration was specified or it is local.
+        # Cannot use metadata manager to determine the runtime type.
+        return 'local'
+    runtime_config = _get_runtime_config(runtime_config_name)
+    if runtime_config:
+        return runtime_config.schema_name
+    return None
 
 
 def _get_runtime_display_name(schema_name: Optional[str]) -> Optional[str]:
+    """Return the display name for the specified runtime schema_name"""
     if not schema_name or schema_name == 'local':
         # No schema name was specified or it is local.
         # Cannot use metadata manager to determine the display name.
@@ -66,7 +92,7 @@ def _get_runtime_display_name(schema_name: Optional[str]) -> Optional[str]:
 
     try:
         schema_manager = SchemaManager.instance()
-        schema = schema_manager.get_schema('runtimes', schema_name)
+        schema = schema_manager.get_schema(Runtimes.RUNTIMES_SCHEMASPACE_NAME, schema_name)
         return schema['display_name']
     except Exception as e:
         raise click.ClickException(f'Invalid runtime configuration: {schema_name}\n {e}')
@@ -86,12 +112,6 @@ def _validate_pipeline_runtime(primary_pipeline: Pipeline, runtime: str) -> bool
     return is_valid
 
 
-def _validate_pipeline_file_extension(pipeline_file: str):
-    extension = os.path.splitext(pipeline_file)[1]
-    if extension != '.pipeline':
-        raise click.ClickException('Pipeline file should be a [.pipeline] file.\n')
-
-
 def _preprocess_pipeline(pipeline_path: str,
                          runtime: Optional[str] = None,
                          runtime_config: Optional[str] = None) -> dict:
@@ -166,7 +186,8 @@ def _print_issues(issues):
     click.echo("")
 
 
-def _validate_pipeline_definition(pipeline_definition):
+def _validate_pipeline_definition(pipeline_definition: PipelineDefinition):
+    """Validate pipeline definition and display issues"""
 
     click.echo("Validating pipeline...")
     # validate pipeline
@@ -178,7 +199,8 @@ def _validate_pipeline_definition(pipeline_definition):
     _print_issues(issues)
 
     if validation_response.has_fatal:
-        raise click.ClickException("Pipeline validation FAILED. The pipeline was not submitted for execution.")
+        # raise an exception and let the caller decide what to do
+        raise click.ClickException("Unable to continue due to pipeline validation issues.")
 
 
 def _execute_pipeline(pipeline_definition) -> PipelineProcessorResponse:
@@ -197,6 +219,15 @@ def _execute_pipeline(pipeline_definition) -> PipelineProcessorResponse:
         raise click.ClickException(f'Error processing pipeline: \n {re} \n {re.__cause__}')
 
 
+def validate_pipeline_path(ctx, param, value):
+    """Callback for pipeline_path parameter"""
+    if not value.is_file():
+        raise click.BadParameter(f"'{value}' is not a file.")
+    if value.suffix != '.pipeline':
+        raise click.BadParameter(f"'{value}' is not a .pipeline file.")
+    return value
+
+
 def print_banner(title):
     click.echo(Fore.CYAN + "────────────────────────────────────────────────────────────────" + Style.RESET_ALL)
     click.echo(Fore.CYAN + " {}".format(title) + Style.RESET_ALL)
@@ -234,7 +265,9 @@ def pipeline():
 @click.option('--runtime-config',
               required=False,
               help='Runtime config where the pipeline should be processed')
-@click.argument('pipeline_path')
+@click.argument('pipeline_path',
+                type=Path,
+                callback=validate_pipeline_path)
 def validate(pipeline_path, runtime_config='local'):
     """
     Validate pipeline
@@ -243,18 +276,21 @@ def validate(pipeline_path, runtime_config='local'):
 
     print_banner("Elyra Pipeline Validation")
 
-    runtime = _get_runtime_type(runtime_config)
-
-    _validate_pipeline_file_extension(pipeline_path)
+    runtime = _get_runtime_schema_name(runtime_config)
 
     pipeline_definition = \
         _preprocess_pipeline(pipeline_path, runtime=runtime, runtime_config=runtime_config)
 
-    _validate_pipeline_definition(pipeline_definition)
+    try:
+        _validate_pipeline_definition(pipeline_definition)
+    except Exception:
+        raise click.ClickException("Pipeline validation FAILED.")
 
 
 @click.command()
-@click.argument('pipeline_path')
+@click.argument('pipeline_path',
+                type=Path,
+                callback=validate_pipeline_path)
 @click.option('--json',
               'json_option',
               is_flag=True,
@@ -272,14 +308,15 @@ def submit(json_option, pipeline_path, runtime_config):
 
     print_banner("Elyra Pipeline Submission")
 
-    runtime = _get_runtime_type(runtime_config)
-
-    _validate_pipeline_file_extension(pipeline_path)
+    runtime = _get_runtime_schema_name(runtime_config)
 
     pipeline_definition = \
         _preprocess_pipeline(pipeline_path, runtime=runtime, runtime_config=runtime_config)
 
-    _validate_pipeline_definition(pipeline_definition)
+    try:
+        _validate_pipeline_definition(pipeline_definition)
+    except Exception:
+        raise click.ClickException("Pipeline validation FAILED. The pipeline was not submitted for execution.")
 
     with yaspin(text="Submitting pipeline..."):
         response: PipelineProcessorResponse = _execute_pipeline(pipeline_definition)
@@ -313,7 +350,9 @@ def submit(json_option, pipeline_path, runtime_config):
               is_flag=True,
               required=False,
               help='Display pipeline summary in JSON format')
-@click.argument('pipeline_path')
+@click.argument('pipeline_path',
+                type=Path,
+                callback=validate_pipeline_path)
 def run(json_option, pipeline_path):
     """
     Run a pipeline in your local environment
@@ -322,12 +361,13 @@ def run(json_option, pipeline_path):
 
     print_banner("Elyra Pipeline Local Run")
 
-    _validate_pipeline_file_extension(pipeline_path)
-
     pipeline_definition = \
         _preprocess_pipeline(pipeline_path, runtime='local', runtime_config='local')
 
-    _validate_pipeline_definition(pipeline_definition)
+    try:
+        _validate_pipeline_definition(pipeline_definition)
+    except Exception:
+        raise click.ClickException("Pipeline validation FAILED. The pipeline was not run.")
 
     response = _execute_pipeline(pipeline_definition)
 
@@ -346,7 +386,9 @@ def run(json_option, pipeline_path):
               is_flag=True,
               required=False,
               help='Display pipeline summary in JSON format')
-@click.argument('pipeline_path')
+@click.argument('pipeline_path',
+                type=Path,
+                callback=validate_pipeline_path)
 def describe(json_option, pipeline_path):
     """
     Display pipeline summary
@@ -362,8 +404,6 @@ def describe(json_option, pipeline_path):
     pipeline_keys = ["name", "description", "type", "version", "nodes", "file_dependencies", "component_dependencies"]
     iter_keys = {"file_dependencies", "component_dependencies"}
 
-    _validate_pipeline_file_extension(pipeline_path)
-
     pipeline_definition = \
         _preprocess_pipeline(pipeline_path, runtime='local', runtime_config='local')
 
@@ -408,7 +448,117 @@ def describe(json_option, pipeline_path):
         click.echo(json.dumps(describe_dict, indent=indent_length))
 
 
+@click.command()
+@click.argument('pipeline_path',
+                type=Path,
+                callback=validate_pipeline_path)
+@click.option('--runtime-config',
+              required=True,
+              help='Runtime configuration name.')
+@click.option('--output',
+              required=False,
+              type=Path,
+              help='Exported file name (including optional path). Defaults to '
+                   'the current directory and the pipeline name.')
+@click.option('--overwrite',
+              is_flag=True,
+              help='Overwrite output file if it already exists.')
+def export(pipeline_path, runtime_config, output, overwrite):
+    """
+    Export a pipeline to a runtime-specific format
+    """
+
+    click.echo()
+    print_banner("Elyra Pipeline Export")
+
+    rtc = _get_runtime_config(runtime_config)
+    runtime_schema = rtc.schema_name
+    runtime_type = rtc.metadata.get('runtime_type')
+
+    pipeline_definition = \
+        _preprocess_pipeline(pipeline_path, runtime=runtime_schema, runtime_config=runtime_config)
+
+    # Verify that the pipeline's runtime type is compatible with the
+    # runtime configuration
+    pipeline_runtime_type = pipeline_definition.get('pipelines', [{}])[0]\
+                                               .get('app_data', {})\
+                                               .get('runtime_type', 'Generic')
+    if pipeline_runtime_type and\
+       pipeline_runtime_type != 'Generic' and\
+       pipeline_runtime_type != runtime_type:
+        raise click.BadParameter(f"The runtime configuration type '{runtime_type}' does not match "
+                                 f"the pipeline's runtime type '{pipeline_runtime_type}'.",
+                                 param_hint='--runtime-config')
+
+    resources = RuntimeTypeResources.get_instance_by_type(
+        RuntimeProcessorType.get_instance_by_name(runtime_type))
+    supported_export_formats = resources.get_export_extensions()
+    if len(supported_export_formats) == 0:
+        raise click.ClickException(f"Runtime type '{runtime_type}' does not support export.")
+
+    # If, in the future, a runtime supports multiple export output formats,
+    # the user can choose one. For now, choose the only option.
+    selected_export_format = supported_export_formats[0]
+    selected_export_format_suffix = f'.{selected_export_format}'
+
+    # generate output file name from the user-provided input
+    if output is None:
+        # user did not specify an output; use current directory
+        # and derive the file name from the pipeline file name
+        output_path = Path.cwd()
+        filename = f"{Path(pipeline_path).stem}{selected_export_format_suffix}"
+    else:
+        if output.suffix == selected_export_format_suffix:
+            # user provided a file name
+            output_path = output.parent
+            filename = output.name
+        else:
+            # user provided a directory
+            output_path = output
+            filename = f"{Path(pipeline_path).stem}{selected_export_format_suffix}"
+    output_file = output_path.resolve() / filename
+
+    # verify that the output path meets the prerequisites
+    if not output_file.parent.is_dir():
+        try:
+            output_file.parent.mkdir(parents=True, exist_ok=True)
+        except Exception as ex:
+            raise click.BadParameter(f'Cannot create output directory: {ex}',
+                                     param_hint='--output')
+
+    # handle output overwrite
+    if output_file.exists() and not overwrite:
+        raise click.ClickException(f"Output file '{str(output_file)}' exists and "
+                                   "option '--overwrite' was not specified.")
+
+    # validate the pipeline
+    try:
+        _validate_pipeline_definition(pipeline_definition)
+    except Exception:
+        raise click.ClickException("Pipeline validation FAILED. The pipeline was not exported.")
+
+    with yaspin(text='Exporting pipeline ...'):
+        try:
+            # parse pipeline
+            pipeline_object = PipelineParser().parse(pipeline_definition)
+            # process pipeline
+            with warnings.catch_warnings():
+                warnings.simplefilter("ignore")
+                asyncio.get_event_loop().run_until_complete(
+                    PipelineProcessorManager.instance().export(pipeline_object,
+                                                               selected_export_format,
+                                                               str(output_file),
+                                                               True))
+        except ValueError as ve:
+            raise click.ClickException(f'Error parsing pipeline: \n {ve}')
+        except RuntimeError as re:
+            raise click.ClickException(f'Error exporting pipeline: \n {re} \n {re.__cause__}')
+
+    click.echo(f"Pipeline was exported to '{str(output_file)}'.")
+
+
 pipeline.add_command(describe)
 pipeline.add_command(validate)
 pipeline.add_command(submit)
 pipeline.add_command(run)
+pipeline.add_command(export)
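
The compatibility check in `export` above encodes one rule: a generic pipeline can be exported for any runtime, while a runtime-specific pipeline must match the runtime configuration's type exactly. The check can be sketched in isolation like this (`check_runtime_compatibility` and the plain `ValueError` are illustrative; the actual command raises `click.BadParameter`):

```python
def check_runtime_compatibility(pipeline_runtime_type: str,
                                config_runtime_type: str) -> None:
    """Sketch of the export command's compatibility rule: generic
    pipelines match any runtime; typed pipelines must match exactly."""
    if pipeline_runtime_type and \
       pipeline_runtime_type != 'Generic' and \
       pipeline_runtime_type != config_runtime_type:
        raise ValueError(
            f"The runtime configuration type '{config_runtime_type}' does not "
            f"match the pipeline's runtime type '{pipeline_runtime_type}'.")
```

So exporting the `generic.pipeline` test fixture with an Apache Airflow runtime configuration passes the check, while exporting a `KUBEFLOW_PIPELINES` pipeline with that same configuration is rejected before any validation or processing happens.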

+ 88 - 0
elyra/tests/cli/resources/pipelines/airflow.pipeline

@@ -0,0 +1,88 @@
+{
+  "doc_type": "pipeline",
+  "version": "3.0",
+  "json_schema": "http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
+  "id": "elyra-auto-generated-pipeline",
+  "primary_pipeline": "primary",
+  "pipelines": [
+    {
+      "id": "primary",
+      "nodes": [
+        {
+          "id": "1ba020d6-517e-4f19-8bcd-6fa987b3df93",
+          "type": "execution_node",
+          "op": "execute-notebook-node",
+          "app_data": {
+            "component_parameters": {
+              "filename": "hello.ipynb",
+              "outputs": [],
+              "env_vars": [],
+              "dependencies": [],
+              "include_subdirectories": false,
+              "runtime_image": "amancevice/pandas:1.1.1"
+            },
+            "label": "",
+            "ui_data": {
+              "label": "hello.ipynb",
+              "image": "data:image/svg+xml;utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%2216%22%20viewBox%3D%220%200%2022%2022%22%3E%0A%20%20%3Cg%20class%3D%22jp-icon-warn0%20jp-icon-selectable%22%20fill%3D%22%23EF6C00%22%3E%0A%20%20%20%20%3Cpath%20d%3D%22M18.7%203.3v15.4H3.3V3.3h15.4m1.5-1.5H1.8v18.3h18.3l.1-18.3z%22%2F%3E%0A%20%20%20%20%3Cpath%20d%3D%22M16.5%2016.5l-5.4-4.3-5.6%204.3v-11h11z%22%2F%3E%0A%20%20%3C%2Fg%3E%0A%3C%2Fsvg%3E%0A",
+              "x_pos": 175,
+              "y_pos": 110,
+              "description": "Run notebook file",
+              "decorations": [
+                {
+                  "id": "error",
+                  "image": "data:image/svg+xml;utf8,%3Csvg%20focusable%3D%22false%22%20preserveAspectRatio%3D%22xMidYMid%20meet%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20fill%3D%22%23da1e28%22%20width%3D%2216%22%20height%3D%2216%22%20viewBox%3D%220%200%2016%2016%22%20aria-hidden%3D%22true%22%3E%3Ccircle%20cx%3D%228%22%20cy%3D%228%22%20r%3D%228%22%20fill%3D%22%23ffffff%22%3E%3C%2Fcircle%3E%3Cpath%20d%3D%22M8%2C1C4.2%2C1%2C1%2C4.2%2C1%2C8s3.2%2C7%2C7%2C7s7-3.1%2C7-7S11.9%2C1%2C8%2C1z%20M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2%09c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%3E%3C%2Fpath%3E%3Cpath%20d%3D%22M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8%09c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%20data-icon-path%3D%22inner-path%22%20opacity%3D%220%22%3E%3C%2Fpath%3E%3C%2Fsvg%3E",
+                  "outline": false,
+                  "position": "topRight",
+                  "x_pos": -24,
+                  "y_pos": -8
+                }
+              ]
+            }
+          },
+          "inputs": [
+            {
+              "id": "inPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Input Port"
+                }
+              }
+            }
+          ],
+          "outputs": [
+            {
+              "id": "outPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Output Port"
+                }
+              }
+            }
+          ]
+        }
+      ],
+      "app_data": {
+        "ui_data": {
+          "comments": []
+        },
+        "version": 7,
+        "runtime_type": "APACHE_AIRFLOW",
+        "properties": {
+          "name": "untitled",
+          "runtime": "Apache Airflow"
+        }
+      },
+      "runtime_ref": ""
+    }
+  ],
+  "schemas": []
+}

+ 87 - 0
elyra/tests/cli/resources/pipelines/generic.pipeline

@@ -0,0 +1,87 @@
+{
+  "doc_type": "pipeline",
+  "version": "3.0",
+  "json_schema": "http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
+  "id": "elyra-auto-generated-pipeline",
+  "primary_pipeline": "primary",
+  "pipelines": [
+    {
+      "id": "primary",
+      "nodes": [
+        {
+          "id": "a538e06c-2577-45bd-b5f9-36d5cd47dd74",
+          "type": "execution_node",
+          "op": "execute-notebook-node",
+          "app_data": {
+            "component_parameters": {
+              "filename": "hello.ipynb",
+              "outputs": [],
+              "env_vars": [],
+              "dependencies": [],
+              "include_subdirectories": false,
+              "runtime_image": "amancevice/pandas:1.1.1"
+            },
+            "label": "",
+            "ui_data": {
+              "label": "hello.ipynb",
+              "image": "data:image/svg+xml;utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%2216%22%20viewBox%3D%220%200%2022%2022%22%3E%0A%20%20%3Cg%20class%3D%22jp-icon-warn0%20jp-icon-selectable%22%20fill%3D%22%23EF6C00%22%3E%0A%20%20%20%20%3Cpath%20d%3D%22M18.7%203.3v15.4H3.3V3.3h15.4m1.5-1.5H1.8v18.3h18.3l.1-18.3z%22%2F%3E%0A%20%20%20%20%3Cpath%20d%3D%22M16.5%2016.5l-5.4-4.3-5.6%204.3v-11h11z%22%2F%3E%0A%20%20%3C%2Fg%3E%0A%3C%2Fsvg%3E%0A",
+              "x_pos": 170,
+              "y_pos": 114,
+              "description": "Run notebook file",
+              "decorations": [
+                {
+                  "id": "error",
+                  "image": "data:image/svg+xml;utf8,%3Csvg%20focusable%3D%22false%22%20preserveAspectRatio%3D%22xMidYMid%20meet%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20fill%3D%22%23da1e28%22%20width%3D%2216%22%20height%3D%2216%22%20viewBox%3D%220%200%2016%2016%22%20aria-hidden%3D%22true%22%3E%3Ccircle%20cx%3D%228%22%20cy%3D%228%22%20r%3D%228%22%20fill%3D%22%23ffffff%22%3E%3C%2Fcircle%3E%3Cpath%20d%3D%22M8%2C1C4.2%2C1%2C1%2C4.2%2C1%2C8s3.2%2C7%2C7%2C7s7-3.1%2C7-7S11.9%2C1%2C8%2C1z%20M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2%09c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%3E%3C%2Fpath%3E%3Cpath%20d%3D%22M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8%09c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%20data-icon-path%3D%22inner-path%22%20opacity%3D%220%22%3E%3C%2Fpath%3E%3C%2Fsvg%3E",
+                  "outline": false,
+                  "position": "topRight",
+                  "x_pos": -24,
+                  "y_pos": -8
+                }
+              ]
+            }
+          },
+          "inputs": [
+            {
+              "id": "inPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Input Port"
+                }
+              }
+            }
+          ],
+          "outputs": [
+            {
+              "id": "outPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Output Port"
+                }
+              }
+            }
+          ]
+        }
+      ],
+      "app_data": {
+        "ui_data": {
+          "comments": []
+        },
+        "version": 7,
+        "properties": {
+          "name": "generic",
+          "runtime": "Generic"
+        }
+      },
+      "runtime_ref": ""
+    }
+  ],
+  "schemas": []
+}

+ 35 - 0
elyra/tests/cli/resources/pipelines/hello.ipynb

@@ -0,0 +1,35 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6b13ff91-8b09-41e2-bb0e-c54c4ca8759c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print('hello')"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.7.12"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

+ 0 - 0
elyra/tests/cli/resources/kfp_3_node_custom.pipeline → elyra/tests/cli/resources/pipelines/kfp_3_node_custom.pipeline


+ 88 - 0
elyra/tests/cli/resources/pipelines/kubeflow_pipelines.pipeline

@@ -0,0 +1,88 @@
+{
+  "doc_type": "pipeline",
+  "version": "3.0",
+  "json_schema": "http://api.dataplatform.ibm.com/schemas/common-pipeline/pipeline-flow/pipeline-flow-v3-schema.json",
+  "id": "elyra-auto-generated-pipeline",
+  "primary_pipeline": "primary",
+  "pipelines": [
+    {
+      "id": "primary",
+      "nodes": [
+        {
+          "id": "92464c03-21dc-4451-aa89-9d7f10177d24",
+          "type": "execution_node",
+          "op": "execute-notebook-node",
+          "app_data": {
+            "component_parameters": {
+              "filename": "hello.ipynb",
+              "outputs": [],
+              "env_vars": [],
+              "dependencies": [],
+              "include_subdirectories": false,
+              "runtime_image": "amancevice/pandas:1.1.1"
+            },
+            "label": "",
+            "ui_data": {
+              "label": "hello.ipynb",
+              "image": "data:image/svg+xml;utf8,%3Csvg%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20width%3D%2216%22%20viewBox%3D%220%200%2022%2022%22%3E%0A%20%20%3Cg%20class%3D%22jp-icon-warn0%20jp-icon-selectable%22%20fill%3D%22%23EF6C00%22%3E%0A%20%20%20%20%3Cpath%20d%3D%22M18.7%203.3v15.4H3.3V3.3h15.4m1.5-1.5H1.8v18.3h18.3l.1-18.3z%22%2F%3E%0A%20%20%20%20%3Cpath%20d%3D%22M16.5%2016.5l-5.4-4.3-5.6%204.3v-11h11z%22%2F%3E%0A%20%20%3C%2Fg%3E%0A%3C%2Fsvg%3E%0A",
+              "x_pos": 130,
+              "y_pos": 118,
+              "description": "Run notebook file",
+              "decorations": [
+                {
+                  "id": "error",
+                  "image": "data:image/svg+xml;utf8,%3Csvg%20focusable%3D%22false%22%20preserveAspectRatio%3D%22xMidYMid%20meet%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%20fill%3D%22%23da1e28%22%20width%3D%2216%22%20height%3D%2216%22%20viewBox%3D%220%200%2016%2016%22%20aria-hidden%3D%22true%22%3E%3Ccircle%20cx%3D%228%22%20cy%3D%228%22%20r%3D%228%22%20fill%3D%22%23ffffff%22%3E%3C%2Fcircle%3E%3Cpath%20d%3D%22M8%2C1C4.2%2C1%2C1%2C4.2%2C1%2C8s3.2%2C7%2C7%2C7s7-3.1%2C7-7S11.9%2C1%2C8%2C1z%20M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2%09c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%3E%3C%2Fpath%3E%3Cpath%20d%3D%22M7.5%2C4h1v5h-1C7.5%2C9%2C7.5%2C4%2C7.5%2C4z%20M8%2C12.2c-0.4%2C0-0.8-0.4-0.8-0.8s0.3-0.8%2C0.8-0.8%09c0.4%2C0%2C0.8%2C0.4%2C0.8%2C0.8S8.4%2C12.2%2C8%2C12.2z%22%20data-icon-path%3D%22inner-path%22%20opacity%3D%220%22%3E%3C%2Fpath%3E%3C%2Fsvg%3E",
+                  "outline": false,
+                  "position": "topRight",
+                  "x_pos": -24,
+                  "y_pos": -8
+                }
+              ]
+            }
+          },
+          "inputs": [
+            {
+              "id": "inPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Input Port"
+                }
+              }
+            }
+          ],
+          "outputs": [
+            {
+              "id": "outPort",
+              "app_data": {
+                "ui_data": {
+                  "cardinality": {
+                    "min": 0,
+                    "max": -1
+                  },
+                  "label": "Output Port"
+                }
+              }
+            }
+          ]
+        }
+      ],
+      "app_data": {
+        "ui_data": {
+          "comments": []
+        },
+        "version": 7,
+        "runtime_type": "KUBEFLOW_PIPELINES",
+        "properties": {
+          "name": "untitled",
+          "runtime": "Kubeflow Pipelines"
+        }
+      },
+      "runtime_ref": ""
+    }
+  ],
+  "schemas": []
+}

+ 8 - 0
elyra/tests/cli/resources/pipelines/pipeline_with_zero_length_pipelines_field.pipeline

@@ -0,0 +1,8 @@
+{
+    "doc_type": "pipeline",
+    "version": "3.0",
+    "id": "0",
+    "primary_pipeline": "1",
+    "pipelines": [],
+    "schemas": []
+}

+ 21 - 0
elyra/tests/cli/resources/pipelines/pipeline_with_zero_nodes.pipeline

@@ -0,0 +1,21 @@
+{
+    "doc_type": "pipeline",
+    "version": "3.0",
+    "id": "0",
+    "primary_pipeline": "1",
+    "pipelines": [
+        {
+            "id": "1",
+            "nodes": [],
+            "app_data": {
+                "runtime": "",
+                "version": 5,
+                "runtime_type": "KUBEFLOW_PIPELINES",
+                "properties": {
+                    "name": "generic"
+                }
+            },
+            "schemas": []
+        }
+    ]
+}

+ 7 - 0
elyra/tests/cli/resources/pipelines/pipeline_without_pipelines_field.pipeline

@@ -0,0 +1,7 @@
+{
+    "doc_type": "pipeline",
+    "version": "3.0",
+    "id": "0",
+    "primary_pipeline": "1",
+    "schemas": []
+}
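The two fixture files above (missing `pipelines` field, zero-length `pipelines` field) exercise the CLI's structural checks. A minimal standalone sketch of the kind of check the error messages imply — `check_pipelines_field` is a hypothetical helper name, not Elyra's actual implementation:

```python
import json

def check_pipelines_field(pipeline_json: str) -> str:
    """Return an error message mirroring the CLI checks, or '' if the
    top-level 'pipelines' field is present and non-empty."""
    doc = json.loads(pipeline_json)
    if "pipelines" not in doc:
        return "Pipeline is missing 'pipelines' field."
    if len(doc["pipelines"]) == 0:
        return "Pipeline has zero length 'pipelines' field."
    return ""

# content of the two fixture files, abbreviated
no_field = '{"doc_type": "pipeline", "version": "3.0", "id": "0", "primary_pipeline": "1", "schemas": []}'
empty_field = '{"doc_type": "pipeline", "version": "3.0", "id": "0", "primary_pipeline": "1", "pipelines": [], "schemas": []}'

print(check_pipelines_field(no_field))     # Pipeline is missing 'pipelines' field.
print(check_pipelines_field(empty_field))  # Pipeline has zero length 'pipelines' field.
```

These are the same messages the parametrized tests below assert on for every subcommand.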

+ 19 - 0
elyra/tests/cli/resources/runtime_configs/valid_airflow_test_config.json

@@ -0,0 +1,19 @@
+{
+  "display_name": "airflow test instance",
+  "metadata": {
+    "api_endpoint": "http://airflowhost:2222",
+    "github_repo": "test-org/test-repo",
+    "github_branch": "test-branch",
+    "github_repo_token": "test-token",
+    "cos_endpoint": "http://miniohost:3333",
+    "cos_bucket": "test-bucket-name",
+    "cos_username": "minio",
+    "cos_password": "minio123",
+    "user_namespace": "default",
+    "git_type": "GITHUB",
+    "github_api_endpoint": "https://api.github.com",
+    "cos_auth_type": "USER_CREDENTIALS",
+    "runtime_type": "APACHE_AIRFLOW"
+  },
+  "schema_name": "airflow"
+}

+ 20 - 0
elyra/tests/cli/resources/runtime_configs/valid_kfp_test_config.json

@@ -0,0 +1,20 @@
+{
+  "display_name": "Kubeflow Pipelines dev environment",
+  "metadata": {
+    "api_endpoint": "http://kfphost/pipeline",
+    "description": "KFP test instance",
+    "cos_endpoint": "http://miniohost:12345",
+    "cos_username": "minio",
+    "cos_password": "minio123",
+    "cos_bucket": "kfp-test-bucket",
+    "tags": ["dev"],
+    "runtime_type": "KUBEFLOW_PIPELINES",
+    "engine": "Argo",
+    "auth_type": "DEX_STATIC_PASSWORDS",
+    "cos_auth_type": "USER_CREDENTIALS",
+    "user_namespace": "kubeflow-user-example-com",
+    "api_username": "a-user",
+    "api_password": "t0ps3cr3t"
+  },
+  "schema_name": "kfp"
+}
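The test fixtures read these runtime configuration files from disk and hand the parsed dictionary to Elyra's `MetadataManager`. A minimal standalone sketch of that loading step (no Elyra imports; the config dictionary is abbreviated from the file above):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# abbreviated copy of valid_kfp_test_config.json
config = {
    "display_name": "Kubeflow Pipelines dev environment",
    "metadata": {
        "api_endpoint": "http://kfphost/pipeline",
        "cos_endpoint": "http://miniohost:12345",
        "runtime_type": "KUBEFLOW_PIPELINES",
    },
    "schema_name": "kfp",
}

with TemporaryDirectory() as tmp:
    # write the file, then load it the way the fixture does
    instance_config_file = Path(tmp) / "valid_kfp_test_config.json"
    instance_config_file.write_text(json.dumps(config))
    with open(instance_config_file, "r") as fd:
        instance_config = json.load(fd)

print(instance_config["schema_name"])  # kfp
```

In the real fixture, `instance_config` is then passed to `MetadataManager.create(instance_name, Metadata(**instance_config))`, and the instance is removed again after the test.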

+ 400 - 214
elyra/tests/cli/test_pipeline_app.py

@@ -15,7 +15,8 @@
 #
 """Tests for elyra-pipeline application"""
 import json
-import os
+from pathlib import Path
+import shutil
 
 from click.testing import CliRunner
 from conftest import KFP_COMPONENT_CACHE_INSTANCE
@@ -26,58 +27,58 @@ from elyra.metadata.manager import MetadataManager
 from elyra.metadata.metadata import Metadata
 from elyra.metadata.schemaspaces import Runtimes
 
-SUB_COMMANDS = ['run', 'submit', 'describe', 'validate']
-
-PIPELINE_SOURCE_WITH_ZERO_LENGTH_PIPELINES_FIELD = \
-    '{"doc_type":"pipeline","version":"3.0","id":"0","primary_pipeline":"1","pipelines":[],"schemas":[]}'
-
-PIPELINE_SOURCE_WITHOUT_PIPELINES_FIELD = \
-    '{"doc_type":"pipeline","version":"3.0","id":"0","primary_pipeline":"1","schemas":[]}'
-
-PIPELINE_SOURCE_WITH_ZERO_NODES = \
-    '{"doc_type":"pipeline","version":"3.0","id":"0","primary_pipeline":"1","pipelines":[{"id":"1","nodes":[],"app_data":{"runtime":"","version": 5, "runtime_type": "KUBEFLOW_PIPELINES", "properties": {"name": "generic"}}, "schemas":[]}]}'  # noqa
-
-KFP_RUNTIME_INSTANCE = {
-    "display_name": "PipelineApp KFP runtime instance",
-    "metadata": {
-        "api_endpoint": "http://acme.com:32470/pipeline",
-        "cos_endpoint": "http://acme.com:30205",
-        "cos_username": "minio",
-        "cos_password": "miniosecret",
-        "cos_bucket": "my-bucket",
-        "tags": [],
-        "engine": "Argo",
-        "user_namespace": "kubeflow-user-example-com",
-        "api_username": "user@example.com",
-        "api_password": "12341234",
-        "runtime_type": "KUBEFLOW_PIPELINES",
-        "auth_type": "DEX_LEGACY"
-    },
-    "schema_name": "kfp"
-}
+# used to drive generic parameter handling tests
+SUB_COMMANDS = ['run', 'submit', 'describe', 'validate', 'export']
 
 
 @pytest.fixture
-def kfp_runtime_instance():
-    """Creates an instance of a kfp scehma and removes after test. """
-    instance_name = "pipeline_app_test"
+def kubeflow_pipelines_runtime_instance():
+    """Creates a Kubeflow Pipelines RTC and removes it after test. """
+    instance_name = "valid_kfp_test_config"
+    instance_config_file = Path(__file__).parent / 'resources' / 'runtime_configs' / f'{instance_name}.json'
+    with open(instance_config_file, 'r') as fd:
+        instance_config = json.load(fd)
+
     md_mgr = MetadataManager(schemaspace=Runtimes.RUNTIMES_SCHEMASPACE_ID)
     # clean possible orphaned instance...
     try:
         md_mgr.remove(instance_name)
     except Exception:
         pass
-    runtime_instance = md_mgr.create(instance_name, Metadata(**KFP_RUNTIME_INSTANCE))
+    runtime_instance = md_mgr.create(instance_name, Metadata(**instance_config))
+    yield runtime_instance.name
+    md_mgr.remove(runtime_instance.name)
+
+
+@pytest.fixture
+def airflow_runtime_instance():
+    """Creates an airflow RTC and removes it after test. """
+    instance_name = "valid_airflow_test_config"
+    instance_config_file = Path(__file__).parent / 'resources' / 'runtime_configs' / f'{instance_name}.json'
+    with open(instance_config_file, 'r') as fd:
+        instance_config = json.load(fd)
+
+    md_mgr = MetadataManager(schemaspace=Runtimes.RUNTIMES_SCHEMASPACE_ID)
+    # clean possible orphaned instance...
+    try:
+        md_mgr.remove(instance_name)
+    except Exception:
+        pass
+    runtime_instance = md_mgr.create(instance_name, Metadata(**instance_config))
     yield runtime_instance.name
     md_mgr.remove(runtime_instance.name)
 
 
 def test_no_opts():
+    """Verify that all commands are displayed in help"""
     runner = CliRunner()
     result = runner.invoke(pipeline)
     assert 'run       Run a pipeline in your local environment' in result.output
     assert 'submit    Submit a pipeline to be executed on the server' in result.output
     assert 'describe  Display pipeline summary' in result.output
+    assert 'export    Export a pipeline to a runtime-specific format' in result.output
+    assert 'validate  Validate pipeline' in result.output
+
     assert result.exit_code == 0
 
 
@@ -88,214 +89,110 @@ def test_bad_subcommand():
     assert result.exit_code != 0
 
 
-def test_subcommand_no_opts():
+@pytest.mark.parametrize("subcommand", SUB_COMMANDS)
+def test_subcommand_no_opts(subcommand):
     runner = CliRunner()
-    for command in SUB_COMMANDS:
-        result = runner.invoke(pipeline, [command])
-        assert "Error: Missing argument 'PIPELINE_PATH'" in result.output
-        assert result.exit_code != 0
-
-
-def test_run_with_invalid_pipeline():
-    runner = CliRunner()
-
-    result = runner.invoke(pipeline, ['run', 'foo.pipeline'])
-    assert "Pipeline file not found:" in result.output
-    assert "foo.pipeline" in result.output
-    assert result.exit_code != 0
-
-
-def test_submit_with_invalid_pipeline(kfp_runtime_instance):
-    runner = CliRunner()
-
-    result = runner.invoke(pipeline, ['submit', 'foo.pipeline',
-                                      '--runtime-config', kfp_runtime_instance])
-    assert "Pipeline file not found:" in result.output
-    assert "foo.pipeline" in result.output
-    assert result.exit_code != 0
-
-
-def test_describe_with_invalid_pipeline():
-    runner = CliRunner()
-
-    result = runner.invoke(pipeline, ['describe', 'foo.pipeline'])
-    assert "Pipeline file not found:" in result.output
-    assert "foo.pipeline" in result.output
+    result = runner.invoke(pipeline, [subcommand])
     assert result.exit_code != 0
+    assert "Error: Missing argument 'PIPELINE_PATH'" in result.output
 
 
-def test_validate_with_invalid_pipeline():
+@pytest.mark.parametrize("subcommand", SUB_COMMANDS)
+def test_subcommand_invalid_pipeline_path(subcommand):
+    """Verify that every command only accepts a valid pipeline_path file name"""
     runner = CliRunner()
 
-    result = runner.invoke(pipeline, ['validate', 'foo.pipeline'])
-    assert "Pipeline file not found:" in result.output
-    assert "foo.pipeline" in result.output
+    # test: file not found
+    file_name = 'no-such.pipeline'
+    result = runner.invoke(pipeline, [subcommand, file_name])
     assert result.exit_code != 0
+    assert f"Invalid value for 'PIPELINE_PATH': '{file_name}' is not a file." in result.output
 
-
-def test_run_with_unsupported_file_type():
-    runner = CliRunner()
+    # test: file with wrong extension
     with runner.isolated_filesystem():
-        with open('foo.ipynb', 'w') as f:
-            f.write('{ "nbformat": 4, "cells": [] }')
-
-        result = runner.invoke(pipeline, ['run', 'foo.ipynb'])
-        assert "Pipeline file should be a [.pipeline] file" in result.output
+        file_name = 'wrong.extension'
+        with open(file_name, 'w') as f:
+            f.write('I am not a pipeline file.')
+        result = runner.invoke(pipeline, [subcommand, file_name])
         assert result.exit_code != 0
+        assert f"Invalid value for 'PIPELINE_PATH': '{file_name}' is not a .pipeline file." in result.output
 
 
-def test_submit_with_unsupported_file_type(kfp_runtime_instance):
+@pytest.mark.parametrize("subcommand", SUB_COMMANDS)
+def test_subcommand_with_no_pipelines_field(subcommand, kubeflow_pipelines_runtime_instance):
+    """Verify that every command properly detects pipeline issues"""
     runner = CliRunner()
     with runner.isolated_filesystem():
-        with open('foo.ipynb', 'w') as f:
-            f.write('{ "nbformat": 4, "cells": [] }')
-
-        result = runner.invoke(pipeline, ['submit', 'foo.ipynb',
-                                          '--runtime-config', kfp_runtime_instance])
-        assert "Pipeline file should be a [.pipeline] file" in result.output
-        assert result.exit_code != 0
-
+        pipeline_file = 'pipeline_without_pipelines_field.pipeline'
+        pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / pipeline_file
+        assert pipeline_file_path.is_file()
 
-def test_describe_with_unsupported_file_type():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.ipynb', 'w') as f:
-            f.write('{ "nbformat": 4, "cells": [] }')
+        # every CLI command invocation requires these parameters
+        invoke_parameters = [subcommand, str(pipeline_file_path)]
+        if subcommand in ['submit', 'export']:
+            # these commands also require a runtime configuration
+            invoke_parameters.extend(['--runtime-config', kubeflow_pipelines_runtime_instance])
 
-        result = runner.invoke(pipeline, ['describe', 'foo.ipynb'])
-        assert "Pipeline file should be a [.pipeline] file" in result.output
+        result = runner.invoke(pipeline, invoke_parameters)
         assert result.exit_code != 0
-
-
-def test_validate_with_unsupported_file_type():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.ipynb', 'w') as f:
-            f.write('{ "nbformat": 4, "cells": [] }')
-
-        result = runner.invoke(pipeline, ['validate', 'foo.ipynb'])
-        assert "Pipeline file should be a [.pipeline] file" in result.output
-        assert result.exit_code != 0
-
-
-def test_run_with_no_pipelines_field():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITHOUT_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['run', pipeline_file_path])
-        assert "Pipeline is missing 'pipelines' field." in result.output
-        assert result.exit_code != 0
-
-
-def test_submit_with_no_pipelines_field(kfp_runtime_instance):
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITHOUT_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['submit', pipeline_file_path,
-                                          '--runtime-config', kfp_runtime_instance])
         assert "Pipeline is missing 'pipelines' field." in result.output
-        assert result.exit_code != 0
 
 
-def test_describe_with_no_pipelines_field():
+@pytest.mark.parametrize("subcommand", SUB_COMMANDS)
+def test_subcommand_with_zero_length_pipelines_field(subcommand, kubeflow_pipelines_runtime_instance):
+    """Verify that every command properly detects pipeline issues"""
     runner = CliRunner()
     with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITHOUT_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['describe', pipeline_file_path])
-        assert "Pipeline is missing 'pipelines' field." in result.output
-        assert result.exit_code != 0
-
+        pipeline_file = 'pipeline_with_zero_length_pipelines_field.pipeline'
+        pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / pipeline_file
+        assert pipeline_file_path.is_file()
 
-def test_validate_with_no_pipelines_field():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITHOUT_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
+        # every CLI command invocation requires these parameters
+        invoke_parameters = [subcommand, str(pipeline_file_path)]
+        if subcommand in ['submit', 'export']:
+            # these commands also require a runtime configuration
+            invoke_parameters.extend(['--runtime-config', kubeflow_pipelines_runtime_instance])
 
-        result = runner.invoke(pipeline, ['validate', pipeline_file_path])
-        assert "Pipeline is missing 'pipelines' field." in result.output
+        result = runner.invoke(pipeline, invoke_parameters)
         assert result.exit_code != 0
-
-
-def test_run_with_zero_length_pipelines_field():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_LENGTH_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['run', pipeline_file_path])
         assert "Pipeline has zero length 'pipelines' field." in result.output
-        assert result.exit_code != 0
 
 
-def test_submit_with_zero_length_pipelines_field(kfp_runtime_instance):
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_LENGTH_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
+@pytest.mark.parametrize("subcommand", SUB_COMMANDS)
+def test_subcommand_with_no_nodes(subcommand, kubeflow_pipelines_runtime_instance):
+    """Verify that every command properly detects pipeline issues"""
 
-        result = runner.invoke(pipeline, ['submit', pipeline_file_path,
-                                          '--runtime-config', kfp_runtime_instance])
-        assert "Pipeline has zero length 'pipelines' field." in result.output
-        assert result.exit_code != 0
+    # don't run this test for the `describe` command
+    # (see test_describe_with_no_nodes)
+    if subcommand == 'describe':
+        return
 
-
-def test_describe_with_zero_length_pipelines_field():
     runner = CliRunner()
     with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_LENGTH_PIPELINES_FIELD)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['describe', pipeline_file_path])
-        assert "Pipeline has zero length 'pipelines' field." in result.output
-        assert result.exit_code != 0
+        pipeline_file = 'pipeline_with_zero_nodes.pipeline'
+        pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / pipeline_file
+        assert pipeline_file_path.is_file()
 
+        # every CLI command invocation requires these parameters
+        invoke_parameters = [subcommand, str(pipeline_file_path)]
+        if subcommand in ['submit', 'export']:
+            # these commands also require a runtime configuration
+            invoke_parameters.extend(['--runtime-config', kubeflow_pipelines_runtime_instance])
 
-def test_run_pipeline_with_no_nodes():
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_NODES)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['run', pipeline_file_path])
-        assert "At least one node must exist in the primary pipeline." in result.output
+        result = runner.invoke(pipeline, invoke_parameters)
         assert result.exit_code != 0
-
-
-def test_submit_pipeline_with_no_nodes(kfp_runtime_instance):
-    runner = CliRunner()
-    with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_NODES)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
-
-        result = runner.invoke(pipeline, ['submit', pipeline_file_path, '--runtime-config', kfp_runtime_instance])
         assert "At least one node must exist in the primary pipeline." in result.output
-        assert result.exit_code != 0
 
 
-def test_describe_with_empty_pipeline():
+def test_describe_with_no_nodes():
     runner = CliRunner()
     with runner.isolated_filesystem():
-        with open('foo.pipeline', 'w') as pipeline_file:
-            pipeline_file.write(PIPELINE_SOURCE_WITH_ZERO_NODES)
-            pipeline_file_path = os.path.join(os.getcwd(), pipeline_file.name)
+        pipeline_file = 'pipeline_with_zero_nodes.pipeline'
+        pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / pipeline_file
+        assert pipeline_file_path.is_file()
 
-        result = runner.invoke(pipeline, ['describe', pipeline_file_path])
+        result = runner.invoke(pipeline, ['describe', str(pipeline_file_path)])
+        assert result.exit_code == 0, result.output
         assert "Description: None" in result.output
         assert "Type: KUBEFLOW_PIPELINES" in result.output
         assert "Nodes: 0" in result.output
@@ -305,9 +202,9 @@ def test_describe_with_empty_pipeline():
 
 def test_describe_with_kfp_components():
     runner = CliRunner()
-    pipeline_file_path = os.path.join(os.path.dirname(__file__), 'resources', 'kfp_3_node_custom.pipeline')
+    pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / 'kfp_3_node_custom.pipeline'
 
-    result = runner.invoke(pipeline, ['describe', pipeline_file_path])
+    result = runner.invoke(pipeline, ['describe', str(pipeline_file_path)])
     assert "Description: 3-node custom component pipeline" in result.output
     assert "Type: KUBEFLOW_PIPELINES" in result.output
     assert "Nodes: 3" in result.output
@@ -322,11 +219,13 @@ def test_describe_with_kfp_components():
 
 
 @pytest.mark.parametrize('component_cache_instance', [KFP_COMPONENT_CACHE_INSTANCE], indirect=True)
-def test_validate_with_kfp_components(kfp_runtime_instance, component_cache_instance):
+def test_validate_with_kfp_components(kubeflow_pipelines_runtime_instance, component_cache_instance):
     runner = CliRunner()
-    pipeline_file_path = os.path.join(os.path.dirname(__file__), 'resources', 'kfp_3_node_custom.pipeline')
-
-    result = runner.invoke(pipeline, ['validate', pipeline_file_path, '--runtime-config', kfp_runtime_instance])
+    pipeline_file_path = Path(__file__).parent / 'resources' / 'pipelines' / 'kfp_3_node_custom.pipeline'
+    result = runner.invoke(pipeline, ['validate',
+                                      str(pipeline_file_path),
+                                      '--runtime-config',
+                                      kubeflow_pipelines_runtime_instance])
     assert "Validating pipeline..." in result.output
     assert result.exit_code == 0
 
@@ -334,8 +233,8 @@ def test_validate_with_kfp_components(kfp_runtime_instance, component_cache_inst
 def test_describe_with_missing_kfp_component():
     runner = CliRunner()
     with runner.isolated_filesystem():
-        valid_file_path = os.path.join(os.path.dirname(__file__), 'resources', 'kfp_3_node_custom.pipeline')
-        pipeline_file_path = os.path.join(os.getcwd(), 'foo.pipeline')
+        valid_file_path = Path(__file__).parent / 'resources' / 'pipelines' / 'kfp_3_node_custom.pipeline'
+        pipeline_file_path = Path.cwd() / 'foo.pipeline'
         with open(pipeline_file_path, 'w') as pipeline_file:
             with open(valid_file_path) as valid_file:
                 valid_data = json.load(valid_file)
@@ -343,18 +242,18 @@ def test_describe_with_missing_kfp_component():
                 valid_data['pipelines'][0]['nodes'][0]['op'] = valid_data['pipelines'][0]['nodes'][0]['op'] + 'Missing'
                 pipeline_file.write(json.dumps(valid_data))
 
-        result = runner.invoke(pipeline, ['describe', pipeline_file_path])
+        result = runner.invoke(pipeline, ['describe', str(pipeline_file_path)])
         assert "Description: 3-node custom component pipeline" in result.output
         assert "Type: KUBEFLOW_PIPELINES" in result.output
         assert "Nodes: 3" in result.output
         assert result.exit_code == 0
 
 
-def test_validate_with_missing_kfp_component(kfp_runtime_instance):
+def test_validate_with_missing_kfp_component(kubeflow_pipelines_runtime_instance):
     runner = CliRunner()
     with runner.isolated_filesystem():
-        valid_file_path = os.path.join(os.path.dirname(__file__), 'resources', 'kfp_3_node_custom.pipeline')
-        pipeline_file_path = os.path.join(os.getcwd(), 'foo.pipeline')
+        valid_file_path = Path(__file__).parent / 'resources' / 'pipelines' / 'kfp_3_node_custom.pipeline'
+        pipeline_file_path = Path.cwd() / 'foo.pipeline'
         with open(pipeline_file_path, 'w') as pipeline_file:
             with open(valid_file_path) as valid_file:
                 valid_data = json.load(valid_file)
@@ -362,7 +261,294 @@ def test_validate_with_missing_kfp_component(kfp_runtime_instance):
                 valid_data['pipelines'][0]['nodes'][0]['op'] = valid_data['pipelines'][0]['nodes'][0]['op'] + 'Missing'
                 pipeline_file.write(json.dumps(valid_data))
 
-        result = runner.invoke(pipeline, ['validate', pipeline_file_path, '--runtime-config', kfp_runtime_instance])
+        result = runner.invoke(pipeline, ['validate',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance])
         assert "Validating pipeline..." in result.output
         assert "[Error][Calculate data hash] - This component was not found in the catalog." in result.output
         assert result.exit_code != 0
+
+# ------------------------------------------------------------------
+# tests for 'export' command
+# ------------------------------------------------------------------
+
+
+def do_mock_export(output_path: str, dir_only=False):
+    # simulate export result
+    p = Path(output_path)
+    # create parent directories, if required
+    if not p.parent.is_dir():
+        p.parent.mkdir(parents=True, exist_ok=True)
+    if dir_only:
+        return
+    # create a mock export file
+    with open(output_path, 'w') as output:
+        output.write('dummy export output')
+
+
+def prepare_export_work_dir(work_dir: str,
+                            source_dir: str):
+    """Copies the files in source_dir to work_dir"""
+    for file in Path(source_dir).glob('*'):
+        shutil.copy(str(file), work_dir)
+    # print for debug purposes; this is only displayed if an assert fails
+    print(f"Work directory content: {list(Path(work_dir).glob('*'))}")
+
+
+def test_export_invalid_runtime_config():
+    """Test user error scenarios: the specified runtime configuration is 'invalid'"""
+    runner = CliRunner()
+
+    # test pipeline; its content is not processed in this test
+    pipeline_file = 'kubeflow_pipelines.pipeline'
+    p = Path(__file__).parent / 'resources' / 'pipelines' / f'{pipeline_file}'
+    assert p.is_file()
+
+    # no runtime configuration was specified
+    result = runner.invoke(pipeline,
+                           ['export',
+                            str(p)])
+    assert result.exit_code != 0, result.output
+    assert "Error: Missing option '--runtime-config'." in result.output, result.output
+
+    # runtime configuration does not exist
+    config_name = 'no-such-config'
+    result = runner.invoke(pipeline,
+                           ['export',
+                            str(p),
+                            '--runtime-config',
+                            config_name])
+    assert result.exit_code != 0, result.output
+    assert f'Error: Invalid runtime configuration: {config_name}' in result.output
+    assert f"No such instance named '{config_name}' was found in the runtimes schemaspace." in result.output
+
+
+def test_export_incompatible_runtime_config(kubeflow_pipelines_runtime_instance,
+                                            airflow_runtime_instance):
+    """
+    Test user error scenarios: the specified runtime configuration is not compatible
+    with the pipeline type, e.g. KFP pipeline with Airflow runtime config
+    """
+    runner = CliRunner()
+
+    # try exporting a KFP pipeline using an Airflow runtime configuration
+    pipeline_file = "kubeflow_pipelines.pipeline"
+    p = Path(__file__).parent / 'resources' / 'pipelines' / f'{pipeline_file}'
+    assert p.is_file()
+
+    # try export using Airflow runtime configuration
+    result = runner.invoke(pipeline,
+                           ['export',
+                            str(p),
+                            '--runtime-config',
+                            airflow_runtime_instance])
+
+    assert result.exit_code != 0, result.output
+    assert "The runtime configuration type 'APACHE_AIRFLOW' does not "\
+           "match the pipeline's runtime type 'KUBEFLOW_PIPELINES'." in result.output
+
+    # try exporting an Airflow pipeline using a Kubeflow Pipelines runtime configuration
+    pipeline_file = "airflow.pipeline"
+    p = Path(__file__).parent / 'resources' / 'pipelines' / f'{pipeline_file}'
+    assert p.is_file()
+
+    # try export using KFP runtime configuration
+    result = runner.invoke(pipeline,
+                           ['export',
+                            str(p),
+                            '--runtime-config',
+                            kubeflow_pipelines_runtime_instance])
+
+    assert result.exit_code != 0, result.output
+    assert "The runtime configuration type 'KUBEFLOW_PIPELINES' does not "\
+           "match the pipeline's runtime type 'APACHE_AIRFLOW'." in result.output
+
+
+@pytest.mark.parametrize('component_cache_instance', [KFP_COMPONENT_CACHE_INSTANCE], indirect=True)
+def test_export_kubeflow_output_option(kubeflow_pipelines_runtime_instance,
+                                       component_cache_instance):
+    """Verify that the '--output' option works as expected for Kubeflow Pipelines"""
+    runner = CliRunner()
+    with runner.isolated_filesystem():
+        cwd = Path.cwd().resolve()
+        # copy pipeline file and dependencies
+        prepare_export_work_dir(str(cwd),
+                                Path(__file__).parent / 'resources' / 'pipelines')
+        pipeline_file = 'kfp_3_node_custom.pipeline'
+        pipeline_file_path = cwd / pipeline_file
+        # make sure the pipeline file exists
+        assert pipeline_file_path.is_file()
+        print(f'Pipeline file: {pipeline_file_path}')
+
+        # Test: '--output' not specified; exported file is created
+        # in current directory and named like the pipeline file with
+        # a '.yaml' suffix
+        expected_output_file = pipeline_file_path.with_suffix('.yaml')
+
+        # this should succeed
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance])
+
+        assert result.exit_code == 0, result.output
+        assert f"was exported to '{str(expected_output_file)}" in result.output, result.output
+
+        # Test: '--output' specified and ends with '.yaml'
+        expected_output_file = cwd / 'test-dir' / 'output.yaml'
+
+        # this should succeed
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance,
+                                          '--output',
+                                          str(expected_output_file)])
+
+        assert result.exit_code == 0, result.output
+        assert f"was exported to '{expected_output_file}" in result.output, result.output
+
+        # Test: '--output' specified and ends with '.yml'
+        expected_output_file = cwd / 'test-dir-2' / 'output.yml'
+
+        # this should succeed
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance,
+                                          '--output',
+                                          str(expected_output_file)])
+
+        assert result.exit_code == 0, result.output
+        assert f"was exported to '{expected_output_file}" in result.output, result.output
+
+
+def test_export_airflow_output_option(airflow_runtime_instance):
+    """Verify that the '--output' option works as expected for Airflow"""
+    runner = CliRunner()
+    with runner.isolated_filesystem():
+        cwd = Path.cwd().resolve()
+        # copy pipeline file and dependencies
+        prepare_export_work_dir(str(cwd),
+                                Path(__file__).parent / 'resources' / 'pipelines')
+        pipeline_file = 'airflow.pipeline'
+        pipeline_file_path = cwd / pipeline_file
+        # make sure the pipeline file exists
+        assert pipeline_file_path.is_file()
+        print(f'Pipeline file: {pipeline_file_path}')
+
+        #
+        # Test: '--output' not specified; the exported file is created
+        # in the current directory and named after the pipeline file,
+        # with a '.py' suffix
+        #
+        expected_output_file = pipeline_file_path.with_suffix('.py')
+        print(f'expected_output_file -> {expected_output_file}')
+        do_mock_export(str(expected_output_file))
+
+        # this should fail: default output file already exists
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          airflow_runtime_instance])
+
+        assert result.exit_code != 0, result.output
+        assert f"Error: Output file '{expected_output_file}' exists and option '--overwrite' "\
+               "was not specified." in result.output, result.output
+
+        #
+        # Test: '--output' specified and ends with '.py' (the value is treated
+        #       as a file name)
+        #
+        expected_output_file = cwd / 'test-dir-2' / 'output.py'
+        do_mock_export(str(expected_output_file))
+
+        # this should fail: specified output file already exists
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          airflow_runtime_instance,
+                                          '--output',
+                                          str(expected_output_file)])
+        assert result.exit_code != 0, result.output
+        assert f"Error: Output file '{expected_output_file}' exists and option '--overwrite' "\
+               "was not specified." in result.output, result.output
+
+        #
+        # Test: '--output' specified and does not end with '.py' (the value
+        #       is treated as a directory)
+        #
+        output_dir = cwd / 'test-dir-3'
+        expected_output_file = output_dir / Path(pipeline_file).with_suffix('.py')
+        do_mock_export(str(expected_output_file))
+
+        # this should fail: specified output file already exists
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          airflow_runtime_instance,
+                                          '--output',
+                                          str(output_dir)])
+        assert result.exit_code != 0, result.output
+        assert f"Error: Output file '{expected_output_file}' exists and option '--overwrite' "\
+               "was not specified." in result.output, result.output
+
+
+@pytest.mark.parametrize('component_cache_instance', [KFP_COMPONENT_CACHE_INSTANCE], indirect=True)
+def test_export_kubeflow_overwrite_option(kubeflow_pipelines_runtime_instance,
+                                          component_cache_instance):
+    """Verify that the '--overwrite' option works as expected for Kubeflow Pipelines"""
+    runner = CliRunner()
+    with runner.isolated_filesystem():
+        cwd = Path.cwd().resolve()
+        # copy pipeline file and dependencies
+        prepare_export_work_dir(str(cwd),
+                                Path(__file__).parent / 'resources' / 'pipelines')
+        pipeline_file = 'kfp_3_node_custom.pipeline'
+        pipeline_file_path = cwd / pipeline_file
+        # make sure the pipeline file exists
+        assert pipeline_file_path.is_file()
+        print(f'Pipeline file: {pipeline_file_path}')
+
+        # Test: '--overwrite' not specified; the exported file is created
+        # in the current directory and named after the pipeline file,
+        # with a '.yaml' suffix
+        expected_output_file = pipeline_file_path.with_suffix('.yaml')
+
+        # this should succeed
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance])
+
+        assert result.exit_code == 0, result.output
+        assert f"was exported to '{expected_output_file}" in result.output, result.output
+
+        # Test: '--overwrite' not specified; the output already exists
+        # this should fail
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance])
+
+        assert result.exit_code != 0, result.output
+        assert f"Output file '{expected_output_file}' exists and option '--overwrite' was not" in result.output
+
+        # Test: '--overwrite' specified; the exported file is re-created
+        # in the current directory and named after the pipeline file,
+        # with a '.yaml' suffix
+        # this should succeed
+        result = runner.invoke(pipeline, ['export',
+                                          str(pipeline_file_path),
+                                          '--runtime-config',
+                                          kubeflow_pipelines_runtime_instance,
+                                          '--overwrite'])
+
+        assert result.exit_code == 0, result.output
+        assert f"was exported to '{expected_output_file}" in result.output, result.output
+
+
+# ------------------------------------------------------------------
+# end tests for 'export' command
+# ------------------------------------------------------------------