Create custom pipeline templates

Create custom templates for Edge Processor and Ingest Processor pipelines, and share them through an SPL2-based app on Splunk Cloud Platform.

You can create custom templates for Edge Processor and Ingest Processor pipelines, and make them available to other users through an SPL2-based app. When a user installs your app on the Splunk Cloud Platform deployment that is connected to the Data Management experience, your template is included in the list of available pipeline templates, and users can create pipelines using your template as a starting point.

To create a custom template, start by designing the Edge Processor or Ingest Processor pipeline that you want your template to contain, and save it as an SPL2 module. Then, add the @template annotation to the SPL2 module to convert the pipeline into a template. For detailed instructions, see the following sections on this page:

  1. Identify the data to be processed

  2. Prepare sample events

  3. Design the pipeline

  4. Save the pipeline as an SPL2 module

  5. Convert the pipeline into a template

  6. Package the template into an app

Identify the data to be processed

Identify the specific field-value pair that incoming events must contain in order to be processed by the pipelines that are created from the template. For example, the pipelines created from your template might be designed to process only events that have cisco:asa as the value in the sourcetype field.

Each pipeline processes a specific subset of all the data that the Edge Processor or Ingest Processor receives. By including a condition that incoming events must meet in order to be processed, you can help users understand the kind of data that the template is designed to work with and provide a default configuration for the pipelines that they create using your template.

Note: This configuration is also known as the "partition" of the pipeline. For more information, see Create pipelines for Edge Processors and Partitions.

When defining the @template annotation to turn your pipeline into a template, you can use the sourcetype parameter to specify the required field-value pair.
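For example, the following sourcetype parameter value specifies that pipelines created from the template process only events that have cisco:asa as the value in the sourcetype field. The full annotation syntax is described in Convert the pipeline into a template.

CODE
sourcetype: {
    field: "sourcetype",
    operator: "EQUAL",
    values: ["cisco:asa"]
}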

Prepare sample events

Prepare one or more sample events that accurately represent the incoming data that the template processes.

Providing sample events is a best practice that can greatly streamline the experience of creating and using your template:

  • If you use the pipeline editor in the Data Management experience to create the pipeline for your template, including sample data allows you to preview the pipeline and check how each processing action changes events.

  • When defining the @template annotation to turn your pipeline into a template, specifying sample data in the events parameter allows you to include that data by default in the template. Edge Processor and Ingest Processor users who install your app and use your template will be able to see the kind of events that the template is designed to process, and be able to preview the pipeline in order to see how it changes those events.

Note: These sample events are visible to other users, so make sure that the events do not include any sensitive data, such as credentials.

The sample events can be raw data where each event is a text string, or parsed data where each event has values that are stored in fields. Depending on the format of your sample events and whether you are using them in the pipeline editor or the @template annotation, you must use different syntax to represent the events.

Raw data

The following is an example of 3 raw events:

_raw
Wed Feb 14 2026 23:16:57 mailsv1 sshd[4590]: Failed password for apache from 78.111.167.117 port 3801 ssh2
Wed Feb 14 2026 15:51:38 mailsv1 sshd[1991]: Failed password for grumpy from 76.169.7.252 port 1244 ssh2
Mon Feb 12 2026 09:31:03 mailsv1 sshd[5800]: Failed password for invalid user guest from 66.69.195.226 port 2903 ssh2
If you want to use them in the pipeline editor to generate a preview of your pipeline, you must enter these events in the Sample data area, and specify each event as a distinct line of text. For example:
CODE
Wed Feb 14 2026 23:16:57 mailsv1 sshd[4590]: Failed password for apache from 78.111.167.117 port 3801 ssh2 
Wed Feb 14 2026 15:51:38 mailsv1 sshd[1991]: Failed password for grumpy from 76.169.7.252 port 1244 ssh2 
Mon Feb 12 2026 09:31:03 mailsv1 sshd[5800]: Failed password for invalid user guest from 66.69.195.226 port 2903 ssh2
If you are including these events by default in your template, you must specify them in the events parameter of the @template annotation. You must specify each event as an array element that is enclosed in double quotation marks ( " ), as follows:
CODE
["Wed Feb 14 2026 23:16:57 mailsv1 sshd[4590]: Failed password for apache from 78.111.167.117 port 3801 ssh2", "Wed Feb 14 2026 15:51:38 mailsv1 sshd[1991]: Failed password for grumpy from 76.169.7.252 port 1244 ssh2", "Mon Feb 12 2026 09:31:03 mailsv1 sshd[5800]: Failed password for invalid user guest from 66.69.195.226 port 2903 ssh2"]

Parsed data

The following is an example of 3 parsed events:

_raw                 _time                           severity   category
Hello World          2026-04-24T13:00:05.105+0000    INFO       system
Unexpected failure   2026-04-24T13:25:48.128+0000    ERROR      system
Shutting down        2026-04-24T13:30:57.306+0000    INFO       system
If you want to use them in the pipeline editor to generate a preview of your pipeline, you must enter these events in the Sample data area. You must provide lines of comma-separated values, where the first line contains field names and each subsequent line contains the values for an event. For example:
CODE
_raw,_time,severity,category
Hello World,2026-04-24T13:00:05.105+0000,INFO,system
Unexpected failure,2026-04-24T13:25:48.128+0000,ERROR,system
Shutting down,2026-04-24T13:30:57.306+0000,INFO,system
If you are including these events by default in your template, you must specify them in the events parameter of the @template annotation. You must specify the events as an array of JSON objects, where each object represents one event and each key-value pair in the object represents a field-value pair in the event. For example:
JSON
[
    {
        "_raw": "Hello World",
        "_time": "2026-04-24T13:00:05.105+0000",
        "severity": "INFO",
        "category": "system"
    },
    {
        "_raw": "Unexpected failure",
        "_time": "2026-04-24T13:25:48.128+0000",
        "severity": "ERROR",
        "category": "system"
    },
    {
        "_raw": "Shutting down",
        "_time": "2026-04-24T13:30:57.306+0000",
        "severity": "INFO",
        "category": "system"
    }
]

Design the pipeline

Design the pipeline that you want to convert into a template.

In your code editor of choice, write SPL2 to create the pipeline that you want to turn into a template. Enter SPL2 statements that define the pipeline and any custom items that the pipeline needs to use, such as custom data types, custom commands, or custom functions.

Consider the following best practices when designing the pipeline:

  • Start by creating your pipeline in the pipeline editor. The pipeline editor is designed to validate SPL2 for Edge Processor and Ingest Processor pipelines instead of searches. Additionally, it provides features that streamline pipeline development, such as point-and-click workflows that allow you to configure data processing actions without writing SPL2 manually, and pipeline previews that allow you to check how each data processing action changes the incoming data.
  • Include inline comments that explain the purpose, usage, and configuration of each SPL2 item or statement.
  • Structure the SPL2 content to maximize reusability by breaking down complex pipelines or custom functions into smaller, modular components.
  • Do not include sensitive data or hard-coded credentials in templates.

For more information about designing and creating pipelines, see the following:

  • Edge Processor pipeline statements and Ingest Processor pipeline statements in the Use Ingest Processors manual
  • Custom data types in the SPL2 Search Manual
  • Custom command functions in the SPL2 Search Manual
  • Custom eval functions in the SPL2 Search Manual

Save the pipeline as an SPL2 module

Save the pipeline as a .spl2 file in the /default/data/spl2 folder of your app.

If you used the pipeline editor to create the pipeline, you'll need to copy the contents of the SPL2 editing pane into a new file in your app development environment before you can save it as part of your app. For example, if you are using Visual Studio Code to develop your app, then save the pipeline as a .spl2 file in Visual Studio Code.
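For reference, the following is a minimal sketch of what a saved module might contain. The file name, the filter condition, and the use of the match function are illustrative examples, not requirements:

CODE
// Hypothetical contents of default/data/spl2/sshd_failed_logins.spl2.
// Keeps only events whose raw text contains "Failed password" and
// sends them to the configured destination.
$pipeline = | from $source
    | where match(_raw, "Failed password")
    | into $destination;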

Convert the pipeline into a template

Adding a @template annotation to the module converts the pipeline into a template.

In the SPL2 module, enter the following @template syntax, replacing the placeholder values with supported parameter settings. The parameters are explained in the table that follows.
CODE
@template(
    name: "<template_name>",
    description: "<template_description>",
    runtime: [<supported_spl2_profile>],
    sourcetype: {
        field: "<event_field_name>",
        operator: "<logical_operator>",
        values: ["<event_field_value>"]
    },
    events: ["<sample_event_1>", "<sample_event_2>", ...]
);
Note: As a best practice, specify all of the following parameters, even those that are optional. These parameters provide information that helps the user understand the purpose and usage of the template, and ensures that the template can be surfaced clearly in the Data Management experience UI.
The @template annotation supports the following parameters:

name (Required)

The name of the template.

This name displays on the Pipelines page in the Data Management experience.

Note: As a best practice for helping users find the right template for their use case, adopt a consistent and descriptive naming convention for your templates.
description (Optional)

A description for the template.

This description displays in the following locations in the Data Management experience:

  • The side panel that opens when a user selects the template on the Pipelines page.
  • The list of available templates on the Get started page of the pipeline creation workflow.

Defaults to empty.

runtime (Optional)

The SPL2 runtime that the template supports.

Specify one or more of the following runtimes as an array of strings:

  • edgeProcessor: The template and any pipelines created from it support the SPL2 commands and functions that are part of the edgeProcessor profile, and the pipelines can be applied to Edge Processors.
  • ingestProcessor: The template and any pipelines created from it support the SPL2 commands and functions that are part of the ingestProcessor profile, and the pipelines can be applied to the Ingest Processor.

Note: For more information about profiles, see the SPL2 compatibility profiles and quick references chapter in the SPL2 Search Reference.

Defaults to edgeProcessor.

sourcetype (Optional)

The field-value pair that the incoming events must contain in order to be processed by the pipelines created from the template.

This field-value pair displays in the Partition area of the pipeline editor, and determines how the pipeline selects and parses incoming data before processing it.

Note: For more information, see Create pipelines for Edge Processors and Partitions.

Define the required field-value pair using a JSON object with the following keys. All of these keys are required if you include the sourcetype parameter in the @template annotation.

  • field: The name of the event field.

  • operator: One of the following values:

    • EQUAL: The field must contain an exact value.

    • MATCH: The field must contain a value that matches a specified regular expression.

  • values: An array containing one of the following:

    • When operator is set to EQUAL: The value that the event field must contain.

    • When operator is set to MATCH: The regular expression matching the value that the event field must contain.

For example, the following sourcetype definition specifies that the pipeline only processes events that have cisco:asa as the value in the sourcetype field:

CODE
sourcetype: { 
    field: "sourcetype", 
    operator: "EQUAL", 
    values: ["cisco:asa"] 
}

Defaults to empty. In this case, users must configure the partition when creating a pipeline from this template.
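If an exact value is too restrictive, you can use the MATCH operator with a regular expression instead. For example, the following hypothetical definition selects events whose sourcetype value starts with cisco: (the regular expression shown is illustrative):

CODE
sourcetype: {
    field: "sourcetype",
    operator: "MATCH",
    values: ["^cisco:.*"]
}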

events (Optional)

One or more sample events representing the incoming data that the pipelines created from this template are designed to process.

These events display in the Sample data area of the pipeline editor. They indicate to users the kind of events that the template is designed to process, and allow users to generate pipeline previews that show how the events are changed as they pass through the pipeline.

Specify an array of strings or an array of JSON objects, where each array element represents one sample event.

  • If you specify an array of strings, each string is treated as a raw data event.
  • If you specify an array of JSON objects, each object is treated as a parsed event, and each key-value pair in the object is treated as a field-value pair in the event.

For more information and examples of the supported syntax, see Prepare sample events.

Defaults to empty.
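As an illustration of how these parameters fit together, the following is a hypothetical completed annotation for a template that processes sshd authentication events on Edge Processors. The name, description, source type, and sample event are placeholders for your own values:

CODE
@template(
    name: "sshd failed logins",
    description: "Processes sshd authentication failures from Linux hosts.",
    runtime: ["edgeProcessor"],
    sourcetype: {
        field: "sourcetype",
        operator: "EQUAL",
        values: ["linux:sshd"]
    },
    events: ["Wed Feb 14 2026 23:16:57 mailsv1 sshd[4590]: Failed password for apache from 78.111.167.117 port 3801 ssh2"]
);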

Package the template into an app

Save your changes to the SPL2 module, and then package, test, and release your app for Splunk Cloud Platform. For more information about these tasks, see the Developer Guide for Splunk Cloud Platform and Splunk Enterprise.

Consider the following best practices when packaging and testing your app:

  • Make sure that all supporting functions, data types, and helper modules are included in the app so that the template is self-contained and does not require external dependencies for functionality.
  • Test the template against all intended runtimes, such as Edge Processor or Ingest Processor.
  • Follow the security guidelines described in Security best practices for apps in Splunk Cloud Platform and Splunk Enterprise in the Developer Guide for Splunk Cloud Platform and Splunk Enterprise.

Next steps

You now have an SPL2-based app that includes a custom pipeline template. When the app is installed on a Splunk Cloud Platform deployment, the template becomes available for use in the Data Management experience tenant that's connected to the deployment. For information about installing the app, see Install SPL2-based apps in the Splunk Cloud Platform Admin Manual.

To navigate from Splunk Cloud Platform to the list of available templates in the Data Management experience, select Settings then Data Management experience. Then, navigate to the Pipelines page and select the Templates tab. Your template appears in the list.

For information about creating pipelines using the template, see Use templates to create pipelines for Edge Processors.