n8n on SAP BTP Kyma — Part 1: From Local Docker to…


Technology Blogs | SAP Community | n8n meets SAP — Part 1 of 3

A Step-by-Step Guide to Hosting n8n in a Cloud-Native SAP Environment

Technology Blogs · n8n  ·  SAP BTP  ·  Kyma  ·  Kubernetes

 

1. The Big Picture

If you have ever run n8n on a local machine or a simple VM, you already know how powerful it is. A few clicks, a running container, and you have a workflow automation engine at your fingertips. But real enterprise use, with shared access, uptime guarantees, integration with SAP systems, and data that cannot simply disappear when a server restarts, demands a more robust foundation.

This is where SAP BTP, Kyma runtime comes in. Kyma is SAP's fully managed Kubernetes environment, running directly on the Business Technology Platform. Instead of you managing cluster infrastructure, SAP manages the control plane. You deploy workloads; SAP keeps the platform running. Your n8n instance gets all the benefits of Kubernetes — automatic restarts on failure, persistent storage, a stable external URL, and native proximity to other BTP services like SAP AI Core or SAP HANA Cloud — without the overhead of operating a cluster yourself.

This post comes with all the files you need. At the bottom of this post you will find the complete contents of the deployment files: docker-compose.yml for local development, and n8n-pvc.yaml together with n8n-kyma.yaml for the Kyma deployment. You can create each file locally by simply copying the code provided. By the end of this post, you will have a live, publicly accessible n8n instance running inside your SAP BTP subaccount.

 

2. Provisioning the Infrastructure

Enabling Kyma Runtime in SAP BTP Cockpit

Before any deployment can happen, the Kyma runtime needs to be enabled in your SAP BTP subaccount. If you have not done this yet, the process is straightforward.

Log in to the SAP BTP Cockpit and navigate to your subaccount. In the left-hand navigation, open “Kyma Environment”. If Kyma has not been provisioned yet, you will see an “Enable Kyma” button. Click it, choose a plan (for most scenarios the default plan is sufficient), and confirm. Provisioning takes several minutes — SAP is standing up a fully managed Kubernetes cluster on your behalf.

Once complete, the status in the Cockpit changes to “Created” and a “Link to dashboard” appears. That dashboard is your Kyma control plane — a Kubernetes-native UI where you can inspect namespaces, deployments, services, and more. Keep it open alongside your terminal throughout this guide.

 

 

SAP BTP Cockpit subaccount view showing the Kyma Environment tile with status “Created” and the “Link to dashboard” button visible.

The Kyma Dashboard landing page after clicking through — showing the cluster overview and the left-hand namespace selector.

 

 

A Note on Resource Sizing

n8n is a Node.js application. It is not particularly memory-hungry at rest, but it spins up processes to execute workflows, and those processes can be CPU-intensive depending on what your workflows do. For a shared team instance handling moderate workflow loads, a cluster with at least 2 vCPUs and 4 GB of memory available for your workloads is a comfortable starting point.

In Kyma's managed environment, the underlying node pool is managed by SAP. If you are on a trial account, be aware that trial clusters have resource limits. For any production or shared team usage, an enterprise subaccount with a properly sized instance type is the right choice. You can check available capacity in the Kyma Dashboard under the cluster's node details.
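
If you want to encode that sizing directly in the Deployment, Kubernetes resource requests and limits are the mechanism. A sketch of what could be added to the n8n container spec (the values are illustrative starting points, not part of the manifest shipped with this post):

        resources:
          requests:
            cpu: "250m"      # a quarter vCPU reserved for scheduling
            memory: "512Mi"
          limits:
            cpu: "2"         # allow bursts during workflow execution
            memory: "2Gi"    # hard cap; the container is OOM-killed above this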


3. Establishing the Connection

Your cluster is running in the SAP cloud. Your deployment files are on your local machine. To bridge the two, you need two tools installed locally: kubectl (the Kubernetes command-line client) and a Docker engine (for building custom images — relevant in Part 2 of this series).

The Toolbelt

kubectl is the standard CLI for interacting with any Kubernetes cluster. Install it following the official guide for your operating system. On macOS, brew install kubectl is the fastest path. On Windows, the installer is available directly from the Kubernetes project.

Docker Desktop covers both the Docker engine and an optional local Kubernetes cluster for testing. Download it from docker.com. For this post, Docker is only needed for the local docker-compose option and for Part 2 — the Kyma deployment itself pulls images directly from Docker Hub.
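
A quick sanity check that both tools are installed and on your PATH (version numbers will differ on your machine):

kubectl version --client
docker --version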

The Key to the Kingdom: Downloading your Kubeconfig

kubectl does not know which cluster to talk to until you give it a configuration file — the kubeconfig. This file contains the cluster's API endpoint URL, authentication credentials, and context information. Kyma makes it easy to download.

In the BTP Cockpit, select Kyma Environment and click the “KubeconfigURL” link to download the file. Save it somewhere accessible, for example ~/.kube/kyma-config.yaml. Then tell kubectl to use it by setting the KUBECONFIG environment variable in your terminal:

export KUBECONFIG=~/.kube/kyma-config.yaml

For this to persist across terminal sessions, add the line to your shell profile (.zshrc, .bashrc, or equivalent).
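
For example, for zsh (adjust the file name for your shell):

echo 'export KUBECONFIG=~/.kube/kyma-config.yaml' >> ~/.zshrc
source ~/.zshrc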

Verification

With the kubeconfig set, run:

kubectl get nodes

If everything is configured correctly, you will see a list of the worker nodes in your Kyma cluster — their names, status (Ready), and age. This confirms that your local machine is communicating with the SAP cloud cluster. If you see an authentication error instead, double-check that the KUBECONFIG variable is set in the correct terminal session.
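
Two quick checks that help here: confirm the variable is set in the current session, and see which cluster kubectl is actually targeting:

echo $KUBECONFIG
kubectl config current-context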


4. Deploying the Application

With the cluster accessible, it is time to deploy. The process has a deliberate order: storage first, secrets second, application third. Never the other way around — a Pod that starts without its volume or credentials will crash and require manual intervention.

Namespace Creation

Kubernetes clusters are shared environments. Namespaces are the mechanism for creating logical isolation within a cluster — separate resource quotas, separate access controls, separate network policies. Running n8n in its own n8n namespace keeps it cleanly separated from any other workloads you deploy later.

kubectl create namespace n8n

Always use a dedicated namespace. Deploying into default is a habit that causes confusion fast once you have more than one workload in a cluster.

Persistence First

n8n stores everything on disk: your workflow definitions, saved credentials, execution history, and configuration. Without a persistent volume, all of that disappears every time the Pod restarts — and in Kubernetes, Pods restart. For updates, node maintenance, or any crash, the container is replaced with a fresh one. Without persistence, you start from zero every time.

The n8n-pvc.yaml file in the repository defines a PersistentVolumeClaim — a request to the cluster for a dedicated 5 GB storage volume. Apply it first:

kubectl apply -f n8n-pvc.yaml

Kyma provisions the underlying storage automatically. You can verify it was created and is in Bound status with:

kubectl get pvc -n n8n

The Secret Sauce: The Encryption Key

n8n encrypts all credentials it stores — API keys, passwords, OAuth tokens — using an encryption key that you provide via the N8N_ENCRYPTION_KEY environment variable. This key is critical. If you lose it, n8n can no longer decrypt any of the credentials you have saved, and you will need to re-enter them all from scratch.

Generate a secure key using:

openssl rand -hex 32

Copy the output. Now create a Kubernetes Secret to hold it safely — secrets in Kubernetes are stored separately from your application manifests and are not exposed in plain text in your deployment files:

kubectl create secret generic n8n-secret \
  --from-literal=N8N_ENCRYPTION_KEY='<paste-the-generated-key-here>' \
  -n n8n

Never put a real key directly into a YAML file that you commit to a repository. The .env.example file in the repo shows the expected format for local development — use that for Docker Compose, and use Kubernetes Secrets for Kyma.
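
To confirm the key actually landed in the Secret, you can decode it back out (this prints the key in plain text, so only do this in a trusted terminal):

kubectl get secret n8n-secret -n n8n \
  -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 -d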

Deploying n8n

With storage claimed and the secret created, apply the main deployment manifest:

kubectl apply -f n8n-kyma.yaml -n n8n

The n8n-kyma.yaml file defines a Kubernetes Deployment — a declaration of the desired state for the n8n application. It tells Kubernetes to run one Pod using the official n8nio/n8n:latest image, mount the persistent volume at /home/node/.n8n (where n8n writes all its data), and inject a set of environment variables including the host domain, protocol, port, and CORS origins.

Before applying, open n8n-kyma.yaml and replace the two placeholder values:

  • Encryption key (the empty N8N_ENCRYPTION_KEY value): the key you generated above — or better, reference the Kubernetes Secret (see the snippet after the manifest below).
  • Host domain (the empty N8N_HOST value, plus the hosts inside WEBHOOK_URL and N8N_CORS_ALLOWED_ORIGINS): your Kyma app domain, e.g. n8n.abc1234.kyma.ondemand.com.

Once applied, Kubernetes pulls the n8n image from Docker Hub and starts the container. Track the startup with:

kubectl get pods -n n8n --watch

The Pod status will move from Pending (image being pulled) to ContainerCreating to Running. Once it shows Running and 1/1 in the Ready column, n8n is up.
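
If the Pod lands in CrashLoopBackOff instead, the container logs usually say why (a missing volume or a malformed environment value are common culprits):

kubectl logs deployment/n8n -n n8n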


5. Creating the Service — The Internal Bridge

At this point n8n is running inside a Pod. But Pods in Kubernetes are ephemeral — every time the Pod is restarted, recreated, or updated, it gets a new internal IP address. Any other component trying to reach n8n by IP would break on the next restart.

A Kubernetes Service solves this. It is a stable, named internal endpoint that always points to the correct Pod, regardless of how many times that Pod has been replaced. The Service does not change; the Pod behind it can.

Create a file called n8n-service.yaml and apply it to the cluster. The most important part of the Service definition is the selector:

selector:
  app: n8n

This tells the Service: “route traffic to any Pod that carries the label app: n8n.” If you look at n8n-kyma.yaml, the Pod template carries exactly that label — that is how the Service and the Deployment find each other. External traffic arriving at the Service on port 80 is forwarded to the n8n container on port 5678.
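
The full contents of n8n-service.yaml are not reproduced in this post, so here is a minimal sketch consistent with the selector and port mapping just described (treat it as a starting point rather than the author's exact file):

apiVersion: v1
kind: Service
metadata:
  name: n8n-service
  namespace: n8n
spec:
  selector:
    app: n8n              # matches the label on the Deployment's Pod template
  ports:
    - port: 80            # stable port the Service exposes inside the cluster
      targetPort: 5678    # the port the n8n container actually listens on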

kubectl apply -f n8n-service.yaml -n n8n

 

The Kyma Dashboard Services view in the n8n namespace, showing n8n-service listed with its cluster IP.

 

 


6. The API Rule — The External Gateway

The Service we just created is internal. It is reachable within the cluster, but not from your browser. The final piece is exposing n8n to the outside world through Kyma's API Gateway.

Kyma uses a custom resource called an APIRule to manage external access to services. An APIRule is more than just an ingress route — it integrates with Kyma's Istio service mesh and the Oathkeeper access control layer. This means you can add authentication policies directly at the gateway level, before a request ever reaches your application.

Creating the API Rule via the Kyma Dashboard

The simplest way to create an APIRule for a first deployment is directly through the Kyma Dashboard UI:

  1. Open the Kyma Dashboard and switch to the n8n namespace using the namespace selector at the top.
  2. In the left-hand navigation, find “API Rules” under the Networking section.
  3. Click “Create API Rule”.
  4. Fill in the form:
    • Name: n8n
    • Service: select n8n-service, port 80
    • Host: choose a subdomain — n8n is a natural choice. Kyma will append your cluster's base domain automatically, resulting in a URL like https://n8n.abc1234.kyma.ondemand.com.
    • Access Strategy: set to NoAuth for the initial setup (a declarative equivalent is sketched right after this list).
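
The dashboard form is a front end for an APIRule resource, so the same result can be achieved declaratively with kubectl. A sketch of the equivalent manifest, assuming the gateway.kyma-project.io/v1beta1 API (APIRule field names have changed across Kyma releases, so verify against the version installed in your cluster):

apiVersion: gateway.kyma-project.io/v1beta1
kind: APIRule
metadata:
  name: n8n
  namespace: n8n
spec:
  gateway: kyma-system/kyma-gateway   # Kyma's default Istio gateway
  host: n8n                           # the cluster's base domain is appended
  service:
    name: n8n-service
    port: 80
  rules:
    - path: /.*
      methods: ["GET", "POST", "PUT", "DELETE", "PATCH"]
      accessStrategies:
        - handler: no_auth            # pass-through; n8n handles login itself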

A Word on the Access Strategy

Setting the access strategy to NoAuth means Kyma's gateway passes all requests through to n8n without performing any additional authentication check at the gateway level. This does not mean the instance is unsecured — n8n has its own built-in user management and requires account login before anyone can see or edit workflows.

For a first deployment and internal team use, this is a perfectly reasonable starting point. For production environments or public-facing instances, you can tighten this by configuring the APIRule to use JWT validation against SAP Identity Authentication Service (IAS), restricting access to authenticated SAP BTP users before requests even reach the n8n container.

Verify External Access

Once the APIRule is created, copy the host URL from the dashboard and open it in your browser. You should be greeted by the n8n login screen. If the browser shows a connection error, give it 30–60 seconds — APIRules take a moment to propagate through the Istio gateway. A green checkmark on the APIRule in the Kyma Dashboard confirms it was processed successfully.

 

Browser showing the n8n login screen at the live Kyma domain URL — the confirmation that the full stack is working end-to-end.

 

 

 


 

docker-compose.yml : 
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_HOST=localhost
      - N8N_PROTOCOL=http
      - NODE_ENV=production
    volumes:
      - ./n8n_data:/home/node/.n8n
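
To run the local option, copy .env.example to .env, set the key, and bring the stack up; n8n is then reachable at http://localhost:5678:

cp .env.example .env        # then edit .env and set N8N_ENCRYPTION_KEY
docker compose up -d
docker compose logs -f n8n  # watch startup; Ctrl-C to stop following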

 

 

.env.example :
# n8n Encryption Key — generate with: openssl rand -hex 32
N8N_ENCRYPTION_KEY=your-random-32-byte-hex-key-here

 

n8n-kyma.yaml : 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: n8n
        image: n8nio/n8n:latest
        env:
          - name: N8N_ENCRYPTION_KEY
            value: ""             # placeholder: your generated key, or reference the Secret (see below)

          - name: N8N_HOST
            value: ""             # placeholder: your Kyma host, e.g. n8n.abc1234.kyma.ondemand.com
          - name: WEBHOOK_URL
            value: "https:///"    # placeholder: https://<your-kyma-host>/
          - name: N8N_PROTOCOL
            value: "https"
          - name: N8N_PORT
            value: "5678"

          - name: N8N_CORS_ALLOWED_ORIGINS
            value: "https:///"    # placeholder: https://<your-kyma-host>/

          - name: N8N_USER_FOLDER
            value: "/home/node/.n8n"
        volumeMounts:
          - name: n8n-data
            mountPath: /home/node/.n8n
      volumes:
      - name: n8n-data
        persistentVolumeClaim:
          claimName: n8n-pvc
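
Instead of pasting the key into the manifest, the cleaner approach mentioned above is to pull it from the n8n-secret created earlier. A sketch of what the N8N_ENCRYPTION_KEY entry becomes with a Secret reference:

          - name: N8N_ENCRYPTION_KEY
            valueFrom:
              secretKeyRef:
                name: n8n-secret          # the Secret created with kubectl earlier
                key: N8N_ENCRYPTION_KEY   # the key name inside that Secret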

 

n8n-pvc.yaml : 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-pvc
  namespace: n8n
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

 

 

AI Usage Disclosure: Gen AI was used exclusively for linguistic refinements such as grammar, spelling, and phrasing as well as for the structural organisation of this text. All conceptual content, technical knowledge, architecture decisions, and implementation steps were developed independently by the author.

 

 

Up Next — Part 2 of 3

Create a Custom Language Model Node for SAP AI Core

You now have a live, persistent n8n instance running on SAP BTP. In Part 2, we take it further: building a custom n8n node that connects directly to SAP AI Core's Generative AI Hub — turning your automation platform into an AI-capable engine that can call SAP-hosted language models like GPT-4o or Llama 3 from within any workflow. The full source code is already available; we will walk through exactly how it works and how to bundle it into your Kyma deployment. 

 

 

 
