
Part 2 – Create a Custom SAP AI Core Chat Model No…

  • By sujay
  • 14/05/2026

 

SAP Community — Technology Blogs · n8n meets SAP — 3-Part Series

Part 2 of 3

How to build a custom node that connects n8n's AI Agent to SAP's Generative AI Hub, package it into a Docker image, and deploy it to your Kyma instance.

In Part 1 we deployed a clean, vanilla n8n instance on SAP BTP, Kyma runtime. That instance uses the official n8nio/n8n:latest Docker image with no modifications. It works perfectly for most integration scenarios — but it has no built-in understanding of SAP AI Core or the Generative AI Hub.

n8n solves this through a custom node system. Any developer can write a node, package it as a Node.js module, and bake it into a custom Docker image on top of the official n8n base. That image replaces the default one in your deployment, and your new node appears in the editor alongside every built-in node. This post walks through exactly that process: from understanding how the node works, to building and testing it locally, pushing it to a private registry, and finally swapping the image in your Kyma deployment. At the end, we build a first real AI workflow to put it all together.

All files used in this post are available at the bottom of this page. You can create each file locally by copying the code provided directly from there. The custom node in this post was built on top of the official n8n-nodes-starter template provided by n8n, which gives you the correct project structure.


1 How the Custom Node Works

Before touching a terminal, it helps to understand what kind of node this actually is, because it is not a standard action node. It is a Language Model node.

Nodes in n8n's AI System

n8n has a built-in AI framework based on LangChain. Within that framework, nodes play different roles. An AI Agent node orchestrates a conversation loop: it receives a user message, decides whether to call a tool or respond directly, and produces a final answer. But the Agent node itself does not contain a language model. It expects one to be plugged into it from the outside.

That is exactly what our custom node does. It acts as a model provider: it outputs a ready-to-use language model connection that the Agent node can pick up and use. In n8n's visual canvas, you connect our SAP AI Core node to the “Chat Model” input of an Agent node, and from that point forward the agent uses SAP AI Core as its brain.

 

 

n8n canvas showing the SAP AI Core Chat Model node connected to the “Chat Model” input of an AI Agent node

 

 

 

The Authentication Flow

SAP AI Core is protected by XSUAA — SAP's OAuth2 authorization server. Every request to the Generative AI Hub must carry a valid Bearer token. The node handles this automatically: when a workflow runs, the node fetches a fresh token from the XSUAA token endpoint using the Client ID and Client Secret from the saved credentials, then passes that token to every subsequent API call.

 

This token fetch uses the client credentials grant type, the standard machine-to-machine OAuth2 flow. There is no user redirect or browser interaction involved. The credentials are stored securely in n8n's encrypted credential store (encrypted with the N8N_ENCRYPTION_KEY from Part 1).
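The mechanics of that fetch are simple. As a minimal sketch (with placeholder credentials; in the node, the real values come from the encrypted credential store), the token request is a Basic-auth POST with a form-encoded body:

```typescript
// Sketch: assemble the XSUAA client-credentials token request.
// The clientId/clientSecret arguments are placeholders; the node reads
// them from n8n's encrypted credential store at runtime.
function buildTokenRequest(clientId: string, clientSecret: string) {
    const body = 'grant_type=client_credentials';
    const basicAuth = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
    return {
        method: 'POST' as const,
        headers: {
            Authorization: `Basic ${basicAuth}`,
            'Content-Type': 'application/x-www-form-urlencoded',
            'Content-Length': Buffer.byteLength(body),
        },
        body,
    };
}
```

The response is a JSON payload whose access_token field becomes the Bearer token for every subsequent AI Core call (see fetchXsuaaToken in the full source at the end of this post).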

 

The LangChain Bridge

SAP AI Core's Generative AI Hub exposes an API that is compatible with the OpenAI chat completions format. The custom node takes advantage of this by using LangChain's ChatOpenAI client and pointing it at the AI Core endpoint instead of OpenAI's servers. This gives n8n's entire AI framework immediate compatibility with any model deployed in your AI Core resource group, with no custom request logic required.

The node constructs the endpoint URL dynamically from two pieces of information you provide: the Base URL from your AI Core service key, and the Deployment ID of the specific model you want to call. It also attaches the ai-resource-group header to every request so AI Core knows which resource group to bill and route the request to.
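As a quick sketch of that URL construction (mirroring the logic in the node source at the end of this post), the Base URL is normalised first, then the versioned deployment path is appended:

```typescript
// Build the Generative AI Hub inference endpoint for a given deployment.
// A trailing slash or trailing /v2 on the Base URL is stripped first,
// so the /v2 segment is never duplicated.
function buildInferenceUrl(baseUrlRaw: string, deploymentId: string): string {
    const baseUrl = baseUrlRaw.replace(/\/$/, '').replace(/\/v2$/, '');
    return `${baseUrl}/v2/inference/deployments/${deploymentId}`;
}

console.log(buildInferenceUrl(
    'https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com',
    'd1a2b3c4e5f6',
));
// → https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/d1a2b3c4e5f6
```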

 

Node Parameters

When you add the node to a canvas, you configure it with four fields:

 

Parameter        Where to find it                             Example value
Base URL         AI Core service key — field AI_API_URL       https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com
Deployment ID    SAP AI Launchpad — your active deployment    d1a2b3c4e5f6
Model Name       The model behind your deployment             gpt-4o
Resource Group   AI Core resource group name                  default

The credentials (Access Token URL, Client ID, Client Secret) are stored separately in n8n's credential manager and reused across multiple nodes. You create them once and reference them from any number of SAP AI Core nodes in any workflow.

 

 

The SAP AI Core Chat Model node configuration panel in n8n, showing all four parameters filled in and the credential selector pointing to a saved “SAP AI Core OAuth2” credential.

 

 

 

2 Building the Image Locally

The repository contains a multi-stage Dockerfile that handles everything: compiling the TypeScript source, assembling the distributable files, and embedding them into a fresh n8n image. You do not need to understand every line of it — but it is worth knowing what the two stages do.

 

Stage 1 — The Builder

The first stage uses a standard Node.js 20 image as a build environment. It installs all dependencies (including dev tools like the TypeScript compiler), copies the source files, normalises line endings to avoid cross-platform issues between Windows and Linux, and runs the TypeScript build. The output is a compiled dist/ folder containing plain JavaScript — ready to run in any Node.js environment.

Stage 2 — The Final Image

The second stage starts fresh from n8nio/n8n:latest — the same base image used in Part 1. It copies only the compiled output from Stage 1 into a specific system path: /usr/local/lib/node_modules/n8n-nodes-sap-ai-core/.

Why that path specifically? The PersistentVolumeClaim from Part 1 is mounted at /home/node/.n8n. If the node were installed there, the volume mount would hide it at runtime — the node would silently disappear. Installing it to a system path outside the mount point ensures it is always present, regardless of what the PVC contains.

Running the Build

Clone the repository, then build the image with a name and tag of your choice:

 

docker build -t n8n-sap-ai-core:latest .

 

Docker will work through both stages. The first time you run this it will take a couple of minutes as it downloads base images and compiles dependencies. Subsequent builds are faster thanks to Docker's layer cache.

 

3 Testing the Image Locally

Before pushing anything to a registry or touching the Kyma deployment, it is worth confirming that the image works as expected on your local machine. The companion repository from Part 1 already contains a docker-compose.yml for running n8n locally — you just need to point it at your new custom image instead of the default one.

Open the docker-compose.yml and change the image field from n8nio/n8n:latest to n8n-sap-ai-core:latest (the tag you used in the build step). Then start the stack:

 

docker compose up -d

 

Open http://localhost:5678 in your browser. Log in, create a new workflow, and open the node picker. Search for “SAP” — the SAP AI Core Chat Model node should appear in the results. If it does, the custom image is working correctly.

 

n8n node picker (local instance) with “SAP” typed in the search field and the SAP AI Core Chat Model node visible in the results list with the SAP icon.

 

 

4 Pushing to a Private Registry

Kyma pulls container images from a registry — either Docker Hub (public) or a private registry that you control. For enterprise use, a private registry is the right choice: it keeps your custom image within your organisation's infrastructure and prevents unauthorised access.

The exact registry URL depends on what your organisation uses. Common choices include the SAP Container Registry, a registry in your cloud provider's environment, or a self-hosted Harbor instance. In this tutorial we use a Docker Hub private repository.

Tag and Push

First, tag your locally built image with the full registry path. Then push it:

 

# Tag the image with your registry path
# (replace <your-registry> with your registry host or Docker Hub username)
docker tag n8n-sap-ai-core:latest <your-registry>/n8n-sap-ai-core:1.0.0

# Push to the registry
docker push <your-registry>/n8n-sap-ai-core:1.0.0

 

Use a specific version tag (like 1.0.0) rather than latest for anything going to Kyma. This makes rollbacks straightforward if a new version causes problems: you simply update the deployment YAML back to the previous tag and re-apply.
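For example, a rollback could look like this (assuming the deployment and container are both named n8n, as in the Part 1 manifest; <your-registry> is a placeholder):

```shell
# Option 1: point the deployment back at the previous tag directly
kubectl set image deployment/n8n n8n=<your-registry>/n8n-sap-ai-core:1.0.0 -n n8n

# Option 2: undo the most recent rollout
kubectl rollout undo deployment/n8n -n n8n
```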

 

Registry Credentials in Kyma

If your registry is private, Kyma needs credentials to pull the image. You provide these as a Kubernetes imagePullSecret: a Secret of type kubernetes.io/dockerconfigjson that contains your registry login. Create the secret in the n8n namespace and reference it in your deployment manifest. The Kyma Dashboard has a built-in form for creating Docker registry secrets under Configuration → Secrets.
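If you prefer the command line, the same secret can be created with kubectl (the secret name regcred and the placeholder values are examples; substitute your own registry login):

```shell
# Create a docker-registry pull secret in the n8n namespace
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n n8n
```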


5 Updating the Kyma Deployment

With the image in the registry, the final step is telling the Kyma deployment to use it. Open n8n-kyma.yaml from the Part 1 repository and locate the image field inside the container spec. Change it from the default n8n image to your custom one:

 

# Before
image: n8nio/n8n:latest

# After
image: <your-registry>/n8n-sap-ai-core:1.0.0

 

If your registry is private, also add the imagePullSecrets entry pointing to the secret you created in the previous step. Then apply the updated manifest:

 

kubectl apply -f n8n-kyma.yaml -n n8n
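For orientation, after both edits the relevant part of the Pod template spec in n8n-kyma.yaml might look like this sketch (the container name and the secret name regcred are illustrative; match them to your actual manifest):

```yaml
spec:
  imagePullSecrets:
    - name: regcred        # the docker-registry secret created earlier
  containers:
    - name: n8n
      image: <your-registry>/n8n-sap-ai-core:1.0.0
```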

 

Kubernetes detects that the image has changed and performs a rolling update — it starts a new Pod with the new image, waits for it to become healthy, then terminates the old one. Your n8n instance will be briefly in a transitional state but will not go fully offline. Watch the rollout with:

 

kubectl rollout status deployment/n8n -n n8n

 

Once the rollout is complete, open your live n8n Kyma URL in a browser. Create a new workflow and search for “SAP” in the node picker. The custom node should now be available in your production instance.

 

6 Building the Sample Workflow

With the custom node deployed, we can build a real AI workflow. The scenario is an IT Helpdesk Assistant, a conversational agent that can look up tickets, knowledge articles, and employee skills from a live OData API and answer questions about them in natural language. This workflow also serves as the foundation for Part 3, where we surface it through Joule Studio.

 

Workflow Overview

Step 1 — Create the Workflow and Add the Agent Node

In n8n, create a new workflow and name it IT-Helpdesk-Agent. Add an AI Agent node to a blank canvas; n8n automatically pairs it with a Chat Trigger, so both nodes appear together. This gives you a built-in chat interface directly inside n8n where you can test the agent in real time without any additional setup.

Step 2 — Configure the AI Agent Node

In the AI Agent node's configuration, set the System Message to give the agent a proper persona.

 

 

AI Agent node configuration panel showing the Prompt field, with the “Chat Model” and “Tools” inputs visible at the bottom of the node, ready to be connected.

 

 

Step 3 — Connect the SAP AI Core Chat Model

Add the SAP AI Core Chat Model node to the canvas. Configure it with your AI Core credentials, Base URL, Deployment ID, and the model name of your active deployment. Connect the output of this node to the Chat Model input of the AI Agent node.

The agent will now use your SAP-hosted model as its reasoning engine for every workflow execution.

 

Step 4 — Add the OData Tool

Add an HTTP Request Tool node: this is the tool that gives the agent access to real data. The Description tells the agent when and how to use this tool. The URL is where the dynamic behaviour lives. Instead of a fixed endpoint, the URL uses n8n's $fromAI() expression to let the model itself decide which entity to query:
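Such a URL might look like the following (the OData host and entity names are placeholders for your own service; the $fromAI() expression is the n8n-specific part):

```
https://<your-odata-service>/odata/v4/helpdesk/{{ $fromAI('entity', 'Entity set to query, e.g. Tickets, KnowledgeArticles or EmployeeSkills', 'string') }}
```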

 

HTTP Request Tool node configuration panel showing the Description field and the URL field.

 

At this point you have a working AI workflow running entirely within your SAP BTP environment: powered by SAP AI Core, querying a live OData API, and producing natural-language answers. No external AI provider, no data leaving your BTP boundary.

 


 


SapAiCore.node.ts:
import {
    INodeType,
    INodeTypeDescription,
    ISupplyDataFunctions,
    SupplyData,
    NodeConnectionTypes,
} from 'n8n-workflow';
import { ChatOpenAI } from '@langchain/openai';
import * as https from 'https';
import { URL } from 'url';

function fetchXsuaaToken(tokenUrl: string, clientId: string, clientSecret: string): Promise<string> {
    return new Promise((resolve, reject) => {
        const body = 'grant_type=client_credentials';
        const basicAuth = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');

        let url: URL;
        try {
            url = new URL(tokenUrl);
        } catch {
            return reject(new Error(`Invalid accessTokenUrl: "${tokenUrl}"`));
        }

        const req = https.request({
            hostname: url.hostname,
            path: url.pathname + (url.search ?? ''),
            port: url.port ? parseInt(url.port) : 443,
            method: 'POST',
            headers: {
                'Authorization': `Basic ${basicAuth}`,
                'Content-Type': 'application/x-www-form-urlencoded',
                'Content-Length': Buffer.byteLength(body),
            },
        }, (res) => {
            let data = '';
            res.on('data', (chunk) => { data += chunk; });
            res.on('end', () => {
                try {
                    const json = JSON.parse(data);
                    if (json.access_token) {
                        resolve(json.access_token);
                    } else {
                        reject(new Error(
                            `XSUAA [${res.statusCode}]: ${data} | ` +
                            `clientId prefix: ${clientId.substring(0, 10)}... | ` +
                            `tokenUrl: ${tokenUrl}`
                        ));
                    }
                } catch {
                    reject(new Error(`XSUAA parse error [${res.statusCode}]: ${data}`));
                }
            });
        });

        req.on('error', (err) => reject(new Error(`XSUAA request error: ${err.message}`)));
        req.write(body);
        req.end();
    });
}

export class SapAiCore implements INodeType {
    description: INodeTypeDescription = {
        displayName: 'SAP AI Core Chat Model',
        name: 'sapAiCoreChatModel',
        icon: 'file:sap.svg',
        group: ['transform'],
        version: 1,
        description: 'SAP Generative AI Hub Chat Model via AI Core',
        defaults: {
            name: 'SAP AI Core Model',
        },
        inputs: [],
        outputs: [NodeConnectionTypes.AiLanguageModel],
        credentials: [
            {
                name: 'sapAiCoreAuth',
                required: true,
            },
        ],
        properties: [
            {
                displayName: 'Base URL',
                name: 'baseUrl',
                type: 'string',
                default: '',
                required: true,
                placeholder: 'https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com',
                description: 'The AI_API_URL from your SAP Service Key',
            },
            {
                displayName: 'Deployment ID',
                name: 'deploymentId',
                type: 'string',
                required: true,
                default: '',
                description: 'The ID of your specific model deployment in SAP AI Core',
            },
            {
                displayName: 'Model Name',
                name: 'modelName',
                type: 'string',
                default: 'gpt-4o',
                description: 'The name of the model being used (e.g., gpt-4o, gpt-35-turbo)',
            },
            {
                displayName: 'Resource Group',
                name: 'resourceGroup',
                type: 'string',
                default: 'default',
                description: 'The AI Core resource group',
            },
        ],
    };

    async supplyData(this: ISupplyDataFunctions, itemIndex: number): Promise<SupplyData> {
        const credentials = await this.getCredentials('sapAiCoreAuth');

        const baseUrlRaw = this.getNodeParameter('baseUrl', itemIndex, '') as string;
        const baseUrl = baseUrlRaw.replace(/\/$/, '').replace(/\/v2$/, '');
        const deploymentId = this.getNodeParameter('deploymentId', itemIndex, '') as string;
        const modelName = this.getNodeParameter('modelName', itemIndex, 'gpt-4o') as string;
        const resourceGroup = this.getNodeParameter('resourceGroup', itemIndex, 'default') as string;

        const accessTokenUrl = (credentials.accessTokenUrl as string ?? '').trim();
        const clientId = (credentials.clientId as string ?? '').trim();
        const clientSecret = (credentials.clientSecret as string ?? '').trim();

        if (!accessTokenUrl || !clientId || !clientSecret) {
            throw new Error(
                `SAP Auth: credentials incomplete — ` +
                `tokenUrl=${accessTokenUrl ? 'OK' : 'MISSING'}, ` +
                `clientId=${clientId ? 'OK' : 'MISSING'}, ` +
                `clientSecret=${clientSecret ? 'OK' : 'MISSING'}. ` +
                `Delete and re-create the credential in n8n if you recently changed the node.`
            );
        }

        const apiToken = await fetchXsuaaToken(accessTokenUrl, clientId, clientSecret);

        const model = new ChatOpenAI({
            openAIApiKey: apiToken,
            configuration: {
                baseURL: `${baseUrl}/v2/inference/deployments/${deploymentId}`,
                defaultHeaders: {
                    'Authorization': `Bearer ${apiToken}`,
                    'ai-resource-group': resourceGroup,
                    'Content-Type': 'application/json',
                },
                defaultQuery: {
                    'api-version': 'latest',
                },
            },
            modelName: modelName,
            maxRetries: 2,
        });

        return { response: model };
    }
}

SapAiCoreAuth.credentials.ts:

import { ICredentialType, INodeProperties } from 'n8n-workflow';

export class SapAiCoreAuth implements ICredentialType {
    name = 'sapAiCoreAuth';
    displayName = 'SAP AI Core OAuth2 (Client Credentials)';
    documentationUrl = 'https://help.sap.com/docs/sap-ai-core';

    properties: INodeProperties[] = [
        {
            displayName: 'Access Token URL',
            name: 'accessTokenUrl',
            type: 'string',
            default: '',
            placeholder: 'https://tenant.authentication.sap.hana.ondemand.com/oauth/token',
            required: true,
        },
        {
            displayName: 'Client ID',
            name: 'clientId',
            type: 'string',
            default: '',
            required: true,
        },
        {
            displayName: 'Client Secret',
            name: 'clientSecret',
            type: 'string',
            typeOptions: { password: true },
            default: '',
            required: true,
        },
    ];
}
Dockerfile:
# Stage 1: Build 
FROM node:20 AS builder
WORKDIR /build
COPY package*.json tsconfig.json ./
RUN npm install --include=dev --ignore-scripts
COPY . .


# This is critical for proper credential parsing and script execution in Linux containers
RUN find /build -type f \( -name "*.js" -o -name "*.ts" -o -name "*.json" \) -exec sed -i 's/\r$//' {} \; || true

RUN npm run build

# Stage 2: Final n8n Image
FROM n8nio/n8n:latest

USER root

# 1. Use a system path instead of /home/node/.n8n 
# This prevents the PVC from "hiding" your files
RUN mkdir -p /usr/local/lib/node_modules/n8n-nodes-sap-ai-core

# 2. Copy the compiled files and package.json
COPY --from=builder /build/dist /usr/local/lib/node_modules/n8n-nodes-sap-ai-core/dist
COPY --from=builder /build/package.json /usr/local/lib/node_modules/n8n-nodes-sap-ai-core/package.json

# 3. Ensure the icon is in the correct place
COPY --from=builder /build/nodes/SapAiCore/sap.svg /usr/local/lib/node_modules/n8n-nodes-sap-ai-core/dist/nodes/SapAiCore/sap.svg

# 4. Install dependencies in this new system folder
WORKDIR /usr/local/lib/node_modules/n8n-nodes-sap-ai-core
RUN npm install --omit=dev --legacy-peer-deps

# 5. Set permissions so the 'node' user can read the files
RUN chown -R node:node /usr/local/lib/node_modules/n8n-nodes-sap-ai-core

USER node
WORKDIR /home/node
 

 

package.json:
{
    "name": "n8n-nodes-sap-ai-core",
    "version": "1.0.0",
    "main": "dist/nodes/SapAiCore/SapAiCore.node.js",
    "scripts": {
        "build": "tsc && npm run copy:assets",
        "copy:assets": "copyfiles -u 1 \"nodes/**/*.svg\" dist/"
    },
    "n8n": {
        "n8nNodesApiVersion": 1,
        "credentials": [
            "dist/credentials/SapAiCoreAuth.credentials.js"
        ],
        "nodes": [
            "dist/nodes/SapAiCore/SapAiCore.node.js"
        ]
    },
    "dependencies": {
        "@langchain/core": "^1.1.41",
        "@langchain/openai": "^0.0.28"
    },
    "peerDependencies": {
        "n8n-workflow": "*"
    },
    "devDependencies": {
        "copyfiles": "^2.4.1",
        "n8n-workflow": "*",
        "typescript": "5.4.3"
    }
}

 

AI Usage Disclosure: Gen AI was used exclusively for linguistic refinements such as grammar, spelling, and phrasing as well as for the structural organisation of this text. All conceptual content, technical knowledge, architecture decisions, and implementation steps were developed independently by the author.

 

 

Up Next — Part 3 of 3

In Part 3 we take this workflow further. We will expose it as a callable tool inside Joule Studio — SAP's agent development environment — turning this n8n automation into a skill that Joule agents can invoke conversationally from within any Joule-enabled SAP application.
