
SAP HANA Cloud Intelligent Application – CAP Appli…

  • By sujay
  • 12/05/2026

Prerequisites

  • An active SAP BTP Free Basic Trial account (sign up here)
  • Access to SAP Business Application Studio (included in the trial)
  • SAP AI Core with Extended plan provisioned in the trial environment
  • SAP HANA Cloud instance with the Vector Engine enabled
  • Basic familiarity with JavaScript / Node.js and SAP CAP concepts

What You Will Learn

This lesson is designed to take you from concepts to a running, deployed application. By the end, you will have direct, hands-on experience with each layer of a modern enterprise AI stack on SAP BTP.

After completing this lesson you will be able to:

  • Design a CDS data model that includes SAP HANA Cloud's native Vector(3072) type alongside standard structured data
  • Define OData services in CAP that expose document management and chat pipeline endpoints
  • Implement a RAG engine in Node.js using the SAP AI SDK's AzureOpenAiChatClient to call GPT-4o with context retrieved from HANA
  • Understand how document chunking, vector embedding, and cosine-similarity search work together to produce grounded AI responses
  • Deploy a multi-module MTA application (UI5 Fiori frontend + CAP service + HDI deployer) to SAP BTP Cloud Foundry using an automated deployment script
  • Operate the finished application — upload documents, manage processing states (UPLOADED → PROCESSING → READY / ERROR), start chat sessions, and interrogate document content with natural language
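The cosine-similarity search mentioned above is worth making concrete. In the lesson, HANA Cloud's vector engine computes the score natively; the following plain-JavaScript sketch (illustrative only, not the lesson's code) shows the math behind the similarity percentage attached to each retrieved chunk:

```javascript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|). Identical directions score 1, orthogonal 0.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1 (identical)
console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal)
```

Retrieval simply ranks all stored chunk embeddings by this score against the embedding of the user's question and keeps the top matches.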

Why these skills matter: RAG is the dominant pattern for safely putting LLMs to work on private enterprise data. By building this yourself — rather than consuming a black-box service — you understand exactly where hallucination risk is managed, how retrieval quality affects answer quality, and how the same HANA database that holds your business data can also power semantic search. That understanding is directly transferable to real customer and internal AI projects on SAP BTP.

The Lesson at a Glance

You clone a pre-scaffolded CAP project from GitHub into SAP Business Application Studio. The project has the full application structure in place — UI5 frontend, service layer, library files, and deployment scripts — but four key files are deliberately left empty. Your job is to implement them. Each file targets one layer of the stack and comes with a clear description of what it does and the exact code to apply.

Once the four changes are saved you run two shell commands and the complete application — three Cloud Foundry apps, one HDI container, one shared AI Core service binding — is live.

Inside the Four Implementations

Database Schema — db/schema.cds

You define four CDS entities in a single file. The key detail is DocumentChunks, which includes a Vector(3072) column — HANA Cloud's native type for storing the 3,072-dimension embeddings generated by text-embedding-3-large. Storing vectors directly in HANA means your relational joins, access controls, and backup policies all cover your AI data automatically.
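As a rough sketch of what such a model can look like (entity and field names here are illustrative assumptions, not the lesson's exact schema):

```cds
// Illustrative sketch only — names are assumptions, not the lesson's code.
namespace rag.db;

using { cuid, managed } from '@sap/cds/common';

entity Documents : cuid, managed {
  fileName : String(255);
  status   : String(20); // UPLOADED | PROCESSING | READY | ERROR
  chunks   : Composition of many DocumentChunks on chunks.document = $self;
}

entity DocumentChunks : cuid {
  document  : Association to Documents;
  text      : LargeString;
  embedding : Vector(3072); // HANA Cloud's native vector type
}
```

The composition means deleting a document removes its chunks with it, which is what makes the cascade-delete behaviour in the service layer straightforward.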


Document Service — srv/document-service.cds

This OData service at /api/documents hides the raw chunks association from API consumers — callers interact only with document-level metadata. You declare a cascade-delete action, a status-polling function, and a delete-preview function that tells the UI exactly how many sessions, messages, and chunks will be removed before the user confirms.
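A hedged sketch of that service shape (action and function names below are illustrative assumptions based on the description, not the lesson's exact code):

```cds
// Sketch only — operation names and signatures are assumptions.
using { rag.db as db } from '../db/schema';

@path: '/api/documents'
service DocumentService {
  // Chunks are deliberately not exposed — callers see metadata only
  entity Documents as projection on db.Documents excluding { chunks };

  // Cascade delete of a document and all dependent data
  action   deleteDocumentCascade(ID : UUID) returns Boolean;

  // Status polling for the UI
  function getDocumentStatus(ID : UUID) returns String;

  // Tells the UI what a delete would remove, before the user confirms
  function getDeletePreview(ID : UUID) returns {
    sessions : Integer;
    messages : Integer;
    chunks   : Integer;
  };
}
```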


Chat Service — srv/chat-service.cds

The sendMessage action is the centre of the lesson. Its return type is deliberately rich — the caller receives not just the AI reply, but a typed array of source chunks with document name and similarity score. This design makes the UI's source-citation panel possible and, more importantly, teaches you how to surface retrieval evidence to end users — a critical pattern for AI transparency in enterprise apps.
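In CDS terms, a return type like that can be sketched as follows (type and field names are illustrative assumptions drawn from the description above):

```cds
// Sketch only — field names are assumptions, not the lesson's code.
service ChatService {
  type SourceChunk {
    documentName : String;
    similarity   : Decimal(5,4); // cosine-similarity score
    text         : LargeString;
  }

  action sendMessage(sessionID : UUID, message : String) returns {
    reply   : LargeString;
    sources : many SourceChunk; // typed array powering the citation panel
  };
}
```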


RAG Engine — srv/lib/rag-engine.js

This is where everything comes together. The engine uses the SAP AI SDK's AzureOpenAiChatClient (singleton, instantiated once) to call GPT-4o. The prompt is assembled in layers: a system instruction that constrains the model to the retrieved context, the context block itself (each chunk labelled with its source document and cosine-similarity percentage), up to ten prior turns of conversation history, and finally the user's current question.
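The layered assembly described above can be sketched as a pure function (function and field names here are illustrative, not the lesson's exact code; in the lesson the resulting messages array is passed to the AzureOpenAiChatClient along with options such as temperature: 0.3):

```javascript
// Hypothetical sketch of layered prompt assembly for a RAG call.
function buildMessages(question, chunks, history) {
  // Context block: each chunk labelled with source and similarity
  const context = chunks
    .map(c => `[${c.documentName} | ${(c.similarity * 100).toFixed(1)}%]\n${c.text}`)
    .join('\n---\n');

  return [
    // Layer 1: system instruction constraining the model to the context
    { role: 'system',
      content: `Answer only from the context below. If the answer is not there, say so.\n\nContext:\n${context}` },
    // Layer 2: up to ten prior turns of conversation history
    ...history.slice(-10),
    // Layer 3: the user's current question
    { role: 'user', content: question }
  ];
}

const messages = buildMessages(
  'What is the price of product X?',
  [{ documentName: 'catalog.csv', similarity: 0.91, text: 'Product X: 49 EUR' }],
  [{ role: 'user', content: 'Hi' }, { role: 'assistant', content: 'Hello!' }]
);
console.log(messages.length); // 4: system + 2 history turns + question
```

Keeping the assembly pure like this makes the prompt easy to unit-test and to inspect when retrieval quality needs debugging.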


What the temperature setting teaches you: Setting temperature: 0.3 keeps GPT-4o factual and close to the retrieved context. Higher values produce more creative but less reliable answers — an important trade-off to understand when designing enterprise AI interactions.

From Code to Live App in Two Commands

Once the four files are saved, the automated deployment script handles everything: it validates your configuration, generates the MTA extension, builds all modules, deploys to Cloud Foundry, binds the shared AI Core service, and restages the service app.


Running the Application

The lesson includes a sample product catalog CSV you can use immediately to experience the end-to-end RAG pipeline.

Upload a document — click [+] in the Documents panel and select a PDF, TXT, or CSV file (up to 10 MB):


Watch the pipeline run — status moves from PROCESSING (chunking + embedding in progress) to READY automatically.
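The lifecycle above is a small state machine. A minimal sketch (state names come from the lesson; the guard function itself is illustrative):

```javascript
// Allowed document status transitions, as described in the lesson.
const TRANSITIONS = {
  UPLOADED:   ['PROCESSING'],
  PROCESSING: ['READY', 'ERROR'],
  READY:      [],        // terminal: ready for chat
  ERROR:      []         // terminal: processing failed
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}

console.log(canTransition('UPLOADED', 'PROCESSING')); // true
console.log(canTransition('READY', 'PROCESSING'));    // false
```

Modelling the transitions explicitly keeps the status-polling endpoint honest: the UI can never observe a state jump the pipeline does not actually perform.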


Explore with natural language — the lesson uses the included product catalog to demonstrate a realistic business query:


What to Look Forward To

This lesson is a foundation, not a ceiling. Once your RAG app is running you will immediately see where the interesting problems lie — and where the next experiments are:

  • Chunk size and overlap tuning — the lesson uses ~1,000-token chunks with ~200-token overlap and sentence-aware boundaries. Changing these parameters has a direct, observable impact on answer quality that you can test immediately with your own documents.
  • Retrieval depth — the default retrieves the top 10 similar chunks per query. Lowering this tightens context; raising it risks adding noise. You can observe the trade-off in the sources panel.
  • Multi-document sessions — the data model already supports reassigning a session to a different document (updateSession action). Experimenting with this reveals how document scope affects grounding.
  • Your own documents — the product catalog is illustrative. The real value becomes visible the moment you upload something from your own domain and start asking it questions that would normally require reading through the whole file.
  • Extending the service layer — the OData services you define in the lesson are deliberate starting points. Adding new actions, custom annotations, or additional entities follows the same CAP patterns you already applied.
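To make the first bullet concrete, sentence-aware chunking with overlap can be sketched as follows (a simplified illustration, not the lesson's implementation; tokens are approximated by words for brevity):

```javascript
// Simplified sketch of sentence-aware chunking with overlap.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  // Split on sentence boundaries; fall back to the whole text
  const sentences = text.match(/[^.!?]+[.!?]+|\S+$/g) || [text];
  const chunks = [];
  let current = [];
  let length = 0;
  for (const sentence of sentences) {
    const words = sentence.trim().split(/\s+/);
    if (length + words.length > chunkSize && current.length > 0) {
      chunks.push(current.join(' '));
      // Carry the last `overlap` words into the next chunk
      current = current.slice(-overlap);
      length = current.length;
    }
    current.push(...words);
    length += words.length;
  }
  if (current.length > 0) chunks.push(current.join(' '));
  return chunks;
}

// Tiny demo with chunkSize 6 and overlap 2:
console.log(chunkText('One two three. Four five six. Seven eight nine ten.', 6, 2));
// ["One two three. Four five six.", "five six. Seven eight nine ten."]
```

The overlap is what preserves continuity across chunk boundaries, so a fact split across two sentences still lands intact in at least one chunk; tuning both values against your own documents is the fastest way to see their effect on answer quality.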


Start Building Today

The lesson is live now inside the SAP BTP Free Basic Trial. Log in to your trial account, open SAP Business Application Studio, and find the “Build a RAG Application with SAP CAP & HANA Cloud” exercise in the curriculum.

 
