
AI: Opportunities and Risks of Agentic AI

  • By sujay
  • 18/08/2025
  • 93 Views

The AI landscape is developing rapidly. Until now, AI systems have primarily supported users and reacted to instructions: they made recommendations or carried out predefined tasks on request. Now the age of Agentic AI begins: systems that act autonomously, adapt in real time, and work together like digital colleagues.


With the increasing autonomy of AI, however, new risks also arise. So how can we deal responsibly with this new reality? At SAP, we do not leave this question to chance.

Imagine you buy a car. You expect it to meet all safety standards, regardless of where the components are manufactured or how the car is assembled. The processes that run in the background do not change your expectations of safety. The same applies to Agentic AI.

Agentic AI systems are more than tools. They are intelligent agents that plan, learn from experience, correct themselves, and collaborate. They can coordinate complex processes, make decisions, and even work with other agents or humans to achieve a goal. However, this progress brings with it a new level of complexity and risk.

Capabilities and risks of Agentic AI

Agentic AI systems offer powerful capabilities such as planning, reflection, and cooperation, enabling them to carry out complex tasks autonomously. They can devise strategies, learn from mistakes, use external tools, and coordinate with people and other agents.

However, these capabilities also pose risks. Incorrect planning can lead to inefficiencies. Reflection can reinforce unethical behavior. The use of external tools can cause instability if systems interact in unpredictable ways. And unclear cooperation can lead to misunderstandings and mistakes. Balancing all of this and taking suitable safety precautions is essential for the safe, ethical use of Agentic AI.

Deal with autonomy: find the right balance between freedom and control

One of the most urgent challenges in the use of Agentic AI is dealing appropriately with its autonomy. Left uncontrolled, these systems can drift off course, interpret context incorrectly, or introduce subtle risks that are not immediately recognizable. Companies therefore have to find the right balance between freedom and control.

We have learned that the level of control should depend on the level of risk. Areas where a lot is at stake, such as healthcare or human resources, require comprehensive human oversight. Routine tasks with low risk, on the other hand, allow more autonomy. In addition, continuous monitoring is essential. Like any complex technology, agentic AI systems must be checked regularly to ensure quality, compliance, and reliability.

An essential element of this control is a "human in the loop" concept. At critical points, people bring in their judgment, so that automated actions remain aligned with human values and the goals of the company.
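The risk-based escalation described above can be illustrated with a minimal sketch. The risk levels, the escalation threshold, and the function names are illustrative assumptions, not an SAP API:

```python
from dataclasses import dataclass
from enum import IntEnum

class Risk(IntEnum):
    LOW = 1     # routine tasks: more autonomy is acceptable
    MEDIUM = 2
    HIGH = 3    # e.g. healthcare, HR: human oversight is required

@dataclass
class AgentAction:
    description: str
    risk: Risk

def execute(action: AgentAction, approve) -> str:
    """Run low-risk actions autonomously; escalate high-risk ones
    to a human reviewer (the 'human in the loop')."""
    if action.risk >= Risk.HIGH:
        if not approve(action):
            return "rejected by human reviewer"
        return f"executed after approval: {action.description}"
    return f"executed autonomously: {action.description}"

# A routine task runs on its own; a sensitive HR decision is escalated.
print(execute(AgentAction("archive old invoices", Risk.LOW), approve=lambda a: True))
print(execute(AgentAction("adjust employee salary", Risk.HIGH), approve=lambda a: False))
```

The design choice here is that the gate is decided by the action's risk class, not by the agent itself, mirroring the principle that the level of control follows the level of risk.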

From the beginning, this principle has been a central element of SAP's approach to the ethical handling of AI. It reflects our conviction that AI should support people's decision-making, not replace it. To support this, SAP has introduced mandatory ethics reviews for all Agentic AI use cases. This ensures that every use of AI is checked for its ethical impact and is in line with our principles for the responsible use of AI.

Create transparency and responsibility in AI systems

Transparency is not just a buzzword, but a fundamental prerequisite for building trust in Agentic AI. During the design phase, it is important to classify AI systems from the start according to the complexity and risk of the tasks they carry out. The necessary safety precautions depend on this classification, which ensures that mechanisms for human intervention are integrated from the beginning.


Once the system is running, transparency is ensured through the traceability and auditability of its actions. Development teams and end users must be able to understand what the system does and why. It is crucial that responsibility always lies with humans or legal entities, never with the AI itself.
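A minimal sketch of such traceability could record each agent action together with the reason it was taken and the person or legal entity accountable for it. The field names and log structure below are illustrative assumptions, not SAP's implementation:

```python
import json
from datetime import datetime, timezone

def log_agent_action(log: list, agent: str, action: str,
                     reason: str, accountable_party: str) -> dict:
    """Append an auditable record of an agent's action.
    Responsibility is always attached to a human or legal
    entity, never to the AI itself."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "reason": reason,                       # why the agent acted
        "accountable_party": accountable_party, # a human or legal entity
    }
    log.append(entry)
    return entry

audit_log: list = []
log_agent_action(audit_log, agent="invoice-agent",
                 action="flagged invoice #4711 for review",
                 reason="amount exceeds configured threshold",
                 accountable_party="Finance Department, ACME Corp")
print(json.dumps(audit_log, indent=2))
```

Making the `accountable_party` field mandatory, rather than optional, is one way to enforce in code the rule that responsibility never rests with the AI itself.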

Rethink governance and regulation for Agentic AI

Agentic AI is already in use, but there are no new regulations created specifically for it. Existing laws and frameworks such as the GDPR continue to apply and offer a solid governance basis. However, far more technical care is now required to remain compliant and ethically sound. Companies have to introduce more robust processes: analyze applications more precisely, carry out risk-based controls based on the potential impact of the AI system, and ensure that ethical and legal standards are met through improved design methods and ongoing testing.

Focus on human values in design

Agentic AI cannot be an excuse for loosened standards. Our position at SAP is clear: even in autonomous systems, AI must meet the highest ethical benchmarks. This means embedding principles such as fairness, transparency, and human inclusion directly into the design.

Ultimately, all users should have the necessary tools and understanding to monitor system behavior and intervene when necessary.

Build trust in the unknown

Trust in AI does not simply arise by itself; it has to be built deliberately and continuously strengthened. This works best when stakeholders are given the right amount of information. Too much detail can be overwhelming and counterproductive; too little can lead to blind trust or fear of the unknown. The key is to communicate clearly what the system can do, what its risks and limitations are, and how to use it appropriately. To create a safe, secure, and trustworthy AI environment, it is important to enable users to critically assess the AI's behavior and to know when to intervene.

Rethink KPIs in the AI-supported workplace

As AI agents such as our Joule agents take on more and more tasks, people's tasks will naturally also change and develop. To keep up with this change, companies have to rethink how they define and measure success. This begins with investments in change management and training programs that prepare employees to work effectively with AI. It also includes redefining productivity indicators: in addition to task completion, keeping an eye on how well people and AI work together. Success should be measured by how efficiently teams use AI to gain new insights from data and enable innovation.

Develop AI that creates trust

Agentic AI is not just another phase, but a transformation. And as with any transformative technology, success depends on how it is developed, controlled, and used.

At its best, Agentic AI extends human capabilities, accelerates innovation, and helps master challenges that were previously considered too complex. But Agentic AI also requires a new level of care, control, and ethical consideration.

Walter Sun is Senior Vice President and head of the AI area at SAP.


