Aparna Biju -

Agentic AI- Autonomous, Not Responsible?

Agentic AI is making headlines with its ability to carry out complex tasks under minimal human supervision. The main feature that sets Agentic AI apart from other AI models is its ability to make decisions on its own.

Though independent decision-making gives agentic AI its freedom, with that freedom comes great responsibility. As agentic AI grows in popularity, so does its vulnerability. An AI agent can carry out an entire work process seamlessly, from gathering data and analyzing it to drafting a report and finally sending the email. But its autonomous nature also opens the door to risks such as data breaches and misuse of access credentials.

Deloitte has forecast that 25% of organizations using generative AI will incorporate agentic AI into their offerings by 2025, a share expected to rise to 50% by 2027. Adoption at that scale is sobering: if the necessary safeguards are not put in place, it can create serious challenges with consequences far beyond any single organization. AI agents are trained on enormous amounts of data, and if an attacker rigs or tampers with that data, the damage can be severe. Because agentic AI can remember past commands, it can not only anticipate a user's needs but also adapt based on previous interactions. That same memory, however, can be manipulated to insert false information, and an attacker who exploits it can steer the AI toward harmful decisions.

In today’s context, where governments wage war against one another, manipulation of AI can even lead to data leaks that put an entire country at risk: military operations, diplomatic discussions, and other sensitive matters could be exposed. Fake emails and social media posts can be generated to mislead people, and deepfake videos are already a common sight. Imagine an agentic AI creating a deepfake video on its own to spread false information.

Like two sides of the same coin, agentic AI comes with its own limitations and challenges, and it is therefore important to protect it from exploitation. Developers and organizations must ensure that AI remains trustworthy and safe for people to use.