Growth of AI Agents Puts Corporate Controls to the Test


The technology sector’s famous phrase, “Move fast and break things,” just got a shot in the arm with the advent of autonomous artificial intelligence agents.

“This isn’t a technical upgrade; it’s a governance revolution,” Trustly Chief Legal and Compliance Officer Kathryn McCall told PYMNTS during a conversation for the June edition of the “What’s Next in Payments” series, “What’s Next in Payments: Secret Agent.”

AI agents are now not only recommending products but executing financial transactions, and the payments industry is racing to integrate these software agents capable of acting autonomously on a user’s behalf.

While the benefits could be immense, the risks are just as stark. McCall is one of a growing number of voices urging restraint, forethought and rigorous systems thinking.

“You’re messing with people’s money here,” she said. “This is a lot different from using an AI agent to plan your vacation in Paris.”

AI agents have evolved from intelligent helpers to critical decision makers inside consumer banking flows. They aren’t just suggesting purchases — they’re booking flights, sending payments and signing documents. But as McCall pointed out, this leap has implications for privacy, security and legality.

The nature of these agents introduces complexities beyond those of traditional software-as-a-service (SaaS) platforms.

Compliance Is No Longer Reactive

Where legacy systems typically had deterministic behaviors with known attack surfaces, agentic AI is non-deterministic by design. That means unpredictable outputs, evolving capabilities and new security considerations such as prompt injection, adversarial attacks and data leakage risks, just to name a few.

“[AI agents] can take actions on behalf of users or systems,” McCall said. “They can execute tasks, write code, make API calls.”

That’s the problem. Each new capability increases the “blast radius” if something goes wrong.

At Trustly, McCall isn’t just sounding the alarm. She’s drafting the blueprint for building internal controls before regulators inevitably catch up, proposing what she called “bounded autonomy” — a principle rooted in layered governance, precise scoping and the preservation of human agency at crucial decision points.

“Can your agent initiate invoice creation but not approve disbursement without human review?” McCall asked. “What’s the scope? What are they allowed to do and what are they not allowed to do?”

The key to this is infrastructure. McCall said she recommends isolated containers for agent operations, sandbox environments, time-based privileges and hard kill switches to “pause agent action at critical thresholds.”

That includes high-value transactions, cross-border transfers and new vendor engagements — all of which carry financial and legal risk.
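The “bounded autonomy” pattern McCall describes — an explicit action scope, human review at critical thresholds and a hard kill switch — can be sketched in a few lines. Everything below (the class name, the threshold value, the action names) is a hypothetical illustration of the principle, not Trustly’s implementation.

```python
# Hypothetical sketch of "bounded autonomy": an agent may initiate
# actions within an explicit scope, but high-risk actions are held
# for human review, and a hard kill switch pauses the agent entirely.

HIGH_VALUE_THRESHOLD = 10_000  # illustrative threshold, in dollars

class BoundedAgent:
    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)  # explicit scope
        self.killed = False          # hard kill switch
        self.pending_review = []     # actions awaiting human approval

    def kill(self):
        """Pause all agent action at a critical threshold."""
        self.killed = True

    def request(self, action, amount=0, cross_border=False):
        if self.killed:
            return "blocked: agent paused"
        if action not in self.allowed_actions:
            return "blocked: out of scope"
        # High-value or cross-border actions require human review.
        if amount >= HIGH_VALUE_THRESHOLD or cross_border:
            self.pending_review.append((action, amount))
            return "held: awaiting human review"
        return "executed"

agent = BoundedAgent(allowed_actions={"create_invoice"})
print(agent.request("create_invoice", amount=500))        # executed
print(agent.request("approve_disbursement", amount=500))  # blocked: out of scope
print(agent.request("create_invoice", amount=50_000))     # held: awaiting human review
agent.kill()
print(agent.request("create_invoice", amount=500))        # blocked: agent paused
```

Note how the agent can initiate invoice creation but can never approve a disbursement — that action simply isn’t in its scope, answering McCall’s question structurally rather than procedurally.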

“We’ve got to retain human accountability,” McCall said. “You’ve got to treat these AI agents as non-human actors with unique identities in your system. You need audit logs, human-readable reasoning and forensic replay.”
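Treating agents as non-human actors with unique identities implies that every action is logged against the agent’s own identity, with its stated reasoning preserved so an incident can be replayed forensically. A minimal sketch, with hypothetical identifiers and an in-memory log standing in for a real append-only store:

```python
import json
import time
import uuid

AUDIT_LOG = []  # in practice, an append-only, tamper-evident store

def log_agent_action(agent_id, action, reasoning):
    """Record an agent action with a unique agent identity and
    human-readable reasoning, supporting later forensic replay."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,    # agents get their own identities
        "action": action,
        "reasoning": reasoning,  # human-readable, not just a status code
    }
    AUDIT_LOG.append(entry)
    return entry

entry = log_agent_action(
    agent_id="agent:invoice-bot-01",
    action="initiate_invoice",
    reasoning="Monthly vendor invoice matched an approved purchase order.",
)

# Forensic replay: reconstruct the sequence of decisions in order.
for e in sorted(AUDIT_LOG, key=lambda e: e["timestamp"]):
    print(json.dumps({"agent": e["agent_id"], "action": e["action"]}))
```

The design choice worth noting is that the reasoning field is written at decision time, not reconstructed afterward — which is what makes the log usable as evidence rather than narrative.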

The Regulatory Vacuum Demands Corporate Responsibility

When asked whether the current regulatory environment is equipped to deal with these advances, McCall’s answer was unequivocal.

“No, there really isn’t anything that’s emerging [yet],” she said. “Most things out there are bespoke and patchwork.”

But that doesn’t mean companies are off the hook. If anything, McCall said she believes they have a greater responsibility to anticipate the compliance issues ahead.

“Because it’s a new thing, people think there are no regulations around it,” she said. “There are regulations — you just have to think about how they apply.”

From PCI-DSS to the Gramm-Leach-Bliley Act to cross-border data flows governed by GDPR and anti-money laundering statutes, McCall said the landscape is already filled with obligations. It’s just a matter of seeing them through a new lens.

“You trigger a lot of compliance and data privacy rules when you do that,” she said, referring to AI agents that access bank accounts or send international payments.

McCall said automation should never be mistaken for abdication of responsibility.

“Would I let a junior lawyer operate unsupervised in certain areas?” she asked. “No. You don’t let your AI do that either — unless it’s been audited, sandboxed, logged and tightly governed.”

She also flagged the importance of ensuring that AI systems do not quietly evolve into systems of opaque decision making, warning that “explainability must be designed in, not patched on.”

As companies pursue agentic AI, McCall’s insights offer a kind of unofficial field manual for the C-suite. For legal and compliance leaders especially, the time to take ownership of this transformation is now.

“The chief legal officer’s role becomes even more important to translate complex AI behavior and inform the organization about legal exposure, mitigation strategies and the ethical boundaries we must respect,” she said.

In an age of intelligent machines, McCall is making the case that intent, transparency and recoverability must remain core business values — because in the end, even when an agent acts, it’s still the company that must answer for it.

