
AI agents are beginning to make purchases, compare prices, place orders, and even manage subscriptions. This is typically referred to as agentic commerce: transactions that are initiated and executed by AI agents acting on behalf of a user, based on predefined mandates or permissions. The promise of agentic commerce is faster, more personalised shopping experiences.
For issuers and payment networks, this means potentially more transaction volume, but it also introduces a familiar question: how authorisation, liability, and dispute evidence should be interpreted when intent is delegated to software rather than expressed directly by a cardholder.
A new layer on familiar rails
Today, agentic transactions still rely on existing payment rails. Most use standard card payments, meaning the same risk, fraud, and chargeback frameworks continue to apply. That makes early adoption easier, since no new payment methods or infrastructure are required, but it also exposes a gap.
From the perspective of a payment network such as Visa or Mastercard, these transactions are currently indistinguishable from card-not-present ecommerce payments and are governed by existing authorisation and dispute rules.
This means that liability and evidence are still interpreted through a framework built for human intent. For example, if a cardholder’s AI agent orders ten washing machines instead of one, existing rules would still likely treat the transaction as authorised, because the cardholder delegated authority to the agent. In a dispute, this creates a grey area: was it unauthorised use, a poorly specified order, or misbehaviour by the agent?
Visa’s Trusted Agent Protocol
Visa has outlined plans for a Trusted Agent Protocol, reportedly being developed with partners such as Cloudflare and with commerce and payments providers including Adyen, Shopify, Stripe, and Microsoft.
The protocol sets a foundation for secure, transparent communication between AI agents and merchants, helping both sides verify that an agent is legitimate and acting on behalf of a real user.
The framework introduces several key data elements:
- Agent intent: confirming that the agent is acting with a genuine purchase request.
- Consumer recognition: linking the agent to a known or returning customer.
- Payment information: allowing agents to carry and present payment credentials securely.
Together, these create a cryptographically verifiable record of who approved what, when, and through which agent. This is essential for future mandate capture - the ability to prove that a cardholder explicitly authorised an agent to act on their behalf - and dispute resolution.
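To make that idea concrete, the sketch below signs a small mandate record with an Ed25519 key and verifies it afterwards. Everything here is an assumption for illustration: the field names (cardholder_id, agent_id, mandate_scope, expires_at), the JSON canonicalisation, and the choice of the Python cryptography library are ours, not the Trusted Agent Protocol’s actual data formats or signing scheme.

```python
# Illustrative sketch only: field names, structure, and signing scheme are
# assumptions for this article, not the Trusted Agent Protocol specification.
import json
from datetime import datetime, timedelta, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair held by the cardholder's wallet or agent platform (assumed setup).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A hypothetical mandate record: who approved what, when, through which agent.
mandate = {
    "cardholder_id": "chd_1234",
    "agent_id": "agent_shopping_v1",
    "mandate_scope": {"category": "home_appliances", "max_amount": 1200, "currency": "EUR"},
    "granted_at": datetime.now(timezone.utc).isoformat(),
    "expires_at": (datetime.now(timezone.utc) + timedelta(days=30)).isoformat(),
}

# Canonical serialisation so the signature covers a deterministic byte string.
payload = json.dumps(mandate, sort_keys=True, separators=(",", ":")).encode()
signature = private_key.sign(payload)

# Later, a merchant, network, or issuer can verify the record with the public key.
try:
    public_key.verify(signature, payload)
    print("Mandate signature valid: delegation can be evidenced.")
except InvalidSignature:
    print("Mandate signature invalid: delegation cannot be proven.")
```

In practice, a record like this would need to travel with the transaction, or be referenceable from it, so that an issuer could later show that a specific agent acted within an explicitly granted scope.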
Rather than replacing existing rails, Visa’s Trusted Agent Protocol builds on them. It adds a layer of trust and traceability on top of existing web infrastructure, and helps merchants distinguish trusted agents from malicious automation, while keeping checkout experiences smooth for customers.
Issuers in a transition period
For issuers, the practical impact will unfold gradually. The technical groundwork is there, but agent identity and mandate data won’t appear in clearing data or dispute evidence overnight. In the meantime, cardholders will experiment with AI assistants, and some will question purchases they didn’t expect their agent to make.
That will lead to more claims of unauthorised use, often with limited supporting evidence. In practice, many of these cases will sit uncomfortably between fraud and non-fraud dispute categories, with little agent-specific evidence available to support representment. Fraud systems will also need to adapt as agent behaviour introduces new transaction patterns. And because AI-related disputes lack consistent data, reviews of these cases will take longer and rely more heavily on the experience and judgment of dispute analysts.
Operationally, this means higher workloads for dispute and fraud teams before automation reaches the back office. Issuers may tighten risk controls, limit certain transaction types, or expand virtual card use to manage the uncertainty, while dispute teams are likely to spend more time both working such cases and helping customers understand how agent-driven purchases work.
Building readiness
Issuers can start by building awareness of the topic. Understanding how mandate capture may evolve, and how it could eventually support dispute evidence, is the first step. Following developments from Visa, Mastercard, and standards bodies such as EMVCo and OpenID will be essential as new data fields emerge. These bodies are likely to influence how agent identity, delegated authority, and transaction metadata are standardised across the payments ecosystem.
Fraud and dispute teams can begin updating playbooks for ambiguity, where an agent’s action sits somewhere between human error and automation. The more clearly teams can document and interpret those grey areas now, the easier it will be to integrate agent-specific evidence later.
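One practical way to start is to record agent-related signals in a consistent structure, even before any network-level data fields exist. The sketch below is a hypothetical case note; every field name and category is our own assumption about what might be useful to capture, not a scheme mandated by any network.

```python
# Hypothetical structure for documenting agent-related dispute cases; the
# fields and categories are assumptions, not a network-defined format.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class AgentInvolvement(Enum):
    NONE_CLAIMED = "no agent involvement claimed"
    WITHIN_MANDATE = "agent acted within the stated mandate"
    EXCEEDED_MANDATE = "agent exceeded the stated mandate"
    UNCLEAR = "agent involvement unclear or unverifiable"


@dataclass
class AgentDisputeNote:
    case_id: str
    cardholder_claims_agent_used: bool
    involvement: AgentInvolvement
    mandate_reference: Optional[str] = None   # e.g. a signed mandate ID, if one exists
    evidence_gaps: list[str] = field(default_factory=list)


# Example: a cardholder disputes ten washing machines ordered by their assistant.
note = AgentDisputeNote(
    case_id="DSP-2025-0042",
    cardholder_claims_agent_used=True,
    involvement=AgentInvolvement.UNCLEAR,
    evidence_gaps=["no agent identifier in transaction data", "no mandate record available"],
)
print(note.involvement.value)
```

Capturing even this much consistently should make it easier to map historical cases onto agent-specific evidence fields once the networks begin delivering them.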
A step toward intelligent payments
Agentic commerce adds a layer of intelligence and autonomy to payments. For issuers, the challenge is not whether this shift will happen but how quickly dispute processes, fraud models, and cardholder education can adapt.
Even when the buyer isn’t human, trust in the transaction still depends on the same fundamentals it always has: clear evidence, transparent mandates, and well-defined accountability.
As agentic commerce evolves, disputes will become less about whether a transaction happened and more about proving how it happened. Issuers will need systems that can handle ambiguity, interpret new forms of evidence, and adapt as agent identity and mandate data gradually enter the ecosystem. Our dispute management solution, Amiko, is designed for this new era of intelligent payments, helping issuers manage complex dispute scenarios, streamline operations, and stay ready as payments move beyond purely human intent. Learn more at https://rivero.tech/amiko.