
Mastercard launched Agent Pay and processed its first AI-initiated transaction in Q3 2025, calling agentic commerce a "significant paradigm shift". Visa responded with Intelligent Commerce, opening its network directly to the developers building AI shopping agents. McKinsey projects up to $1 trillion in US retail revenue orchestrated by AI agents by 2030, and $3–5 trillion globally.
On the front end, real progress is happening. The payment networks are registering and verifying trusted agents, introducing agentic tokens, using passkeys for authentication, and blocking malicious bots. Acquirers benefit from clearer liability boundaries, and issuers gain visibility into cardholder intent.
Commerce is shifting from human-driven clicks to agent-driven transactions. AI agents are beginning to compare prices, book travel, manage subscriptions, and execute purchases – all on behalf of a cardholder who may never see the checkout page.
The promise is compelling: faster transactions, hyper-personalised shopping, and payments at a scale the industry has never seen. But here's what nobody is saying loudly enough: the back office hasn't caught up. And so far, no one has a credible answer on how to fix that.
Chargebacks, disputes, liability proof, and authorisation evidence were all designed around the assumption that a human being made a deliberate decision. When an AI agent acts without a human in the loop, that assumption collapses, and the entire post-transaction framework needs to be rethought.
The consent problem nobody has solved
This isn't a hypothetical concern. Consider a simple real-world scenario: a cardholder tells their AI agent, "I want to go to Stockholm tomorrow". The agent evaluates routes, costs, and timing, then presents a recommendation. The cardholder taps confirm without reading the full itinerary. Three hours later, they realise they've booked a 14-hour overnight bus instead of the flight they assumed would be suggested.
Who is liable?
Under current card scheme rules, the answer is straightforward: the cardholder is responsible for what their agent did on their behalf. Mastercard Agent Pay and Visa Intelligent Commerce both operate on the principle that the cardholder authenticates the agent using passkeys, which keeps fraud liability with the cardholder rather than shifting it to the acquirer or merchant.
But that clarity dissolves the moment you move from fraud to dispute. A cardholder claiming misrepresentation – that the agent presented options in a confusing or incomplete way – sits in an entirely different grey zone. PSD3 is beginning to address "subliminal techniques" that could mislead agents, but the operational definitions remain undeveloped.
The legal infrastructure is years behind the commercial reality.
Beyond the checklist: proving participation, not just presence
The rules may stay the same, but the evidence required to prove cardholder participation is about to change fundamentally. Today, dispute teams rely on established data points: 3D Secure results, device fingerprinting, IP addresses, geolocation, and transaction metadata. These signals establish that a human being, in a specific location, using a known device, authorised and authenticated a transaction.
In future disputes, back-office teams will need to evaluate an entirely new layer of context (sketched as a data structure after this list):
- What did the cardholder actually ask the agent to do? The original intent or mandate, stored with the issuer, becomes the baseline for judging everything the agent did.
- What options did the agent present, and were they clear, complete, and fairly represented?
- Did the agent omit information that would have changed the cardholder's decision?
- Did the checkout flow use what PSD3 may define as "subliminal techniques" to mislead the agent or the cardholder – in other words, was the cardholder a victim of UX manipulation?
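To make the shift concrete, here is a minimal sketch of what an agentic dispute evidence record might look like as a data structure. Everything in it is an assumption for illustration: the field names, the IntentMandate and PresentedOption types, and the premise that this data reaches the issuer at all are not drawn from any published scheme specification.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PresentedOption:
    """One option the agent showed the cardholder (illustrative)."""
    description: str        # e.g. "Overnight bus, 14h, EUR 39"
    price_minor_units: int  # price in minor currency units
    was_selected: bool

@dataclass
class IntentMandate:
    """The cardholder's original instruction, stored with the issuer (hypothetical schema)."""
    raw_instruction: str           # "I want to go to Stockholm tomorrow"
    max_amount_minor_units: int    # spending cap the cardholder agreed to
    allowed_categories: list[str]  # e.g. ["air_travel", "rail"]
    expires_at: datetime

@dataclass
class AgenticDisputeEvidence:
    """The bundle a dispute analyst would evaluate (sketch)."""
    # Today's signals
    threeds_result: str  # 3D Secure outcome
    device_fingerprint: str
    ip_address: str
    # The new layer of context
    mandate: IntentMandate
    options_presented: list[PresentedOption]
    omitted_disclosures: list[str] = field(default_factory=list)
    manipulation_flags: list[str] = field(default_factory=list)  # candidate PSD3 "subliminal techniques"
```

The exact schema matters less than its shape: roughly half of these fields have no equivalent in today's dispute tooling.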
This goes far beyond the current checklists dispute operations teams use. It requires training, tooling, and process redesign – for both the front office handling cardholder inquiries and the back office managing representments and evidence gathering.
And this is not a technology problem waiting to happen. It is happening now, in the gap between the commercial rollout of agentic commerce and the operational maturity of the institutions processing those transactions.
We explored the early contours of this challenge in our previous post "Are you ready for disputes in the age of agentic commerce?", where we outlined how liability frameworks are beginning to fragment along the lines of agent certification, intent storage, and verification standards like Visa's Trusted Agent Protocol. The commercial deployment has accelerated since then.
For a deeper look at how practitioners are thinking through these challenges in real time, watch our on-demand webinar on the impact of agentic commerce on fraud and disputes.
Should we add friction back?
The payments industry has spent two decades removing friction from the customer journey. Fewer clicks, faster checkouts, seamless experiences. Agentic commerce takes this philosophy to its logical extreme – an AI handles everything, and the human just confirms.
But here's the irony: in agentic commerce, some friction might actually be protective.
We know from behavioural research, and from everyday life, that consumers accept without reading. Cookie banners, terms of service, software updates. The instinct to confirm and move on is deeply ingrained. When an AI agent presents a cardholder with a travel itinerary, a product recommendation, or an investment opportunity, the risk of uninformed consent is real and significant.
This is where the schemes are starting to think ahead. Mastercard is reportedly planning to introduce a form of scheme-carried liability for certified agents, but only if the back office can prove the agent deviated from the "intent" stored for the transaction. This is a meaningful development: the ability to capture, store, and interrogate intent data isn't a nice-to-have; it's becoming a liability differentiator.
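If intent compliance becomes a liability differentiator, back offices will need a repeatable way to test whether an agent stayed within its mandate. Below is a minimal sketch of such a check, reusing the hypothetical IntentMandate from the earlier sketch; the boundaries tested (amount cap, category, expiry) are illustrative assumptions, not scheme rules.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IntentMandate:  # as sketched earlier (hypothetical)
    raw_instruction: str
    max_amount_minor_units: int
    allowed_categories: list[str]
    expires_at: datetime

@dataclass
class Transaction:
    """The transaction the agent actually executed (illustrative)."""
    amount_minor_units: int
    merchant_category: str
    executed_at: datetime

def within_mandate(mandate: IntentMandate, txn: Transaction) -> tuple[bool, list[str]]:
    """Return (compliant, reasons): did the agent stay inside the stored intent?"""
    reasons: list[str] = []
    if txn.amount_minor_units > mandate.max_amount_minor_units:
        reasons.append("amount exceeds the cardholder's spending cap")
    if txn.merchant_category not in mandate.allowed_categories:
        reasons.append(f"category '{txn.merchant_category}' is outside the mandate")
    if txn.executed_at > mandate.expires_at:
        reasons.append("executed after the mandate expired")
    return (not reasons, reasons)
```

Note what this check cannot do: in the Stockholm scenario, a 14-hour bus could pass every boundary above, which is exactly why mandate compliance alone will not settle a misrepresentation dispute.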
For issuers, this means dispute evidence is about to get significantly more complex. Proving authentication and authorisation will no longer be a binary yes-or-no question. It will require interpreting mandates, evaluating agent behaviour, and assessing whether the cardholder's consent was genuinely informed.
The industry will need to find the right balance between seamless commerce and meaningful cardholder protection. And dispute operations will be on the front line of that tension.
You can’t hire your way out of this
The instinct when operational complexity grows is to add headcount: more dispute analysts, more compliance staff, and more training programmes. But this is the trap.
Manual operations scale linearly. Transaction volumes in an agentic world will scale exponentially. When AI agents are executing purchases around the clock, across every product category, for millions of cardholders simultaneously, the numbers simply don't work: either hiring erodes every margin gain that agentic commerce was supposed to deliver, or you slow growth to stay in control.
Here's what forward-thinking issuers should be doing today:
- Investing in dispute management infrastructure that can ingest and interpret new forms of evidence, including intent data, agent behaviour logs, and presentation records.
- Empowering operations teams with agentic co-pilot assistants to evaluate disputes using contextual data, consent records, and behavioural analysis rather than relying solely on authorisation data.
- Building flexible, automated workflows that can adapt as payment network rules, regulatory requirements, and agent certification standards evolve (see the sketch after this list).
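As one illustration of that last point, here is a minimal sketch of rule-driven dispute routing, where routing logic lives in data rather than code so it can change as scheme rules do. The rule conditions, evidence keys, and queue names are all invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RoutingRule:
    """One routing decision, kept as data so rules can change without redeploying code."""
    name: str
    applies: Callable[[dict], bool]  # predicate over the evidence bundle
    route_to: str                    # target work queue

# Illustrative rules; in production these would be loaded from configuration
# so they can track evolving scheme and regulatory requirements.
RULES = [
    RoutingRule(
        name="agent deviated from mandate",
        applies=lambda e: not e.get("within_mandate", True),
        route_to="scheme_liability_review",
    ),
    RoutingRule(
        name="possible UX manipulation",
        applies=lambda e: bool(e.get("manipulation_flags")),
        route_to="regulatory_review",
    ),
    RoutingRule(
        name="standard fraud claim",
        applies=lambda e: e.get("claim_type") == "fraud",
        route_to="fraud_queue",
    ),
]

def route_dispute(evidence: dict) -> str:
    """Send the dispute to the first matching queue, falling back to manual triage."""
    for rule in RULES:
        if rule.applies(evidence):
            return rule.route_to
    return "manual_triage"
```

The design choice worth noting is the separation: routing rules that can be versioned and audited independently of the evidence they evaluate.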
This is the lens we apply at Rivero with Amiko, our agentic dispute management platform. Amiko is purpose-built for this new era of intelligent payments, combining a 24/7 virtual agent with automation that captures the right data at intake, deflects invalid claims, and guides dispute teams through complex scenarios with intelligence.
As agentic commerce evolves, disputes will become less about whether a transaction happened and more about proving how it happened. That requires systems that can handle ambiguity, interpret new forms of evidence, and adapt as agent identity and mandate data gradually enter the ecosystem. Access the on-demand demo to see Amiko in action.