Fraud prevention company Forter, which processes data from over $1 trillion in transactions across its global merchant network, is warning that AI has fundamentally changed how e-commerce attacks are structured – shifting from obvious anomalies to attacks that move through payment systems inside valid accounts, using correct credentials and routine purchasing behavior as cover. The result is fraud that traditional detection tools are not designed to catch.
For small and midsize businesses, the shift carries a specific cost: the same lean operations that make SMBs attractive to fraudsters – fewer dedicated security staff, limited fraud tooling, thinner margins to absorb chargebacks – also make it harder to detect attacks that are deliberately engineered to look like normal customer activity.
What Is Actually Changing in Payment Fraud
Traditional fraud attempts target the point of sale with mismatched data, stolen card numbers, or obviously suspicious behavior. The new model targets identity first. Fraudsters use AI tools to execute account takeovers, build synthetic identities, and automate social engineering campaigns – then use the resulting access to place orders that appear, at every checkpoint, to be legitimate.
Dany Naigeboren, Senior Director of Risk at Forter, framed the shift directly:
“With two to three hours of work, anyone can become proficient in conducting fraud online.”
That accessibility is new. Until recently, attacks at this level of sophistication required organized fraud rings with significant engineering resources. AI has removed that barrier, allowing individual actors to operate with capabilities previously limited to those groups.
The practical consequence is that a transaction flagged for review may look identical to a genuine customer order: valid account, correct shipping address, prior purchase history, normal order value. AI-powered phishing and social engineering campaigns supply the raw material, harvesting credentials at scale to populate those accounts. Naigeboren noted that AI-generated phishing emails have become significantly harder to distinguish from legitimate communications, compounding the credential theft that feeds account takeover. The resulting erosion of trust in digital communications makes consumers more vulnerable to these tactics, which in turn creates more compromised accounts for attackers to exploit.
The Measurable Cost to Smaller Operators
Deloitte estimates that generative AI will enable $40 billion in fraud losses by 2027, up from $12.3 billion in 2023 – a 32% compound annual growth rate. Those figures are drawn from broader financial fraud reporting and are not SMB-specific, but the directional implication for smaller merchants is significant: the volume and sophistication of attacks will continue rising faster than most SMB security budgets.
Beyond direct financial loss, SMBs face chargeback liability, policy abuse, and what Naigeboren described as “friendly fraud” – legitimate-looking customers exploiting return policies, promotional codes, and dispute workflows. Determining intent at the transaction level, without the behavioral history or cross-platform signals that larger operators can assemble, is where smaller merchants are most exposed. Regulators, including the IRS, have flagged the rising volume of AI-assisted scams targeting small businesses, underscoring that the fraud exposure extends beyond payment systems into tax and financial identity theft.
False declines compound the problem. Systems calibrated to catch more fraud also reject more legitimate transactions – a tradeoff that larger merchants can absorb more easily than operators running on thin margins where every lost sale is material.
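The threshold tradeoff can be made concrete with a toy sketch. The risk scores and orders below are invented for illustration and do not reflect any vendor's actual model; the point is only that tightening the decline threshold catches more fraud while also rejecting more legitimate sales.

```python
# Illustrative sketch (hypothetical numbers, not any vendor's model): a fraud
# model emits a risk score per order, and the decline threshold trades caught
# fraud against falsely declined legitimate orders.

# (risk_score, is_fraud) pairs for a toy batch of orders
orders = [
    (0.05, False), (0.10, False), (0.20, False), (0.35, False),
    (0.45, False), (0.55, False), (0.60, True),  (0.75, True),
    (0.80, False), (0.90, True),
]

def outcomes(threshold):
    """Count fraud caught vs. legitimate orders falsely declined at a threshold."""
    caught = sum(1 for score, fraud in orders if fraud and score >= threshold)
    false_declines = sum(1 for score, fraud in orders if not fraud and score >= threshold)
    return caught, false_declines

# A lenient threshold misses fraud; a strict one blocks real customers.
lenient = outcomes(0.85)  # catches 1 fraud, declines 0 good orders
strict = outcomes(0.50)   # catches 3 frauds, declines 2 good orders
```

For a merchant on thin margins, those two falsely declined orders are the cost Naigeboren's tradeoff describes: every notch of added fraud protection is paid for in lost legitimate sales.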
What the Industry Is Doing – and What Merchants Can Do Now
Fraud detection vendors are shifting toward identity-centric architectures that evaluate behavior across multiple touchpoints – login, checkout, returns, promotions – rather than assessing each transaction in isolation. Forter’s platform, which the company says provides real-time approve-or-decline decisions drawing on its $1 trillion transaction dataset, is built on this model. Those results come from Forter’s internal reporting and have not been independently benchmarked. Mastercard has similarly deployed AI models designed to detect anomalous payment patterns at the network level, though enterprise-scale detection tools do not automatically extend their protections to small merchants operating on basic payment processor accounts.
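The architectural difference is simple to sketch. In the toy example below, every name, signal, and score is an illustrative assumption rather than Forter's or Mastercard's actual model: risk signals from multiple touchpoints are pooled per identity, so a clean-looking checkout inherits suspicion accumulated elsewhere on the same account.

```python
# A minimal sketch of identity-centric scoring: instead of evaluating one
# checkout event in isolation, signals from every touchpoint (login, checkout,
# returns, promotions) are pooled per identity. All names and scores here are
# illustrative assumptions, not any vendor's actual system.
from collections import defaultdict

# Hypothetical per-touchpoint risk signals, keyed by account identity
events = [
    ("acct_42", "login",    0.1),  # normal login
    ("acct_42", "promo",    0.7),  # unusual promo-code velocity
    ("acct_42", "checkout", 0.2),  # the checkout itself looks clean
    ("acct_7",  "checkout", 0.2),  # an identical clean-looking checkout
]

def identity_risk(events):
    """Pool touchpoint signals per identity; take the max as the identity's risk."""
    by_identity = defaultdict(list)
    for identity, touchpoint, score in events:
        by_identity[identity].append(score)
    return {identity: max(scores) for identity, scores in by_identity.items()}

risks = identity_risk(events)
# acct_42's clean checkout inherits the suspicious promo signal (0.7),
# while acct_7's identical checkout stays low-risk (0.2).
```

A transaction-siloed system would score both checkouts at 0.2; pooling across touchpoints is what separates them.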
Naigeboren was specific about what effective detection requires: “Traditional fraud detection systems are siloed. They look at one or two data points at a single touchpoint. Modern fraud prevention needs a holistic, identity-centric view over multiple touchpoints such as checkout, returns, promotions, and logins.”
For SMBs evaluating fraud tooling, that standard is a useful benchmark: does the current provider assess identity across touchpoints, or score each transaction in isolation?
Merchants with limited tooling budgets can take several concrete steps without enterprise-level investment: audit payment processor fraud settings and velocity rules, which are often configurable but left at defaults; review chargeback dispute workflows to ensure timely response within processor windows; evaluate whether their platform supports 3D Secure 2 (3DS2) authentication, which shifts some fraud liability back to card issuers; and scrutinize promotion and return policy exposure, since AI-assisted abuse of these systems is rising faster than transaction-level fraud in some categories. None of these steps closes the detection gap entirely – they reduce exposure at the edges where smaller operators have the most direct control.
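The velocity-rule audit mentioned above is the most mechanical of those steps, and a sketch makes the idea concrete. The parameter names and defaults below are illustrative assumptions, not any payment processor's actual API: the rule flags a card that places more than a set number of orders inside a sliding time window.

```python
# A hedged sketch of a velocity rule of the kind processors expose in fraud
# settings: flag a card placing more than `max_orders` within `window`.
# Parameter names and defaults are illustrative, not any processor's real API.
from datetime import datetime, timedelta

def exceeds_velocity(timestamps, max_orders=3, window=timedelta(hours=1)):
    """Return True if any sliding window holds more than `max_orders` orders."""
    ts = sorted(timestamps)
    for i, start in enumerate(ts):
        in_window = [t for t in ts[i:] if t - start <= window]
        if len(in_window) > max_orders:
            return True
    return False

t0 = datetime(2025, 1, 1, 12, 0)
burst = [t0 + timedelta(minutes=m) for m in (0, 5, 10, 15)]  # 4 orders in 15 min
spread = [t0 + timedelta(hours=h) for h in (0, 2, 4, 6)]     # 4 orders over 6 hours

# exceeds_velocity(burst) flags the burst; exceeds_velocity(spread) does not.
```

Left at permissive defaults, a rule like this catches nothing; the audit is about tightening `max_orders` and `window` to match what a merchant's real customers actually do.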
The central unresolved question is whether identity-based detection systems will scale down to SMB price points as fast as AI is lowering the barrier for attackers. The answer will depend partly on whether payment processors build these capabilities into baseline merchant accounts or reserve them for higher-tier customers.