Generative AI Has Ushered in a New Era of Fraud, Say Reports from Plaid and SEON
How the same models powering creativity are rewiring criminal economies and forcing the AI industry to redesign trust
A caller on the other end of a verification video smiles in exactly the same way as a dozen other applicants did that week, and the onboarding team flips through a checklist, none the wiser. Somewhere in Eastern Europe, a script fed to a generative model has produced thousands of plausible ID photos, matching names, and short video clips that pass casual inspection. The moment is quiet, procedural, and profitable for someone who knows which key to press.
The mainstream interpretation is familiar and reassuring: new detection tools will chase down deepfakes and stop fraud in its tracks. That is true to an extent, but the overlooked reality is more structural. Fraudsters are not merely swapping tools; they are redesigning entire attack lifecycles around generative models that scale identity fabrication, social engineering, and account takeovers with low overhead and high speed. This requires rethinking identity, not just improving a single detector.
Why the industry is suddenly rewriting its rulebook
Generative models let attackers create synthetic content at consumer prices and enterprise scale, collapsing months of reconnaissance into a few minutes. Fraud teams that rely on point-in-time checks and static rules are losing ground because synthetic identities can now be created with convincing transaction histories and matched multimedia. Plaid has cataloged how AI reshapes fraud signals and why continuous, behavior-based assurance is becoming essential. (plaid.com)
The players racing to respond and why now matters
Legacy identity vendors, nimble startups, and platform providers all face the same pressure: speed to detect without suffocating legitimate growth. SEON has invested in transparent AI tooling and similarity ranking to help investigators connect dots across profiles and networks faster, because manual review cannot keep up with the output of a prompt. (seon.io)
A snapshot of how attacks have changed, with numbers that bite
Biometric attacks grew sharply in 2024 and 2025 as deepfakes and presentation attacks multiplied, with some firms reporting liveness failures rising by roughly 38 percent in their datasets. Fraud attempts that use generated faces and fake documents now represent a nontrivial share of identity attacks, forcing fintechs to add new detection layers. The trend was highlighted in investigative reporting drawing on Plaid's identity telemetry and case studies. (fortune.com)
Fraud losses are not just a headline number. Regulators and industry analysts warn that the misuse of generative AI could push fraud-related losses into the tens of billions of dollars within a few years if defenses do not scale. Wall Street and regulatory reporting have flagged GenAI as an accelerating multiplier of existing scams. (wsj.com)
Fraud is no longer a matter of better rules; it is now a contest between synthetic economies and consortium level intelligence.
How fraud ecosystems exploit model economics
Fraud rings tag content, iterate prompts, and test verifications in automated loops until a workflow yields consistent approvals. The marginal cost of creating another convincing profile approaches zero once templates and prompt libraries exist, and that makes long-term, low-friction strategies profitable. Vendors that provide cross-platform signals and consortium intelligence are suddenly strategic assets rather than optional tools. This is why consortium defense models and telemetry sharing are getting more board-level attention; they raise the cost of imposture by making identity provenance visible across apps. (plaid.com)
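Consortium telemetry sharing only works if members can match signals about the same identifier without exposing raw customer data to each other. A minimal sketch of one common approach, keyed pseudonymization; the shared key and the normalization rule here are illustrative assumptions, not any vendor's actual scheme:

```python
import hashlib
import hmac

# Hypothetical secret agreed among consortium members out of band.
CONSORTIUM_KEY = b"example-shared-secret"

def pseudonymize(identifier: str) -> str:
    """Keyed hash of a normalized identifier (email, device fingerprint).

    Members exchange these tokens instead of raw PII: matching tokens mean
    two members saw the same identifier, but the token alone reveals nothing.
    """
    normalized = identifier.strip().lower()
    return hmac.new(CONSORTIUM_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Two members report the same device fingerprint with different casing;
# the tokens still match, and the raw value is never shared.
token_a = pseudonymize("device-4f2a")
token_b = pseudonymize("  Device-4F2A")
assert token_a == token_b
```

Because the hash is keyed, an outsider who obtains the tokens cannot brute-force them without the consortium key, which is what distinguishes this from plain hashing.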
A dry aside for the long suffering product manager: nothing clarifies vendor ROI like discovering a synthetic identity ring has been living rent free in your loan book.
Concrete scenarios for business owners, with real math
A mid-market neobank onboarding 10,000 new accounts per month might historically have accepted 0.5 percent fraudulent registrations. If generative AI raises the effective fraud rate to 1.5 percent, that is 150 fraudulent accounts per month instead of 50. At an average loss of 2,000 dollars per account when accounts are used for cash-out and money laundering, the monthly loss jumps from 100,000 dollars to 300,000 dollars. Investing in layered signals and consortium intelligence that brings the rate back down to 0.6 percent saves the firm 180,000 dollars per month, paying back a modest fraud platform integration within weeks.
A second scenario: adding an adaptive liveness and device signal costs roughly 0.50 dollars per onboarding but reduces chargebacks and account takeovers downstream. Multiply that by 10,000 signups, about 5,000 dollars per month, and compare it to the six-figure monthly losses above; the math is persuasive even to CFOs allergic to technical nouns.
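The arithmetic in both scenarios can be checked in a few lines. The rates and dollar figures are the article's illustrative assumptions, not industry benchmarks:

```python
def monthly_fraud_loss(signups: int, fraud_rate: float, loss_per_account: float) -> float:
    """Expected monthly loss = volume x fraud rate x average loss per account."""
    return signups * fraud_rate * loss_per_account

SIGNUPS = 10_000
LOSS_PER_ACCOUNT = 2_000  # dollars, cash-out and laundering scenario

baseline  = monthly_fraud_loss(SIGNUPS, 0.005, LOSS_PER_ACCOUNT)  # historical 0.5%
genai_era = monthly_fraud_loss(SIGNUPS, 0.015, LOSS_PER_ACCOUNT)  # GenAI-driven 1.5%
mitigated = monthly_fraud_loss(SIGNUPS, 0.006, LOSS_PER_ACCOUNT)  # after layered signals

signal_cost = SIGNUPS * 0.50  # adaptive liveness + device signal at $0.50 per onboarding
net_monthly_saving = genai_era - mitigated - signal_cost

print(baseline, genai_era, mitigated, signal_cost, net_monthly_saving)
```

Even after subtracting the per-onboarding signal cost, the net saving remains well into six figures per month under these assumptions, which is the whole CFO argument in one expression.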
The cost nobody is calculating
False positives are expensive in conversion terms, and many small teams will overcorrect by choking new-user flow. The hidden cost is growth forgone. Firms that over-index on heavy friction lose lifetime value and market share to competitors who accept slightly more risk. The smarter play is dynamic risk tiering that adjusts challenges based on behavioral context and shared network signals. Yes, that requires engineering work and contractual trust between companies, but the alternative is a sustained economic leak. Sometimes regulatory compliance looks like a speed bump; sometimes it looks like an ocean liner, and no amount of fast swimming helps.
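Dynamic risk tiering can be as simple as a scored escalation ladder: cheap signals feed a score, and friction is added only where the score warrants it. A hypothetical sketch; the signal names, weights, and thresholds are made up to show the shape of the idea, not a production policy:

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    device_seen_before: bool   # device fingerprint known to this platform
    signups_from_network: int  # recent signups sharing this network/ASN
    consortium_flagged: bool   # identifier flagged by a consortium partner

def risk_tier(s: OnboardingSignals) -> str:
    """Map signals to a friction level, escalating only when risk accumulates."""
    score = 0
    if not s.device_seen_before:
        score += 1
    if s.signups_from_network > 5:  # velocity suggests automated registration
        score += 2
    if s.consortium_flagged:
        score += 3
    if score >= 3:
        return "manual_review"      # hold the account, full verification
    if score >= 1:
        return "step_up_liveness"   # adaptive challenge, slight friction
    return "frictionless"           # let the good user straight through

# A known device on a quiet network sails through; a flagged one does not.
print(risk_tier(OnboardingSignals(True, 0, False)))   # frictionless
print(risk_tier(OnboardingSignals(False, 6, True)))   # manual_review
```

The design point is that most users hit the frictionless path, so conversion is preserved while the expensive challenges concentrate on the small high-risk tail.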
Risks and unanswered questions that should keep executives awake
Attribution remains thorny. When a synthetic identity is used across multiple platforms and jurisdictions, who is responsible for remediation and evidence sharing? Data privacy rules and competitive tensions limit how much telemetry can be shared, and that will create gaps attackers exploit. There is also a model risk for defenders: relying on AI detectors without transparent explanations amplifies audit and compliance problems. SEON’s emphasis on explainability reflects that trade off between automation and accountability. (seon.io)
Another issue is adversarial escalation. As detectors improve, attackers will adopt techniques to poison detection signals or generate content tailored to known defenses. That cat-and-mouse dynamic will favor organizations that can iterate at model speed.
What regulators and standards need to consider now
Policy makers must balance consumer privacy with the need for cross platform signal sharing to disrupt organized rings. Regulators should encourage standardized provenance metadata for identity documents and consider safe harbor mechanisms for approved consortia that share anonymized risk signals. Industry led standards will be faster than global regulation, but both are necessary to create durable deterrents.
Closing with a practical final thought
The AI industry must stop treating fraud as an engineering nuisance and start treating it as an economic externality that requires coordinated technical and commercial responses.
Key Takeaways
- Generative models have lowered the cost of creating synthetic identities, forcing a shift from point-in-time checks to continuous, behavior-based assurance.
- Shared telemetry and consortium intelligence are now critical defensive assets that can dramatically reduce fraud losses.
- Small increases in fraud success rates quickly translate into large dollar losses, making layered defenses financially sensible for growth focused firms.
- Explainable AI and adaptive, risk tiered onboarding reduce both fraud and customer friction when implemented together.
Frequently Asked Questions
How does generative AI make identity fraud easier for attackers?
Generative AI can synthesize realistic photos, videos, and supporting documents at scale, enabling attackers to create plausible identities quickly. That reduces the time and expertise required to mount campaigns that previously took months.
What immediate defenses should a fintech deploy to lower risk?
Start with continuous behavioral monitoring, device and network signals, and adaptive liveness checks tied to risk scoring. Sharing anonymized fraud signals across trusted partners also raises the cost for attackers.
Will adding more friction to onboarding stop the problem?
Excessive friction reduces fraud but also harms growth and user experience; adaptive challenge flows that escalate only for higher-risk cases offer a better balance. Good defenses are discriminative, not indiscriminate.
Can small companies afford these protections?
Yes. The arithmetic of loss versus prevention favors modest investments in layered signals and third party telemetry for most firms processing thousands of accounts. Outsourced solutions often scale cost linearly with volume and can be cheaper than dealing with fraud aftermath.
How long before attackers outpace defensive AI?
There is no single deadline. Attackers and defenders iterate continuously, but firms that invest in consortium intelligence, explainable models, and rapid deployment cycles can stay ahead for meaningful business horizons.
Related Coverage
Explore how payment rails like instant settlement change the fraud calculus and why platform providers are becoming de facto national security actors. Also read about the rise of explainable machine learning in compliance workflows and the startups racing to make deepfake provenance auditable.
SOURCES:
- Plaid: The new identity crisis: Fraud in the AI era (https://plaid.com/new-identity-crisis-ai-fraud-report/)
- SEON: SEON Expands Advanced AI Capabilities, Including Similarity Ranking that Transform Raw Data Into Instant Action for Fraud & AML Teams (https://seon.io/resources/news/ai-2025/)
- Fortune: Biometric-based fraud attempts, including those using AI, are up nearly 40 percent this year (https://fortune.com/2024/04/25/ai-deepfakes-fraud-plaid-scammers-kyc/)
- PYMNTS: Plaid Updates Identity Verification to Combat GenAI Fraud (https://www.pymnts.com/fraud-prevention/2025/plaid-updates-identity-verification-product-to-combat-generative-ai-powered-fraud/)
- Wall Street Journal: GenAI Increasingly Powering Scams, Wall Street Watchdog Warns (https://www.wsj.com/articles/genai-increasingly-powering-scams-wall-street-watchdog-warns-a6592d54)