
Riskified Fortifies Retail Against AI-Powered Fraud With New Strategy Builder

The AI Shopping Revolution Comes With Hidden Risks

Walk into any digital storefront today, and you're increasingly likely to be greeted not by human staff, but by sophisticated AI shopping assistants. These virtual helpers promise personalized service at scale - but they've unwittingly opened Pandora's box for fraudsters.

"We're seeing risk levels from LLM traffic that dwarf traditional threats," explains Assaf Feldman, CTO of fraud prevention leader Riskified. "In some sectors, it's 2.3 times riskier than Google search traffic."

Building Digital Trust in the Age of AI Agents

Riskified's solution comes in two powerful upgrades:

1. AI Agent Identity Signals: Drawing on its vast merchant network, this feature acts like a digital bouncer, instantly verifying whether an AI agent qualifies for sensitive actions, such as instant refunds, during conversations.

2. Policy Builder Tool: Merchants now wield surgical precision against specific threats - whether it's blocking return abuse rings or preventing promotion exploitation - all through an intuitive interface.
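Riskified hasn't published the Policy Builder's internals, but tools like this are typically rule engines: each policy pairs a condition on transaction signals with an action. Here's a minimal sketch of that pattern; all names, signals, and thresholds are hypothetical illustrations, not Riskified's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Transaction:
    """Hypothetical signals a merchant policy might inspect."""
    agent_verified: bool   # did the AI agent pass identity verification?
    returns_last_90d: int  # returns this customer filed recently
    promo_uses: int        # times this promotion code was redeemed

@dataclass
class Policy:
    name: str
    condition: Callable[[Transaction], bool]
    action: str  # e.g. "block", "review", "allow"

def evaluate(policies: list, txn: Transaction) -> Tuple[Optional[str], str]:
    """Return the name and action of the first matching policy,
    defaulting to 'allow' when no rule fires."""
    for p in policies:
        if p.condition(txn):
            return p.name, p.action
    return None, "allow"

# Example rules mirroring the threats named above: unverified agents,
# return-abuse rings, and promotion exploitation.
policies = [
    Policy("unverified-agent", lambda t: not t.agent_verified, "block"),
    Policy("return-abuse", lambda t: t.returns_last_90d > 5, "review"),
    Policy("promo-abuse", lambda t: t.promo_uses > 3, "block"),
]

print(evaluate(policies, Transaction(agent_verified=True,
                                     returns_last_90d=7, promo_uses=0)))
# → ('return-abuse', 'review')
```

The appeal of this shape is that a merchant can add or tune rules declaratively, without touching the surrounding fraud-detection pipeline.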

"We're not just building walls," Feldman emphasizes. "We're creating the trust layer that makes meaningful AI commerce possible."

The company isn't going it alone. Their partnership with HUMAN Security combines cutting-edge AgenticTrust technology with decades of fraud prevention expertise.

Why This Matters Now

The timing couldn't be more critical. As retailers race to implement conversational commerce:

  • Fraud groups automate scams at unprecedented scale
  • Traditional security measures fail against evolving tactics
  • Consumer trust hangs in the balance

Riskified's approach offers merchants something rare: confidence to fully embrace AI's potential without becoming victims of its dark side.

Key Points:

  • AI commerce growth brings sophisticated new fraud vectors
  • Real-time identity verification prevents abuse during customer interactions
  • Customizable defense policies let merchants target specific threats
  • Strategic partnership with HUMAN Security strengthens ecosystem defenses

