The Green Sheet

Tuesday, October 1, 2024

AU10TIX's Ofer Friedman debunks ID fraud prevention myths

Ofer Friedman, chief business development officer at AU10TIX, recently contacted The Green Sheet to share common myths and misperceptions associated with ID fraud prevention. Following is a Q&A that resulted.

What are some common myths or misconceptions about identity fraud prevention that you've encountered in the industry?

  1. Faked ID documents can be detected by back-office experts: The ability of human "experts" to detect identity fraud has been declining rapidly. The days of "cut-paste" fraud are almost gone, especially where professionals are concerned. Perfect-looking templates are a few clicks away and inexpensive, and image editing tools are readily accessible and ever easier to use.

    Generative AI is already enabling mass production of near-perfect fakes in response to a clever prompt. As fraud becomes increasingly digital, the ability of humans to detect manipulations declines -- especially when professional fraud is concerned. Agent Smith in "The Matrix" was prophetically right: "Never send a human to do a machine's job."

  2. Identity fraud can all be detected by AI-enhanced analysis of ID documents and selfies submitted by customers: Case-level fraud detection, namely the analysis of ID document images and selfies submitted by customers, is finding it increasingly difficult to handle AI-enhanced fraud. Alarmingly, the market still relies largely on this single line of defense to detect fraud, which is improving ("thanks" to AI) at a much faster pace than image-based detection capabilities. A double-layered identity fraud detection paradigm is becoming a must.

    Fraud-enabling AI is outpacing fraud-preventing AI at an unprecedented pace because the technology is evolving from enabling manipulation to enabling outright generation. Anyone not employing traffic-level detection in conjunction with case-level detection is only protecting against amateur fraud, not professional fraud.

  3. Fraudsters mostly use made-up personal details or details mixed from two people to evade verification: Data verification is a basic regulatory requirement, so why would a fraudster make up data that can fail verification when genuine personal data is readily available in social media profiles? Furthermore, the darknet hosts more sets of personal data than there are inhabitants on earth (some are duplicates).

    Most fraudsters are motivated by ease and anonymity. Relying on personal data verification per se is becoming ever more precarious.

  4. Fraudsters will not be able to defraud encrypted digital/mobile ID credentials: Inarguably, encrypted digital credentials (digital IDs, mobile driver's licenses, etc.) are far more secure than paper or plastic ones. When coupled with biometrics, they are very hard to compromise. It is said that only quantum computers will be able to breach such keys in a reasonable time, and quantum-resistant encryption is already being deployed.

    But who says fraudsters will continue with the modes of operation we see today? The next threat may come not from credentials but from communications. Deepfaked CEOs have already enabled withdrawals in the millions. What is to stop fraudsters from deepfaking you and conducting a video call requesting a reset of your access credentials?

  5. Deepfakes can be safely detected with AI image analysis: Various claims are boldly made in the market promising the detection of gen-AI impersonations. However, when these detection tools are professionally tested, their limitations surface rapidly. In principle, if the gen-AI methodologies are known, it stands to reason that someone will find a plausible way to detect their output. But the target is moving fast.

    We are already seeing randomization introduced into deepfaked document and face images. Images, as opposed to large language model (LLM) texts and voices, do not have to "make sense" when responding to questions or instructions. And the quality of images and videos is rapidly becoming so convincing that the telltale blur areas, distortions and other artifacts are becoming ever rarer.

    Since the target is moving, it makes sense to adopt the strategies used in cyber risk detection and apply them to gen-AI impersonation detection. Cyber defense long ago moved beyond templating known viruses and attacks and ventured into the search for anomalies that are not pre-templated. Gen-AI output is produced by particular engines, each doing what it does differently. That "algorithmic fingerprint" is one of the promising detection methodologies yet to be perfected.

You mentioned that features like holograms and microprints are often seen as proof of an ID's authenticity. What are the limitations of relying on these features, and how do fraudsters bypass them?

These features are effective for verifying the authenticity of identification documents; however, their practical application is limited, particularly outside of controlled environments like airports equipped with document readers. Both holograms and microprints, along with various other security measures, were designed to be detected using professional scanners that utilize advanced illumination techniques and coaxial lighting.

In contrast, the typical scenario today involves customers capturing their IDs and selfies under diverse and often suboptimal conditions. Experience indicates that the quality of these images falls short of enabling reliable detection in the vast majority of cases, making it easy for fraudsters to fake them.

Deepfakes are a growing concern in identity fraud. What makes spotting deepfakes more difficult than simply looking for inconsistent reflections or jerky head movements?

Five years ago, this may have been feasible. However, today, unless the fraudster has used a very inexpensive tool, identifying deepfakes is incredibly challenging. And this technology is getting much more powerful; deepfakes will soon be undetectable by observation.

Certain indicators, such as "jerky head movements," may have assisted in identifying earlier versions of real-time deepfakes -- and may still do, depending on the quality of the technology employed. However, the likelihood of customers being asked to engage in such detection methods is minimal.

Politically exposed persons and sanctions checks are often cited as key in preventing money laundering. Can you explain why these alone may not be sufficient to stop identity fraud, and what more robust measures should be in place?

Politically Exposed Persons (PEPs) and sanctions are indeed valuable tools for flagging risk based on verified data. However, a critical question arises regarding the extent to which the available data encompasses all potentially relevant risk cases.

Currently, this coverage is far from comprehensive. If financial institutions, law enforcement agencies and government entities were to make their knowledge bases accessible for screening purposes, obviously in a controlled manner that preserves privacy, the efficiency of risk assessment would increase significantly. In summary, while PEP and sanctions screening is undoubtedly robust, it remains only partially effective due to incomplete data availability.

What role does technology, like AI and machine learning, play in debunking these myths and helping to accurately detect and prevent ID fraud?

AI, and machine learning in particular, plays an increasing role in identity fraud prevention, primarily in the analysis of photos and biometrics. AI's big plus over human examination is the ability to detect manipulations that are not visible to the human eye, or as we call them, "digital manipulations and generative artifacts." Its big plus over AML screening and data verification is the ability to "connect the dots" between flags that haven't been pre-identified as related.

AI is a very effective tool for discovering anomalies and relationships, so it depends less on the genius who may or may not spot them. AI also helps beef up fraud discovery by adding collateral factors such as device flags and digital/social footprints. Just to set the record straight, AI as a discovery or detection tool will not always be accurate, since it still relies on learning (hence "machine learning"), and a "complete," representative sample of reality is never available.

But who says that AI will always be learning from samples? Isn't AI about artificial intelligence, and intelligence is about figuring out?

For businesses looking to protect themselves against identity fraud, what are the most effective prevention methods that go beyond the traditional myths?

Organizations stand to gain significantly by approaching identity verification and authentication—specifically onboarding and access management—analogously to their strategies for cyber defense. While early AI manipulations may be detectable through visual or auditory means, the advancement of AI, particularly generative AI, necessitates the implementation of robust automation.

It is essential to accept that reliance on human senses for detection is increasingly unreliable; therefore, detection must transition to a digital framework. This shift requires organizations to adopt a dual-layered AI attack detection strategy, encompassing both case-level and traffic-level analyses, and to critically evaluate their detection methodologies.

Solely relying on AI that distinguishes between large datasets of fake and real images will not provide a long-term solution. Organizations can draw valuable lessons from the evolution of cyberattack detection to address generative AI-powered impersonation attacks more effectively.


