The Trust Gap: How AI Is Reshaping Gen-Z’s Dating Habits
Barclays has just released a new report giving valuable insight into the Gen-Z dating landscape.
According to the research, the surge in AI-powered fake bots, the rising risk of romance fraud, and a perceived loss of human connection mean that Gen Z users are looking for different experiences than previous generations did.
While the use of AI platforms such as ChatGPT is at an all-time high (over half of Gen Z now use AI tools in place of Google and to assist with basic tasks at work), attitudes towards AI are significantly different when it comes to finding that special someone.
Understandably, Gen Z are concerned about how AI is being used on these platforms. Recent developments in technology have heightened fears that tools such as voice cloning, fake images and AI-generated video could make scams even more realistic, and even harder to detect.
But beyond fraud, there is another factor at play: dating fatigue. As the figures from the Barclays research show, Gen Z in particular is less interested in an endless volume of swipes that lead nowhere and more focused on finding genuine human connection, with 56% of Gen Z singles prioritising meeting a partner in person.
So, what is the reality behind an online dating profile, and will AI end up being more of a hindrance than a help?
From perfecting photos and polishing profiles to tools such as Rizz and Wingman that help craft the perfect DM, at what point are we forming a connection with a robotised version of reality rather than the real human on the other end?
56% of Gen Z prioritise in-person connection over online matching due to AI-related distrust.
Many dating app users feel understandably concerned about the role AI may play in first impressions and developing conversations. It is already challenging enough to meet the right person, and many matches fail to progress to DMs at all. After investing time and energy into finding a promising connection, the thought that the person on the other side may be using AI to guide the conversation, enhance their appearance, or craft a personality they would not present offline is bound to erode trust.
66 per cent of UK adults say AI tools are making online dating scams more difficult to detect, with 53 per cent concerned about voice or image impersonation.
As AI tools become more sophisticated, it becomes easier for scammers to create convincing personal profiles and mimic real-life interactions. Spotting the difference between human and robot becomes even more difficult.
The result? Anxiety, distrust and resistance to openness.
More users leave apps frustrated, or approach new matches defensively, which starts interactions off on entirely the wrong foot.
And when someone experiences a scam, the loss is not just financial.
It is trust. Self-belief. Dignity. And often, the willingness to try again.
For many, dating is already layered with vulnerability. When that vulnerability is reinforced by fear of deception, it deepens the challenge for singles who may already struggle due to time constraints, limited social circles, or lack of opportunity to meet new people organically. Not to mention that openness and vulnerability are essential to creating a true connection with another person.
Barclays data shows one in five UK adults (18 per cent) have been targeted by a romance scam, or know someone who has. Two-fifths (40 per cent) of those personally targeted lost money as a result.
Yet the reality is that dating apps are not going anywhere.
The opportunity they have created for people to find meaningful connection beyond their immediate day-to-day interactions has transformed modern relationships, with over 40% of people now meeting their partners online.
So how do we prevent this era of mistrust from undermining that progress?
The better question may be: what have platforms already done, how is it working, and what comes next?
Since the rise of scams and fraudulent activity, many apps within the ODDA membership have collaborated to share insights, set higher standards, and work collectively to address the issue. From enhanced verification processes to improved reporting mechanisms, safety is no longer an afterthought. Instead, it is now built into product design from the outset.
Rob Kennedy, Founder of Swept Dating, says:
There’s a meaningful difference between educating users about scams and preventing scammers from operating in the first place. The apps earning long-term trust are those building proactive detection systems that identify suspicious behaviour before users are harmed. Defensive technology should quietly reduce risk in the background rather than place the burden entirely on the user.
84 per cent want tech companies to do more to stop scams at source.
With technology evolving rapidly, dating platforms have had to work exceptionally hard to stay ahead of bad actors. And whilst these efforts are still in their early days, new tools designed to detect suspicious behaviour, prevent fake-profile creation, and remove harmful content are beginning to show measurable impact.
Transparency data from Bumble in Australia demonstrates how effective these systems can be, with fake profiles and unsolicited explicit images removed swiftly, and preventative systems reducing fraudulent accounts before they go live.
Gilbert Hill, External DPO at Feeld and Trust &amp; Privacy Expert, says:
Much of the challenge apps face to win Gen Z trust and take on scammers is in educating people on risks and how they can take action. Feeld's AI support agents will answer questions on fraud in real time, and "Are you sure?" prompts from Match Group when a user tries to go off-platform or share financial information are the natural evolution of this. Dating platforms are leveraging their investment in Online Safety Act compliance to verify users, so people can be sure those they share video and voice messages with are legit. Less high-tech, but effective, is intelligence sharing among platforms and with crypto exchanges to shut down scam accounts and freeze their funds.
Without trust in our digital environments, users cannot feel safe enough to embrace the openness and vulnerability that drive real human connection.
The task now is not only to continue building better tools, but to communicate clearly how they work. Users need greater visibility into the protections in place, and guidance on how to use safety features effectively to protect themselves and others.
As trust is rebuilt, comfort will grow, just as it has with AI in other areas of daily life.
AI can be a helpful wingman in the online dating world, much like a friend offering encouragement in a bar. But there is a difference between lending a helping hand and doing the work for someone entirely.
Authenticity still matters.
After all, a connection is unlikely to flourish offline if the personality presented online was largely engineered by a robot. Especially if you use significantly fewer em-dashes in conversation than you do in text.