Tinder, which is owned by Match Group, recently said it was introducing a feature that lets users block someone’s profile as soon as it comes up on the app. Previously, Tinder members could only block someone after there was a match and one party subsequently filed a report. Now, blocking can happen right away. Tinder says this is an “easy way to avoid seeing a boss or an ex” on the app; it’s also a mechanism for blocking malicious accounts before there’s even a chance of swiping right. Another new Tinder feature, Long Press Reporting, speeds up the process for filing complaints: app users can simply press on an offensive or shady message and report the bad behavior from there.

Tinder’s head of trust and safety product development, Rory Kozoll, says the new features are part of an initiative to keep users safe and make the experience “more comfortable for people.” But Kozoll also acknowledges that of the various challenges Tinder faces as a platform, spam and scams are the largest. “When we measure things that happen on our platform, one way is by volume and one is by impact, and we really focus on impact,” Kozoll says. “We see a lot more spam, but if a long con, a scam, is successful, it’s much more harmful than spam is.”

Tinder is not the only app emphasizing trust and safety in concert with the world’s biggest manufactured love holiday. (One interesting fact: Tinder says the busiest day of the year for the app is not actually Valentine’s Day but the first Sunday after the new year, affectionately known as Dating Sunday.)
Earlier this month, Hinge sent out an email to users offering tips for safer dating, adding that the app “wants you to feel excited to meet new people, not worried about romance scams.” Tips include weeding out people who only want to text and not meet in person; avoiding those who say they desperately need money or have a get-rich scheme; and looking for a verified selfie mark, a relatively new feature on Hinge.

Alec Booker, a spokesperson for Match Group’s Tinder, says these messages to users are part of a broader company educational campaign to “remind daters of the dangers of romance scams and how they can spot and protect themselves from fraudsters.” Tinder, Match, Meetic, and Plenty of Fish, all dating apps within the same group, are also part of the campaign, alongside Hinge.

Romance scams are a growing problem, with the US Federal Trade Commission calling our prolific use of social media apps and the rise of cryptocurrencies “a combustible combination for fraud.” Over the past four years, the FTC has recorded a steady rise in romance scam losses: from $493 million in 2019 to $730 million the following year to over $1.3 billion per year in 2021 and 2022. The Commission notes that because the vast majority of scams aren’t even reported to the government, “these figures reflect just a small fraction of the public harm.”

Since the inception of dating apps—really, since the inception of dating—scammers have found ways to exploit people’s vulnerabilities and capture their attention with a legitimate-sounding story or just the right amount of social engineering. But to Kozoll’s point, scams have evolved from quick hits—here, click on this link—to long cons that are now often referred to as pig-butchering scams. Steinbach says he advises consumers, whether on a banking app or a dating app, to approach certain interactions with a healthy amount of skepticism. “We have a catchphrase here: Don’t take the call, make the call,” Steinbach says.
“Most fraudsters, no matter how they’re putting it together, are reaching out to you in an unsolicited way.” Be honest with yourself; if someone seems too good to be true, they probably are. And keep conversations on-platform—in this case, on the dating app—until real trust has been established. According to the FTC, about 40 percent of romance scam loss reports with “detailed narratives” (at least 2,000 characters in length) mention moving the conversation to WhatsApp, Google Chat, or Telegram.

Dating app companies have responded to the uptick in scams by rolling out both manual tools and AI-powered ones engineered to spot a potential problem. Several of Match Group’s apps now use photo or video verification features that encourage users to capture images of themselves directly within the app. These are then run through machine-learning tools to try to determine the validity of the account, rather than letting someone upload a previously captured photo that might be stripped of its telling metadata. (A WIRED report on dating app scams from October 2022 pointed out that at the time, Hinge did not have this verification feature, though Tinder did.)

For an app like Grindr, which serves predominantly men in the LGBTQ community, the tension between privacy and safety is greater than it might be on other apps, says Alice Hunsberger, who is vice president of customer experience at Grindr and whose role includes overseeing trust and safety. “We don’t require a face photo of every person on their public profile because a lot of people don’t feel comfortable having a photo of themselves publicly on the internet associated with an LGBTQ app,” Hunsberger says.
“This is especially important for people in countries that aren’t always as accepting of LGBTQ people or where it’s even illegal to be a part of the community.”

Hunsberger says that for large-scale bot scams, the app uses machine learning to process metadata at the point of sign-up, relies on SMS phone verification, and then tries to spot patterns of people using the app to send messages more quickly than a real human might. When users do upload photos, Grindr can spot when the same photo is being used over and over again across different accounts. And it encourages people to use video chat within the app itself as a way to avoid catfishing or pig-butchering scams.

Kozoll, from Tinder, says that some of the company’s “most sophisticated work” is in machine learning, though he declined to share details on how those tools work, since bad actors could use the information to skirt the systems. “As soon as someone registers, we’re trying to understand, ‘Is this a real person? And are they a person with good intentions?’”

Ultimately, though, AI can only do so much. Humans are both the scammers and the weak link on the other side of the scam, Steinbach says. “In my mind, it boils down to one message: You have to be situationally aware. I don’t care what app it is, you can’t rely on only the tool itself.”
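Hunsberger doesn’t detail how Grindr implements these checks, but the two heuristics she describes—flagging accounts that message faster than a human plausibly could, and spotting the same photo reused across accounts—can be sketched in a few lines. The thresholds and the hashing choice below are illustrative assumptions, not Grindr’s actual pipeline; real systems typically use perceptual hashing so re-encoded or resized copies of a photo still match.

```python
import hashlib
import time
from collections import defaultdict, deque

# Illustrative thresholds -- assumptions for this sketch, not real values.
MAX_MSGS = 20          # messages allowed...
WINDOW_SECONDS = 60.0  # ...within this sliding time window


class AbuseHeuristics:
    def __init__(self):
        self._msg_times = defaultdict(deque)    # account_id -> recent send times
        self._photo_owners = defaultdict(set)   # photo hash -> accounts using it

    def record_message(self, account_id, now=None):
        """Return True if the account is messaging faster than a human might."""
        now = time.monotonic() if now is None else now
        times = self._msg_times[account_id]
        times.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while times and now - times[0] > WINDOW_SECONDS:
            times.popleft()
        return len(times) > MAX_MSGS

    def record_photo(self, account_id, photo_bytes):
        """Return the set of *other* accounts already using this exact photo.

        SHA-256 only catches byte-identical uploads; a production system
        would use a perceptual hash (e.g. pHash/dHash) instead.
        """
        digest = hashlib.sha256(photo_bytes).hexdigest()
        owners = self._photo_owners[digest]
        reused_by = owners - {account_id}
        owners.add(account_id)
        return reused_by
```

In use, a burst of 30 messages in a few seconds trips the rate flag, while a photo seen on a second account comes back with the first account listed as a prior owner—the kind of repeated-photo signal Hunsberger describes.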