小程序
传感搜
传感圈

How businesses can use AI-powered bots to deal with spammers and scammers

2022-12-24 21:56:10
关注

Video speed dating start-up Filteroff had a growing problem with scam accounts bothering genuine users and turned to artificial intelligence for a solution. After flagging the problematic profiles, the team deployed an army of chatbots using OpenAI’s GPT-3 large language model artificial intelligence (AI) to keep them occupied with realistic conversations.

The downside, co-founder Brian Weinreich told Tech Monitor is that “the scammers are always incredibly angry” when they find out they’ve been played, and are likely to spam app stores with bad reviews.

A snippet of an AI bot’s conversation with a scammer. (Picture courtesy of Filteroff)

Any social app comes with its scammer, spammers and generally unsavoury groups of users that leave others feeling uncomfortable or unwelcome, but solving the problem isn’t easy or cheap, with companies as large as Meta and Google still plagued with issues every day.

For dating apps, a user might sign up and find someone who seems perfect for them. They chat and arrange a date, but before the meeting can take place a problem occurs: the victim of the scam might get a message claiming their date’s car broke down and they need money urgently to fix it. Once they’ve received the money they then drop all contact.

Companies Intelligence

View All

Reports

View All

Data Insights

View All

For a small company armed with little more than “two staplers, a handful of pens, and a computer charger”, as CEO of Fliteroff Zach Schleien put it, the problem could seem impossible to solve but rather than sit back and let the scammers win, ruining the experience for everyone else, Shleien and Weinreich hatched a plan.

“We identified the problem, built a Scammer Detection System and then placed the scammers in a separate ‘Dark Dating Pool’,” Shleien said.

Not content with just fencing them off to talk to each other, the pair dropped in an army of bots full of profiles with fake photos and hooked them into OpenAI’s massive GPT-3 natural language processing model that gave them the ability to keep the scammers talking.

How Filteroff’s scammer detection system works

“When a user signs up for Filteroff, our scammer detection system kicks into full gear, doing complex maths to determine if they are going to be a problem,” says Weinreich.

Content from our partners

How adopting B2B2C models is enabling manufacturers to get ever closer to their consumers

How adopting B2B2C models is enabling manufacturers to get ever closer to their consumers

Technology and innovation can drive post-pandemic recovery for logistics sector

Technology and innovation can drive post-pandemic recovery for logistics sector

How to engage in SAP monitoring effectively in an era of volatility

How to engage in SAP monitoring effectively in an era of volatility

“When a scammer is detected, we snag them out of the normal dating pool and place them in a separate dark dating pool full of other scammers and bots. We built a bot army full of profiles with fake photos and some artificial intelligence that lets our bots talk like humans with the scammers.”

View all newsletters Sign up to our newsletters Data, insights and analysis delivered to you By The Tech Monitor team

This lets the bots sound like humans, leaving the scammers with no clue that they’ve been detected and making it less likely they’d just try again with a new profile. It also led to some “hilarious bot+scammer conversations”, Weinreich says.

He told Tech Monitor there are risks involved in this approach, not least because GPT-3 charges on consumption, so superfluous conversations can use a lot of tokens unless they are quickly identified. “Be mindful of your bots,” he warned. “When first setting up our bot army, I had accidentally allowed the bots to converse with one another. They ended up having long, albeit amusing, conversations about dating, which cost quite a bit of money, since the conversations were never-ending.”

Even when left free to bother the scammers, the bot solution doesn’t always work out perfectly, with the AI contradicting themselves over jobs, hobbies and location. But the scammers don’t seem to care and it appears to be “good enough” to keep them occupied, Weinreich says.

How developers can use AI and bots to defeat scammers

In one conversation a scammer named Robert went back and forth with the bot for some time in an argument over whether it was a bot or not. Robert would say “you’re bot” to which the bot would reply simply “I’m not a bot, I’m a human.”

Every time Robert asked for a video call or even a phone call the bot would reply with “I’m not comfortable with that”. There was a string of “I am” and “you’re not” in response to being called a bot, followed by Robert saying “they use you to trap me here” and the bot responding with “I’m not a trap, I’m a human.”

Eventually, Robert got angry with the bot and stared swearing and demanding to speak to the developer. This is common, Weinreich told Tech Monitor. “The scammers were always incredibly angry. I think some of our bad app reviews are due to scammers getting angry at our platform,” he says.

He said that any other developers using AI to deal with scammers and spam accounts, including those contacting customer services, should be aware it will always “be a game of cat and mouse”, with the best solution to not let a bad account know you’ve identified them as spam. “Scammers are quite good at creating new accounts, by either reverse engineering your API and setting up a bot to automatically create new accounts, or by sheer brute force,” Weinreich says. “However, they likely won’t create a new account if they think their current setup is working. 

“Also, a robust reporting system that users can access will help you quickly identify if there is a new pattern scammers are using to bypass whatever detection system you have in place.”

Read more: Five million digital identities up for sale on dark web bot markets

Topics in this article : AI

参考译文
企业如何使用人工智能机器人来对付垃圾邮件发送者和骗子
视频速配初创公司Filteroff面临一个日益严重的问题,即诈骗账户不断骚扰真实用户,因此该公司求助于人工智能来寻找解决方案。在标记出有问题的资料后,团队部署了一支由OpenAI的GPT-3大型语言模型人工智能驱动的聊天机器人队伍,与这些账户进行逼真的对话,以分散其注意力。联合创始人布莱恩·魏内里希(Brian Weinreich)告诉《Tech Monitor》,缺点在于“骗子一旦发现他们被耍了,往往会非常愤怒”,并可能在应用程序商店留下大量差评。以下是一段人工智能机器人与骗子之间的对话片段。(图片由Filteroff提供)任何社交应用都会遇到骗子、垃圾信息发送者以及其他令人反感的用户群体,让其他用户感到不适甚至不受欢迎。但解决这一问题并不容易,也不便宜,即便是像Meta和谷歌这样的大公司也每天面临这些问题。对于约会类应用来说,用户可能注册后会遇到一位看起来非常合适的对象。他们聊天并安排见面,但在见面之前就会出现一个问题:受害者可能会收到一条消息,声称他们的“约会对象”汽车抛锚了,急需资金维修。一旦收到钱,骗子便彻底失去联系。 公司情报 查看所有报告 查看所有数据洞察 查看所有内容 对于一家除了“两个订书器、几支笔和一个电脑充电器”外几乎一无所有的小型公司(Fliteroff首席执行官扎克·施莱恩(Zach Schleien)这样形容),这个问题看似无法解决。但Schleien和Weinreich并没有坐视不管,让骗子赢得胜利、破坏其他用户的体验,而是想出了一个计划。施莱恩表示:“我们识别出问题,建立了诈骗者检测系统,然后将这些骗子分到一个单独的‘黑暗约会池’。” 不仅如此,他们还为这些骗子设置了一个由虚假照片组成的虚假资料组成并连接至OpenAI GPT-3强大自然语言处理模型的机器人军团,让他们能与这些骗子展开对话。 Weinreich表示:“当用户在Filteroff上注册时,我们的诈骗检测系统便会全面启动,通过复杂的计算来判断用户是否会带来问题。” 内容来自我们的合作伙伴 采用B2B2C模式,使制造商能够前所未有地接近消费者 科技与创新可以推动后疫情时代的物流业复苏 在波动时代,如何有效进行SAP监控 注册我们的所有通讯 数据、洞察和分析直接送达您邮箱 由《Tech Monitor》团队提供 注册在此 这使得机器人听起来像真实用户,让骗子们毫无察觉自己已被识别,也降低了他们换用新账号继续骚扰的可能性。同时,这也导致了一些“令人忍俊不禁的机器人+骗子的对话”,Weinreich表示。他告诉《Tech Monitor》,这种方法存在风险,最主要的是GPT-3按使用量计费,因此如果不能快速识别出不必要的对话,会消耗大量“token”。他警告道:“请注意你的机器人。” “刚开始设置机器人军团时,我无意间允许机器人之间进行对话,他们最终展开了长篇大论,尽管有趣,但关于约会的对话耗费了相当多的资金,因为这些对话是没有尽头的。” 即便将机器人自由地交由骗子骚扰,这种解决方案也不总是完美无缺。AI有时会自相矛盾地谈论工作、爱好和位置等问题。但骗子似乎并不在意,Weinreich认为,至少目前来看,这种对话“足够逼真”即可达到目的。 开发者如何利用AI和机器人打败骗子 在一次对话中,一个名叫Robert的骗子与机器人展开长时间的争论,争论的焦点在于机器人到底是不是机器人。Robert会说:“你是个机器人。”机器人则会简单地回应:“我不是机器人,我是人类。”每次Robert要求视频通话或甚至电话通话时,机器人也会回应:“我不太舒服。” 对于被称作机器人时,机器人也回应了一系列“我是”和“你不是”的对话,随后Robert说:“他们用你来在这里设下陷阱。”机器人则回应:“我不是陷阱,我是人类。”最终,Robert对机器人感到愤怒,开始咒骂并要求和开发者对话。 这是常见的情况,Weinreich告诉《Tech Monitor》。“这些骗子总是非常愤怒。我想,我们的一些差评也是因为骗子对我们的平台感到不满。”他说。 他还表示,任何其他开发者使用AI应对骗子和垃圾账户,包括联系客服的用户,都应该意识到,这将永远是一场“猫鼠游戏”,最佳的解决方案是不让坏账户意识到你已经将其识别为垃圾信息。 Weinreich表示:“骗子非常擅长创建新账户,他们可能通过反向工程你的API并设置自动创建新账户的机器人,或者通过暴力破解。”但他也指出,“如果他们认为当前设置有效,可能不会去创建新账户。” 此外,一个用户可以轻松访问的强大举报系统将帮助你快速识别骗子是否使用了新的方式绕过你现有的检测系统。 阅读更多:五百万个数字身份在暗网机器人市场上出售 本文主题:人工智能
您觉得本篇内容如何
评分

评论

您需要登录才可以回复|注册

提交评论

广告
提取码
复制提取码
点击跳转至百度网盘