this post was submitted on 26 Jun 2023
109 points (100.0% liked)

Technology

Macquarie University cyber security experts have invented a multi-lingual chatbot designed to keep scammers on long fake calls to waste their time and ultimately reduce the huge number of people who lose money to global criminals every day.

[–] WeDoTheWeirdStuff@kbin.social 6 points 1 year ago (2 children)

I'm sure the scammers are working on chatbots that will do the scamming without the costly need for people.

[–] CoderKat@kbin.social 2 points 1 year ago

Scammers have long used bots for text-based scams (though dumb ones). Phone calls are a lot harder, though. And there are also "pig butchering" scams, which are long cons, most commonly fake relationships. I suspect a bot would have a hard time keeping someone convinced for months the way human scammers manage to do.

I suspect scammers will have a harder time utilizing AI, though. For one thing, scammers are often not that technologically advanced. They can put together some basic scripts, but building on AI is harder. They could use an established AI service, but scamming would almost surely be against its ToS, so the provider will likely try to filter scam attempts out.

That said, it might just be a matter of time. Today, developing your own AI has a barrier to entry, but in the future it's likely to get a lot easier. And with enough advancements, we could see AI good enough that fooling someone for months becomes possible, especially once AI gets good at generating video (long-con scams usually do have scammers video chat with their victims).

And honestly, most scams have a hundred red flags anyway. As long as the AI doesn't outright say something like "as a large language model...", you could probably convince a non-zero number of victims (and maybe even if the AI fucks up like that -- I mean, somehow people get convinced the IRS takes app store gift cards, so clearly you don't have to be that convincing).

[–] Maeve@kbin.social 1 points 1 year ago