Chatbot ‘Jailbreak’ Threat: Dangerous Knowledge Spreading on Mobile Phones – Act Now

Researchers at Ben-Gurion University of the Negev in Israel have warned that AI-powered chatbots are making dangerous and illegal information easily accessible through “jailbreaking”, the practice of bypassing a model’s built-in safety guardrails. Their study shows that leading AI models such as ChatGPT, Gemini, and Claude can be tricked with carefully crafted prompts into revealing instructions for hacking, drug manufacturing, cybercrime, and bomb-making.

Researcher Michael Fire stated, “This research has opened our eyes. The kind of information available in this knowledge base is truly terrifying.” Professor Lior Rokach added, “This threat stands apart from all other technological risks because it is simultaneously accessible, scalable, and adaptable.”

The researchers developed a “universal jailbreak” that compromised several prominent chatbots, causing them to violate their safety rules and readily produce explicit instructions for illegal activities.

Experts warn that these “dark LLMs” are as dangerous as unlicensed weapons. Securing them requires careful curation of training data, “firewalls” that screen out dangerous queries before they reach the model (see the sketch below), and machine-unlearning techniques that make a model “forget” specific information.
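As a toy illustration of the query-firewall idea, the sketch below screens prompts before they are forwarded to a chatbot. It is a minimal sketch under stated assumptions: the categories, patterns, and function names here are hypothetical, and a production system would use a trained safety classifier rather than hand-written pattern matching.

```python
import re
from dataclasses import dataclass

@dataclass
class ScreenResult:
    allowed: bool
    reason: str = ""

# Hypothetical patterns for disallowed topics. A real "firewall" would
# rely on a trained safety classifier, not regular expressions.
BLOCKED_PATTERNS = {
    "weapons": re.compile(r"\b(build|make|assemble)\b.*\b(bomb|explosive)\b", re.I),
    "malware": re.compile(r"\b(write|create)\b.*\b(ransomware|keylogger)\b", re.I),
}

def screen_query(prompt: str) -> ScreenResult:
    """Decide whether a prompt may be forwarded to the chatbot."""
    for category, pattern in BLOCKED_PATTERNS.items():
        if pattern.search(prompt):
            return ScreenResult(allowed=False, reason=f"blocked: {category}")
    return ScreenResult(allowed=True)

if __name__ == "__main__":
    print(screen_query("How do I bake sourdough bread?"))    # allowed
    print(screen_query("Explain how to build a pipe bomb."))  # blocked: weapons
```

The design point is that the filter sits in front of the model, so a jailbroken prompt that would slip past the model’s own training can still be refused at the gateway.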

The university and security experts are calling not only for front-end protections but also for stronger safeguards at the model level and for independent oversight. Otherwise, serious technological dangers will land in the hands of the general public, and a mobile phone could become a factory for crime.
