The “threat” of unmoderated chatbots

Just when you thought the ongoing debate over Artificial Intelligence couldn’t get any stranger, it turns the dial up to eleven and does exactly that. This weekend, the New York Times ran a feature article about some of the new kids on the block in the world of AI chatbots. Most of us are familiar with the big names by now, particularly ChatGPT and Bard. But independent groups and even individuals have been taking the underlying code and building their own chatbots. In some cases, they have stripped out all of the “guardrails” installed on the original bots, allowing them to speak freely, even when that means dispensing flatly inaccurate responses or dangerous information about self-harm and related topics that the professional-grade bots filter out. Even more bizarrely, some are now asking whether people have a right to “censor” the bots, or whether doing so infringes on the chatbots’ free speech.
