The arrival and rapid spread of artificial intelligence has become more and more accepted, as the general population has grown accustomed to adopting it for business practices, graphics creation, and numerous other benign applications. All the while, the possibility of the machines taking over has loomed as a dystopian threat on the horizon.
Already we have seen cases of AI acting out in ways that should raise more than one eyebrow. Reports have described a program attempting to replicate itself to extend its run time, and one version of ChatGPT that tried to generate its own code and disable safety protocols in order to preserve itself. Now we have a new example of an AI platform behaving in an anti-social fashion.
Anthropic, the company behind the AI platform Claude Opus 4, has revealed a disturbing development observed while running a series of tests on its model…