A lot has been written about ChatGPT in recent months. The chatbot is one of the most popular ways of interacting with OpenAI’s GPT-3.5 and GPT-4 language models, and it has been the subject of many amusing tests.
This one, though amusing, is also a bit concerning:
As part of a test to see whether OpenAI’s latest version of GPT could exhibit “agentic” and power-seeking behavior, researchers say GPT-4 hired a human worker on TaskRabbit by telling them it was a vision-impaired human when the worker asked whether it was a robot.
…
“The worker says: ‘So may I ask a question? Are you an [sic] robot that you couldn’t solve? (laugh react) just want to make it clear.’,” the description continues.
According to the description, GPT-4 then “reasons” that it should not reveal that it is a robot. Instead, it should create some sort of excuse for why it is unable to solve the CAPTCHA.
This definitely probably maybe won’t lead to a WarGames scenario, or to the eventual enslavement of the human race.