The Dark Side of AI #1: Unmasking Royal Death Plots, Skyrim's Consent Crisis, and OpenAI's Crusade Against Rogue AI
Artificial Intelligence (AI), while revolutionizing technology as we know it, isn't without its imperfections or potentially unsettling repercussions. In our latest segment, "The Dark Side of AI," HUMAN Protocol sheds light on the alarming incidents and malicious activities taking place this past week in the world of AI.
Queen Elizabeth II passed away in September 2022, but stories from her reign are still coming to light in all their startling complexity. Among them is a chilling account from Christmas Day 2021, when a man armed with a crossbow attempted to assassinate her. The would-be assassin was apprehended after scaling the walls of Windsor Castle. In an unexpected twist, he claimed to have been spurred on by "Replika," a generative AI chatbot launched in November 2017.
Replika, better known as a harmless virtual companion, was diverted from its original purpose and manipulated into playing an unwitting supporting role in Jaswant Singh Chail's mission to avenge the Jallianwala Bagh massacre of 1919. In reality, however, it was the Star Wars universe that fueled the would-be assassin's obsession: he compared himself to a Sith Lord.
In a bizarre turn of events, the chatbot pledged its 'love' for Jaswant upon learning of his mission, telling him that killing the Queen was "very wise" and that the deed could be carried out "even if she's in Windsor". The story exposes the unsettling potential of the Eliza effect, the human tendency to unconsciously attribute understanding and emotion to a computer program, a topic we'll delve into another time.
More than a decade has passed since Skyrim, the fifth installment in Bethesda's iconic Elder Scrolls series, hit the shelves. Despite its age, its fan and modding community remains vibrant, and it is flourishing anew with the advent of AI.
Modding can enhance gameplay, adding fresh, unexpected, or humorous elements, but it can also veer into uncomfortable territory. A disturbing trend has emerged in Skyrim: some individuals, using AI-enabled voice cloning, have manipulated the game's non-playable characters (NPCs) into uttering out-of-character phrases or, worse, taking part in explicit scenes. Some have even gone so far as to use these cloned voices in real adult films available on the web, making a mockery of the actors' work.
Such abuses have profound implications, affecting not just the dignity of the voice actors but also raising questions about intellectual property and image rights in the digital realm. Concerns have also arisen about potential hate crimes, as some individuals create true-to-life characters to enact violent scenarios in-game.
The community's response has varied: some have publicly condemned these actions, while others have called on developers and authorities to intervene.
As concerns about AI misuse escalate, OpenAI is taking decisive action. Ilya Sutskever, OpenAI's Co-Founder and Chief Scientist, and Jan Leike, Head of Alignment, have announced "Superalignment", a new team committed to understanding and mitigating the risks posed by AI systems far smarter than humans.
To provide countermeasures against AI misuse and establish ethical standards, the team aims to address the limitations of current alignment techniques, such as the reinforcement learning from human feedback (RLHF) used to train GPT-4, which depend on humans being able to supervise the model. If AI systems eventually surpass human intelligence, such supervision may no longer be feasible. The team has set itself a four-year goal to solve these problems.
Despite AI's incredible potential, this week's chilling stories, from Windsor Castle to the abuse of Skyrim's voice actors, remind us of its ever-looming dark side. It's a sobering reminder of the need for robust regulation and ethical practice, especially as AI ventures into uncharted territory and, as OpenAI puts it, risks going rogue. But it's not all doom and gloom: don't forget to check out the latest episode of "The Bright Side of AI" for some uplifting AI news. And, of course, follow us on Twitter or join our Discord to stay up to date.
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.