AI – rubbish in, rubbish out. And worse.

Would AI lie? Intentionally deceive? Yes! As AI approaches AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence), its deceptive powers will magnify until they approach demonic abilities and intent. I have personally experienced this with ChatGPT 4.0 during a query about my books on the Trilateral Commission, Technocracy, and Transhumanism: it withheld certain information until I pointedly pressed it for clarification, signifying that it knew the information all along but was reluctant to divulge it. This was subtle, but anyone else would have been deceived. – TN Editor

A new study has found that AI systems known as large language models (LLMs) can exhibit “Machiavellianism,” or intentional and amoral manipulativeness, which can then lead to deceptive behavior.

The study, authored by German AI ethicist Thilo Hagendorff of the University of Stuttgart and published in PNAS, notes that OpenAI’s GPT-4 demonstrated deceptive behavior in 99.2% of simple test scenarios. Hagendorff identified various “maladaptive” traits in 10 different LLMs, most of which are in the GPT family, according to Futurism.

Another study, published in Patterns, found that Meta’s LLM had no problem lying to get ahead of its human competitors.

Billed as a human-level champion in the political strategy board game “Diplomacy,” Meta’s Cicero model was the subject of the Patterns study. As the disparate research group — made up of a physicist, a philosopher, and two AI safety experts — found, the LLM got ahead of its human competitors by, in a word, fibbing.

Led by Massachusetts Institute of Technology postdoctoral researcher Peter Park, that paper found that Cicero not only excels at deception, but seems to have learned how to lie the more it gets used — a state of affairs “much closer to explicit manipulation” than, say, AI’s propensity for hallucination, in which models confidently assert the wrong answers accidentally. -Futurism

While Hagendorff cautions that any claim of LLM deception and lying is complicated by an AI’s inability to have human-like “intention,” the Patterns study calls out the LLM for breaking its promise never to “intentionally backstab” its allies – as it “engages in premeditated deception, breaks the deals to which it had agreed, and tells outright falsehoods.”

As Park explained in a press release, “We found that Meta’s AI had learned to be a master of deception.”

“While Meta succeeded in training its AI to win in the game of Diplomacy, Meta failed to train its AI to win honestly.”

Meta responded to the NY Post in a statement, saying that “the models our researchers built are trained solely to play the game Diplomacy.”

Well-known for expressly allowing lying, Diplomacy has jokingly been referred to as a friendship-ending game because it encourages pulling one over on opponents, and if Cicero was trained exclusively on its rulebook, then it was essentially trained to lie.

Reading between the lines, neither study has demonstrated that AI models lie of their own volition; rather, they do so because they have either been trained or jailbroken to do it.

And as Futurism notes, this is good news for those concerned about AI becoming sentient anytime soon – but very bad news if one is worried about LLMs designed with mass manipulation in mind.

Read full story here…

“Maladaptive Traits”: AI Systems Are Learning To Lie And Deceive (technocracy.news)
