Technocracy’s “Science of Social Engineering” is taking a dark turn after it was discovered that AI can “deprogram” or “reprogram” your brain to abandon ideas that don’t fit the approved narrative. Think about it: you don’t need to check into a reeducation center; you are painlessly reprogrammed at home; no other humans need to be involved; and the whole world can be reprogrammed in unison. The text below is taken directly from the study.

Introduction

Widespread belief in unsubstantiated conspiracy theories is a major source of public concern and a focus of scholarly research. Despite often being quite implausible, many such conspiracies are widely believed. Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic “needs” or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence. Here, we question this conventional wisdom and ask whether it may be possible to talk people out of the conspiratorial “rabbit hole” with sufficiently compelling evidence.

Conclusion

It has become almost a truism that people “down the rabbit hole” of conspiracy belief are virtually impossible to reach. In contrast to this pessimistic view, we have shown that a relatively brief conversation with a generative AI model can produce a large and lasting decrease in conspiracy beliefs, even among people whose beliefs are deeply entrenched. It may be that it has proven so difficult to dissuade people from their conspiracy beliefs because they simply have not been given sufficiently good counterevidence. This paints a picture of human reasoning that is surprisingly optimistic: Even the deepest of rabbit holes may have an exit. Conspiracists are not necessarily blinded by psychological needs and motivations—it just takes a genuinely strong argument to reach them. (emphasis added) — Technocracy News & Trends Editor Patrick Wood


Posted By: Jacob Bruns via Headline News

Might the New World Order use biased, pre-manipulated artificial intelligence programs to try to “deprogram” those with unpopular opinions by persuading them that their logic does not compute?

A recent study on that subject underwritten by the John Templeton Foundation might give so-called conspiracy theorists one more thing to be paranoid about, according to Popular Science.

Critics have already sounded the alarm that leftist radicals in Silicon Valley and elsewhere were manipulating the algorithms used to train AI so that it automatically defaulted to anti-conservative biases.

The next step may be programming any verboten viewpoints into the realm of “conspiracy theory,” then having powerful computers challenge human users to a battle of logic that inevitably is stacked against them with cherrypicked data.

The Evil Twins of Technocracy and Transhumanism

The study, titled “Durably reducing conspiracy beliefs through dialogues with AI,” attempted to counter the common view that some people will not change their minds, even when presented with facts and evidence.

Addressing the problem of “widespread belief in unsubstantiated conspiracy theories,” researchers postulated that conspiracy theories can, contrary to the scientific narrative, be countered by way of systematic fact-checking.

Among the theories tested were more traditional conspiracies, such as those involving the assassination of John F. Kennedy or alien landings known to the United States government.

But others included more immediately politicized claims, such as the legitimacy of COVID lockdowns or the validity of the 2020 presidential election, both of which are a “major source of public concern.”

The study was conducted by having conspiratorial participants engage in brief conversations with AI, with the aim of “curing” the participants of their ostensibly false opinions.

Researchers concluded that “the treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average,” suggesting that “treating” people with certain facts can indeed alter their opinions, particularly when those facts come from AI bots.

The “treatment” received also reportedly “persisted undiminished for at least 2 months,” meaning that such conditioning could eventuate in regular treatment for those deemed conspiracy theorists.

Ultimately, then, AI conditioning was determined to be a potentially useful tool in addressing the “psychological needs and motivations” of such people. Researchers speculated that the technology could be implemented online in the coming years, particularly in online forums or on social media.


David Rand, a professor at the Massachusetts Institute of Technology who co-authored the study, told reporters that he was optimistic about the future of AI conditioning.

“This is really exciting,” he said. “It seemed like it worked and it worked quite broadly.”

Read full story here…

https://www.activistpost.com/2024/09/technocrats-target-ai-to-deprogram-conspiracy-theorists.html