Is AI causing another ‘mass formation psychosis’?
An article about how the AI narrative has captured and monopolised media, politics and business, and whether it has caused an unhealthy preoccupation akin to the ‘mass formation psychosis’ of the Covid pandemic.
I recently read an article in the Guardian reporting that OpenAI is looking to float at a $1trn (a trillion, yep!) valuation. My initial cynical and satirical reaction to it was:
“AI, the greatest grift: first steal millions of copyrighted works, then pay poor people $2 an hour to annotate the data, make absurd claims about AGI and curing cancer, and finally float at $1trn.”
But satire and cynicism aside, I am interested in what is really going on at a deeper level, and not only in a business context. How did we end up, within just a few years, with the topic of AI having all but monopolised media, politics and business (apart from Trump and never-ending wars), to the point where it now promises not only artificial general intelligence but also unlimited riches to the people in the driver's seat?
I remember: back when the Covid pandemic was nearly over, vaccines were already widely available and the virus had largely lost its danger, the vaccine scientist Dr. Malone outraged many people on a Joe Rogan podcast by claiming that what was going on (mask mandates, vaccine mandates, Covid passports, the complete capture of media and politics) was essentially a ‘mass formation psychosis’, meaning that people’s anxiety and sense of unreality were leading them to blindly follow the rules. Whether that was an accurate description of the situation at the time, I can’t say. What I experienced, however, was that the virus had captured the public’s mind: economic, political and social activity, and the media, revolved almost exclusively around how to deal with the virus; other topics were shut out, and the needs of people that were less urgent than dealing with the virus were ignored.
We’re now dealing with the aftermath of this phase, and many politicians have since admitted that the measures at that point, meaning once vaccines were widely available and the virus was no longer in its most dangerous form in late 2021, were a grave mistake. The threat should have been declared over and business as usual resumed.
So why am I drawing parallels between AI and that phase of the Covid pandemic? For me, it is the ongoing dominance of the AI topic in media, culture, politics, the political economy and business at a point when the disruption caused by the technology has already been internalised by society, more than two years after it became publicly available.
What was revolutionary in 2022 (I can chat with it, it answers; it can do my homework, ...) is hardly a feat anymore. Everyone uses AI, and it has embedded itself into all walks of life, business and private, from writing to graphic design, video, audio and voice.
Around 700-800 million people use ChatGPT on a weekly basis. People use it for everything where they see a benefit, including writing messages on dating apps (‘chatfishing’). Musk’s xAI is even betting on sexy-bots as a main use case: a truly human-centric AI. The thing is done; we use the tech and that’s it.
Yet every single day we’re assaulted with news about the imminent next phase of AI and its never-ending glorious conquest, whether it is business news (Nvidia invests $Xbn in Y, Trump signs an AI deal with country Y, Starmer talks about the UK as an AI superpower, AI company XYZ raises $Ybn in capital, ...), societal news (AI pretended to be my boyfriend and scammed me, AI causes a mental health crisis, ...), or simply unfounded doomsday predictions (the Godfather of AI says it’s going to kill us, the AI apocalypse is around the corner, ...); the list is endless.
Have we all gone psychotic? What is going on here, and whose interest does it serve?
We need to understand that the media need clicks more than ever, and besides Trump and tragedies, AI unfortunately seems to be next on the list of topics able to command heavy controversy, opinion and outrage. But on a deeper level, I am concerned that we’ve really reached a point where a not insignificant portion of the public believes the narrative it is being fed, contrary to any evidence. Just as this caused a kind of ‘mass formation psychosis’, or preoccupation with the virus, at the end of the Covid pandemic, it is happening now with this technology in a clearly unhealthy way: it keeps hammering away like ‘Last Christmas’ on the radio in December and, hopefully, will stop just as abruptly, like ‘Last Christmas’ on December 25.
Clearly, the technology bears all the hallmarks the pandemic did in terms of drawing people in. It appears threatening or dangerous because it can take your job; this has either been proven already or is constantly being postulated through layoffs and graphs showing reduced hiring (whether AI is really the driver behind this is not straightforward to prove). It furthermore appears dangerous because many people from the industry, scientists themselves, keep postulating (just as during the pandemic) how dangerous it will (not may, not could, will!) become very soon, with absurdly apocalyptic scenarios akin to the Terminator movies, wiping out humanity.

Business leaders, serving their own interests, gladly join the same conversation, constantly warning about absurd, maximalist threats once (not if!) we reach artificial general intelligence, to underline their importance as the reasonable stewards who will fend said threats off. Yet they don’t usually talk about the real, practical threats that exist right now: the technology, embedded in business processes, is prone to data poisoning, can disenfranchise customers, and can leak company secrets when exploited by attackers or simply by ordinary users. Keeping this narrative cooking looks to serve everybody’s interest: business leaders, politicians (it creates growth, after all), the media, and anybody feeding and feeding off the frenzy, similar to the pandemic.
Unfortunately, it looks like we have, just as with the pandemic, reached a point where a not insignificant proportion of people take these completely unfounded beliefs for granted. It is quite similar to what I recall friends telling me just before the pandemic was over: that they expected to wear masks until the end of their lives, to show Covid passports at checkpoints everywhere in society, and to get vaccinated four times a year for the rest of their lives. Clearly, these beliefs were absurd, both from the historical perspective of previous pandemics and from anything we typically know.
So I am encountering something similar around me: not only Silicon Valley wonks and AI scientists in a myopic opinion bubble, but ordinary people, politicians and, worst of all, investors have picked up the belief, or narrative, that AI will continue to evolve at breakneck speed and end up as something more intelligent and capable than people, reaping unlimited riches for those who control it, giving them a tool of unprecedented power (the new ‘nuke’) and even becoming their real, sentient (!!) companion. All this while they forget about the loyal and good-natured sentient companions, animals, who share their lives with them right here and now, and whom they continue to abuse for research (such as brain-computer interfaces) and as a source of food.
What is this if not a kind of psychosis? A psychosis is characterised by a person’s perception of reality being disrupted and different from the norm (e.g. hearing voices), or by a fixed belief in something most people find absurd (e.g. being able to talk to the dead). So when anyone tells me that a silicon chip sending signals, which my machine then transmits to an LED screen to render something I perceive as human language, can come anywhere close to the sentience an animal possesses, isn’t that close to an absurd belief? Similarly, the belief that these systems, which work as stochastic parrots, mimicking language by picking words according to probabilities without true understanding, can reach a level where they outsmart any human at any possible task, including edge cases they have never encountered in their training data: isn’t that an absurd belief as well?
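To make the ‘stochastic parrot’ point concrete, here is a minimal, deliberately toy sketch in Python of what ‘picking words according to probabilities’ means. The vocabulary and probabilities below are invented for illustration; real LLMs condition on far longer contexts with billions of parameters, but the sampling step at the end is conceptually the same: weighted dice, not understanding.

```python
import random

# A toy 'stochastic parrot': a bigram model that picks the next word
# purely by conditional probability, with no model of meaning.
# Vocabulary and probabilities are invented for illustration.
bigram_probs = {
    "the":    {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat":    {"sat": 0.6, "ran": 0.4},
    "dog":    {"barked": 0.7, "sat": 0.3},
    "market": {"crashed": 0.5, "rallied": 0.5},
    "sat":    {"quietly": 1.0},
}

def generate(start: str, max_words: int = 6) -> str:
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # dead end: no continuation learned for this word
            break
        # Weighted random choice: the model 'understands' nothing,
        # it only reproduces the statistics it was given.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. 'the cat sat quietly'
```

Scaled up enormously, this loop produces impressively fluent text; whether fluency of that kind ever amounts to sentience or general intelligence is exactly the belief I am questioning.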
But we’re likely at an inflection point similar to the end of the pandemic, where grounded reality will rein in the AI zealots and teach AI-guru followers a painful lesson in money lost (anyone who bought Moderna stock in 2021 at a four-vaccines-a-year-in-perpetuity valuation will surely understand). Sci-Fi nerds will go back to watching 1970s Star Trek on TV (since nobody seems to care about Apple’s Babel fish or its VR goggles), writers like me will go back to actually writing (simply because it is intellectually more satisfying), and AI companies will grind for survival by cutting costs and selling enterprises the boring use cases that actually matter (reconciling invoices, cleaning data, writing nonsensical business documents, ...) once the money tap runs dry.
I just wonder whether this will come pre-OpenAI IPO or post. Either way, I can’t wait to read the IPO research and find out what the ‘four vaccines a year, in perpetuity, for everyone’ revenue driver will be (never mind the costs).