Facebook’s New Name Comes From a Fictional Dystopia

By rebranding itself as Meta, Facebook named itself an evil empire. It did so on purpose, and quickly, in keeping with its longtime motto, “Move fast and break things,” which, investors take note, is the antithesis of a long-term strategy. Moving too fast is probably why Facebook used open-access security protocols, a systematic sloppiness that enabled whistleblowers to swipe top-secret papers documenting deliberate deception and damage.

Those documents show a growing global body count, including sad teenage girls, angry flag-wavers, sex traffickers and foreign dissidents. They prove that Facebook can’t be trusted even with its own security, much less with anyone else’s.

Imagine if, a few years ago, the weirdly lettered Google had renamed its parent company not the bland abstraction, Alphabet, but the chilling number 1984. George Orwell’s novel, “1984,” along with Aldous Huxley’s “Brave New World,” came to represent the dystopian genre of literature. “1984” in particular deployed the concepts of high-bandwidth surveillance and technological manipulation, both now Google’s specialties, to show how awful such a world would be. Naming a company 1984 would have officially espoused the evil Google once forswore.

Manipulative Interference

In his 1992 novel “Snow Crash,” Neal Stephenson imagined digital mind-worms and a manipulative, immersive, corporate-sculpted social network called the Metaverse. “Snow Crash,” with its virtual avatars, skins and real-estate booms, showed how monetized technological interactions make people miserable. A major point of the book is that people are easy to program. Stephenson’s for-profit Metaverse was the original evil social network, ruthlessly converting human despair into corporate revenue.

The Facebook version will be yet more dangerous, if the company can actually build it. At the moment, Epic’s game “Fortnite” is closer to Stephenson’s high-bandwidth, synchronous space. Facebook’s version will add wireless connectivity, geo-tracking, news feeds, face-reading and biometric scanning, all intrusive technologies that magnify exploitation a hundredfold beyond what Stephenson imagined 30 years ago.

The inevitable result, as mathematical analysis predicted and medical evidence continues to support, is that technological socialization inflicts deep psychic damage. The medical case is simple: Humans evolved to communicate directly through inborn, high-bandwidth sensory systems. Any damage to that data flow damages felt connection, but unconsciously, like lead poisoning. Monetized and manipulative interference causes the most harm.

Worse, trying to socialize through technology has horrific impacts on children’s emotional health. Even Stephenson’s original Metaverse, evil as it was, didn’t strip-mine children’s attentional systems for short-term revenue, as Instagram Youth will likely do.  

Unfortunately, those activities are still legal in the US, while lying to investors and government officials is not. Because social media’s damage to children’s mental health became clear only recently, those activities have so far been incentivized, nay required, in the name of “adding shareholder value.” Blame for past deeds will be legally murky.

Preventing Future Damage

Preventing future damage is more important, and governments already possess the know-how. If Facebook were a person, its relentless, breakneck, reckless disregard for law and safety over decades would make it by any reasonable standard an incorrigible repeat offender, an addict, a sociopath. Its data flows, incentives and resulting behavior show it to be adept at growing fast and making money, but only by means of a toxic business model and lies.

The growing body count from Facebook and similar technologies presents a blunt challenge to governance of any kind. Even if exactly the right laws were passed, how could such a company obey? Obedience is not in its DNA, its workflow or its charter. How can governments deal with hardwired intransigence when public health is at stake?

The bad news is, the bad stuff is easy to hide. There are myriad curlicue ways that a complex, real-time technology company like Facebook can make obvious short-term money while creating hidden long-term damage. The common thread is algorithms that track money but not damage. 

Such systems can be so well automated that humans may not even know the harm is happening at all. The good news is, all the information necessary to find and solve the problem is right there in the database, if you know where and how to look. (Spying solo into numbers on behalf of CEOs was my main job for several years; database queries don’t lie.)
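To make that concrete, here is a minimal sketch, in Python against a throwaway SQLite database, of the kind of read-only audit query such a look might start with. Everything in it is hypothetical: the sessions table, its columns and the “late-night minutes for minors” harm proxy are illustrations of the approach, not Facebook’s actual schema, and the rows are synthetic.

import sqlite3

# Hypothetical, synthetic stand-in for a production events table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (user_id INT, age INT, start_hour INT, minutes REAL);
INSERT INTO sessions VALUES
  (1, 14, 1, 95.0), (2, 15, 23, 120.0), (3, 34, 20, 30.0), (4, 13, 2, 150.0);
""")

# The audit itself: a single read-only aggregate. No writes, and no
# individual's data leaves the building, only the public-health number.
count, avg_minutes = conn.execute("""
    SELECT COUNT(*), AVG(minutes)
    FROM sessions
    WHERE age < 18 AND (start_hour >= 22 OR start_hour <= 5)
""").fetchone()

print(f"Late-night sessions by minors: {count}, average length: {avg_minutes:.0f} minutes")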

More good news: governments already protect public health just fine in other domains, through regular inspections. Elevators, building codes, sprinkler systems, water quality and restaurant sanitation all invoke the same common-sense principle: if some technology endangers human beings, the thing itself must be inspected by a neutral party. Inspect the thing in question, not the paperwork that might be fudged. Governments know that when saving money costs lives, legalistic disclaimers and self-regulation don’t work. Random inspections do.

A crack team of independent data scientists, given support and read-only access to the Facebook databases and file systems, could in short order provide public-health answers that the public needs but that Facebook executives themselves don’t know.

Because damage done in ignorance is less culpable, some sort of amnesty will be necessary to grease the informational wheels: say, immunity from prosecution for past deeds in return for new transparency, the way South Africa’s Truth and Reconciliation Commission granted it. Right now, putting numbers on the body count and stopping it matter more than finding whom to blame.

Inspecting safety numbers is the same essential public-health service performed by a city inspector checking the temperature on a restaurant dishwasher. Toxic social-media metrics like “time on device” and “cost per conversion” are hurting children right now, and they’re already tracked. Best of all, their algorithmic influence can be turned off in an instant. Volunteers?
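As a toy illustration of that off switch, consider that feed-ranking systems typically score each post as a weighted sum of signals, so zeroing one weight silences that metric’s influence. The signal names and weights below are hypothetical stand-ins, not Facebook’s actual ranking model.

# Hypothetical ranking weights; real systems are far more elaborate,
# but the off switch is conceptually this simple.
ENGAGEMENT_WEIGHTS = {
    "time_on_device": 0.6,       # engagement-maximizing signal
    "cost_per_conversion": 0.3,  # ad-revenue signal
    "reported_wellbeing": 0.1,   # the signal an inspector might demand
}

def score(signals: dict, weights: dict) -> float:
    # Weighted sum that decides how prominently a post is shown.
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# "Turned off in an instant": copy the weights and zero the toxic one.
SAFE_WEIGHTS = {**ENGAGEMENT_WEIGHTS, "time_on_device": 0.0}

post = {"time_on_device": 0.9, "cost_per_conversion": 0.5, "reported_wellbeing": 0.2}
print(score(post, ENGAGEMENT_WEIGHTS))  # engagement-driven ranking
print(score(post, SAFE_WEIGHTS))        # same post, that metric silenced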

Source: fairobserver.com
