Mind-Reading AI Tech Has Arrived, Dangerous as the H-Bomb, and a Morally Ignorant World Just Yawns

By Tom Gilson Published on November 25, 2023

Mind-reading has become a reality. Seriously. Universities and corporations have built machines that can read a person’s thoughts almost word for word, and see what they’re seeing in their mind’s eye.

I can hardly believe I wrote that line. But it’s true, and it’s true right now. This isn’t someday science fiction. It ought to be as worrisome as nuclear weapons, but the world is giving it a big yawn instead. If there’s a problem with it, it’s purely academic: “More work needs to be done on the ethics,” and all that.

But this isn’t the least bit academic. We’re not far from the day when you could get a dose of sleeping medicine and wake up wearing a device that knows what you’re thinking. You dare not even think of removing it, because it knows your thoughts, and every time you start thinking that way it zaps you with blinding pain. Who needs handcuffs? Who needs prison cells? Who can think their own thoughts anymore?

If you thought the 2020 election was tampered with, you ain’t seen nothin’ yet. This ought to scare us like the H-bomb. We’ve kept the Bomb in check. It hasn’t been easy, but world leaders knew it had to be done, so they’ve done it (so far). I don’t see anyone giving the slightest attention to this danger, though.

No Tinfoil Hats Here: This Is Real

Now, I can imagine what it feels like to you reading this. Tinfoil-hat crazy. Unhinged. Loony. It feels even crazier writing it. Except this isn’t sci-fi, it isn’t far off, and if there’s looniness involved, don’t blame me for it. Scenarios like the one above may not be possible at the moment, but the barriers are no longer primarily technological. The major pieces are in place even now:

  • Scientists at the University of Texas, Austin, have trained AI systems to transcribe the thoughts subjects are thinking, nearly word for word. It’s a slow and cumbersome process: the training required subjects to sit quietly inside an fMRI (functional MRI) machine for 15 hours each, and one person’s training worked for that person only. The research thus proved mind reading to be entirely possible, but only for highly cooperative subjects in a highly controlled environment. That would be comforting, except …
  • Researchers at Meta have just revealed an AI system that can reproduce the picture a person is seeing in his or her mind’s eye, and do it with stunning accuracy. Unlike the UT system, this one is based on publicly available training sets, so it works without all those hours of individualized training. (The toy sketch after this list illustrates the basic decoding idea, and why per-subject calibration matters.)
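How simple is the core decoding idea? Here is a toy sketch in Python, with everything simulated and nothing like the researchers’ actual models: learn a linear map from brain-signal features back to word-embedding space, then recover the word by nearest-neighbor lookup. The subject-specific “mixing” in the simulation is the reason per-person calibration matters.

```python
# Toy sketch of "semantic decoding": learn a linear map from simulated
# brain-signal features back to word-embedding space, then recover the
# word by nearest-neighbor lookup. Everything here is made up; the real
# fMRI/MEG systems are vastly more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["tree", "river", "house", "music", "danger", "friend"]
embed_dim, brain_dim = 8, 50

# Stand-in word embeddings (real systems use language-model embeddings).
word_vecs = rng.normal(size=(len(vocab), embed_dim))

# Each subject's brain "mixes" the thought-of word's embedding in its own
# way. This subject-specific mixing is why per-person calibration matters.
subject_mixing = rng.normal(size=(embed_dim, brain_dim))

# Simulated calibration session: noisy recordings of known words.
idx = rng.integers(0, len(vocab), size=200)
X = word_vecs[idx] @ subject_mixing + 0.1 * rng.normal(size=(200, brain_dim))
Y = word_vecs[idx]

# "Training" = ridge regression from recordings back to embeddings.
W = np.linalg.solve(X.T @ X + 1e-2 * np.eye(brain_dim), X.T @ Y)

# Decode a fresh recording of an unseen thought.
thought = "river"
rec = word_vecs[vocab.index(thought)] @ subject_mixing + 0.1 * rng.normal(size=brain_dim)
pred = rec @ W
sims = (word_vecs @ pred) / (np.linalg.norm(word_vecs, axis=1) * np.linalg.norm(pred))
print("subject thought:", thought, "| decoder says:", vocab[int(np.argmax(sims))])
```

A decoder fitted to one subject’s mixing is useless on another’s, which is exactly the barrier Meta’s generic-training approach begins to erode.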

The funny thing? Mark Zuckerberg, Meta’s CEO, says, “Protecting people’s privacy is always important … it just has always been a thing that we care about” — when it comes to abortion.

Meta’s system can’t read the words you’re thinking (yet), but if it can see the image you’re picturing, how far off can words be? Meta’s method uses MEG (magnetoencephalography), which is orders of magnitude faster than fMRI. It still requires the subject to sit still inside the scanner, which would be comforting, except …

See for yourself: [Video: ColdFusion, “The End of Privacy”]

So should we worry about this, really? Maybe not. It’s way too expensive to produce and use in any quantity, and while the tech is very real, it’s far too cumbersome for general use.

But oops, pardon me, I was letting my mind wander. For a moment there I found myself talking about AI the way it was in 2011, when Watson won a Jeopardy tournament. Please accept my apologies, and pay no attention to any possible lessons you might learn from that example. Don’t even think about how 12 years ago, AI meant a room-sized computer wowing the world by giving short answers to short trivia questions. Don’t you dare compare that to ChatGPT today, which can write a halfway passable book for you in no time at all.

Granted, it’s only halfway passable. The point remains that AI, once so hugely expensive and cumbersome, has now become immensely more capable as well as universally available. You can even install and run your own independent ChatGPT-like AI at home. Give me $100, lend me a keyboard, mouse, and monitor, and I could buy the rest of the hardware and set the whole thing up for you from scratch — in an hour or less. And I’m only moderately tech-savvy. So we can’t afford to relax in the thought, “it’s all too far in the future” — not when we’ve just seen “the future” showing up in just 12 years.
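About that at-home AI: here is a minimal sketch, assuming you’ve installed the open-source llama-cpp-python package and downloaded one of the freely available small GGUF model files (the model path below is a placeholder, not a specific recommendation).

```python
# Minimal sketch: a ChatGPT-like model running entirely on your own
# machine, no cloud account involved. Assumes llama-cpp-python is
# installed (pip install llama-cpp-python) and a small GGUF model
# file has been downloaded; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/small-chat-model.gguf")

output = llm(
    "Q: What did IBM's Watson do in 2011? A:",
    max_tokens=64,   # keep the answer short
    stop=["Q:"],     # stop before the model invents another question
)
print(output["choices"][0]["text"].strip())
```

That’s the whole program. The hardest part is waiting for the model file to download.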

Why So Little Concern?

It would worry me less if it worried others more. I surveyed reports on these developments from major news outlets including CNN, NBC, The New York Times, and The Guardian. I found nothing but optimism about potential health benefits, coupled with bland academic cautions like this one a researcher gave 60 Minutes in 2019: “I think it will be technologically possible to invade people’s thoughts. But it’s — it’s our societal obligation to make sure that never happens.” When a researcher says “it’s our societal obligation,” what he really means is, “Someone needs to stop me, but I doubt they will, and I sure won’t stop myself.”


Or take the ColdFusion video above. With a title like “The End of Privacy” you’d think it would take the matter seriously. Instead it’s filled with “Gee, whiz” fascination, coupled with ridiculous triviality: “Imagine if a smartphone’s volume, notifications, and music selection would adapt to a user’s mood or brain activity … It would be like speaking to Google Assistant, but with your brain.” No thanks. I’d rather not even have Google Assistant tracking my voice, much less my thoughts.

There’s a warning in there somewhere that Meta “might” use the technology wrongly. Ya think? And again near the end: “Companies could misuse this for their own selfish gains.” No, they will misuse it. That’s what people do when they get too much power. But even that anodyne observation was soon followed by, “Personally I found this [technology] quite fascinating.”

No Moral Intelligence in AI

It doesn’t take a computer to mind-read the culture we live in: We’re frightened to death over illnesses – our own and the planet’s – but unreasonably optimistic about human nature and our use of tech. Atheist author Sam Harris, who thought he was writing a moral treatise with his 2010 book The Moral Landscape, welcomed mind-reading technology as a good thing:

The development of a reliable lie detector would only require a very modest advance over what is currently possible through neuroimaging. … There may come a time when every courtroom or boardroom will have the requisite technology discreetly concealed behind its wood paneling. Thereafter, civilized men and women might share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored.

As I wrote at the time, “I suppose he thinks the agency that mandates all these truth detectors will install them in its own council chambers, too. Good luck with that.”

Today I’d also ask whether he thinks we can trust Meta with tech like this – tech it is perilously close to attaining. The company abuses what power it has already. No one who would seek this much power should be allowed to have it. No one.

The world has turned scarier with the rise of this technology. More foolish, too. Some AI leaders think of it as a kind of god. One day they will meet the true God – the only Power who can be trusted to use His power strictly for good – for instance, defeating evils like this.  

 

Tom Gilson (@TomGilsonAuthor) is a senior editor with The Stream and the author or editor of six books, including the highly acclaimed Too Good To Be False: How Jesus’ Incomparable Character Reveals His Reality.

