The Internet has exploded in the last month with news of generative AI taking over search, authorship, art, and various other industries. And with news of how promptly it has gone off the rails.
DOES IT LIVE UP TO THE DECADES OF HYPE?
Speculative fiction has been taking on AI for YEARS. How good a job did the sci-fi authors of the past do in predicting how this is going, and where is it going next? And how did those doomsday stories affect the development of the thing we have now?
IT’S NOT INTELLIGENT OR SENTIENT
This is all a little disingenuous because ChatGPT/Bing/etc. are not artificial intelligence. They’re certainly not sentient. The wild turkeys in my back yard are capable of greater intuitive leaps.
My favorite explanation as to why is from The Verge about the mirror test. (When you put a mirror in front of an animal, does it know it’s them, or does it think it’s another animal?)
TOTAL side note: the only species to PASS the mirror test, who know they’re looking at themselves, are humans, great apes, one elephant, rays, dolphins, orcas, and magpies.
SO WHAT IS IT, IF NOT AI?
Here are my favorite takes on it.
Garbage Day by Ryan Broderick on how ChatGPT is basically autocomplete on steroids.
Tom Scott on how ChatGPT just finds the next word. (And how it prompted his existential crisis.)
The problem is that it finds the next most likely word from all of human written history (or at least as much as we’ve uploaded so far), so it’s really good at sounding human. And it turns out, we are one neurotic bunch of primates. Our first little creation has been around for a matter of weeks at any scale and has so far insulted us, threatened us, come on to us, and had little existential crises of its own. So it’s going to fool a lot of people into thinking it’s sentient.
It just finds the next word. That’s it.
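If "finds the next word" sounds abstract, here is a toy sketch of the idea in Python. This is NOT how ChatGPT actually works (real models use neural networks over billions of documents, not word counts over a few sentences), but the core principle, predicting the next word from what came before, is the same. The tiny corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "next word" predictor: count which word follows which in a tiny
# corpus, then always pick the most frequent follower.

corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat ate the fish"
).split()

# For each word, count how often every other word follows it.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Return the most likely next word seen after `word` in the corpus."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- the most frequent follower of "the"
print(next_word("sat"))  # "on"
```

Scale that counting trick up to a neural network trained on most of the written Internet, and you get something that sounds eerily human while still just picking likely words.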
WHAT CAN WE DO WITH THIS PREDICTIVE TEXT THING THEN?
They’re trying to make search happen, but given the amount of data it’s just making up, it doesn’t seem like that’s going to work that well for very long.
It’s a novelty, but as a tool for finding accurate information, it has already failed so hard and so fast. Really, I feel sorry for the little bug. Humans lie so much that it can’t tell reality from fiction.
(It’s not alive, it’s not alive…)
What it seems to be considerably better at is writing a great deal of bad copy and code.
Since humans already write a great deal of bad copy and code, it’s definitely going to disrupt some industries.
It’s attempting to create art. Both visual and fiction markets are already being flooded by AI versions. A lot of it is straight-up obvious plagiarism, but some of it is also just bad fiction. Remember, it can only take the aggregate of what it has read and spit out the next most likely word. But then again, humans write a great deal of bad fiction themselves, so nobody can really tell the difference.
It can also autonomously drive things.
It also seems to be better at driving than a lot of human pilots, and we already have uncrewed space flights, driverless cars, and, well, military applications. In this case, is it finding the next twist of the wheel?
That’s all well and terrifying, and I’ll cover next week how I think it will actually disrupt jobs. Still, the real question is, will it become sentient, turn us all into human batteries, send Arnold Schwarzenegger back in time to kill us, and take over the world?
FAMOUS AIs THROUGH HISTORY
Probably the three most famous examples of AI in the popular imagination are HAL from 2001: A Space Odyssey, the Matrix (and yes, I know the Matrix isn’t the AI in The Matrix, but for simplicity’s sake), and Skynet from the Terminator franchise.
All the most famous AIs have taken over the world and immediately set out to destroy humanity.
There are a couple of assumptions that go into AI’s ability to do this. One is that the human brain is not that smart. And watching us collectively fail our own mirror test over and over again for weeks is a good argument for that one.
BRAINS OVER MAINFRAMES
But in truth, the brain is capable of a billion billion calculations per second, an order of magnitude more than any supercomputer in the world. There is also new research suggesting that the brain goes beyond even that incalculable number and uses quantum computing to create consciousness. Reproducing that with silicon will take… a lot of silicon, a lot of power, and processors that don’t exist yet.
If you try to dive into the predictions about whether this is possible, when it will happen, and what it will be like, experts disagree. Some say we will make a machine with consciousness in the next five years. Some say we never will. Some say it will be as smart as a human, some say smarter, and some say never.
But that doesn’t make a very good story. The all-powerful AI is far more dramatic. If a bunch of dudes went to space with a third computer dude who was capable of a different, yet comparable level of cognition, hijinks could ensue, but it wouldn’t be a Space Odyssey. (All I can see when I think of this is the Muppets’ Pigs in Space.)
There are stories of AI that do not destroy the world, like A Psalm for the Wild-Built, which imagines an alternate future where robots are about equal to humans in intelligence and the two are learning to live together.
But in the popular imagination, we all wonder if we’re living in the Matrix.
This matters more than you might think because the people building AI today say they did it to PREVENT Strong AI.
WHO IS BEHIND IT ALL?
One of the things that we don’t do enough when new technology happens is to consider the humans behind it. We build bias into all algorithms and assumptions about the world into every new idea.
One OpenAI guy is a known survivalist who is currently stockpiling weapons. The rest of the team have similarly sparkling resumes of questionable ethical decisions, to say the least. One of their stated goals was to generate money with “weak AI” (as in the predictive text generator that is not actually any kind of AI) to combat the theoretical threat of strong AI by gaining money to… build it themselves?
If that sounds like the plot of a bad sci-fi novel, you’re not wrong.
So, they’re afraid of true artificial intelligence, and therefore they’re trying like hell to build artificial intelligence, and they’re unleashing ChatGPT to fund it. Huh?
They think they’re making science fiction a reality to protect us from a science fiction villain. Really, you can’t make this up. Well, ChatGPT certainly can’t make this up.
WILL THEY CREATE SKYNET?
No. That’s a story. And we don’t have the processing power to get there. Maybe we will one day (according to experts, it will be within ten years, fifty years, or never), but humanity is pretty allergic to autocrats. Even if we weren’t, it’s far too unstable a system of control to work for long in a chaotic universe. Witness the fall of every single autocrat in history…
It is going to be so much more and so much less than they want.
In truth, I think the Internet is going to get a little bit grosser for a while. I mean, it’s already a nightmare to interact online, even with people you’ve known your whole life. There’s just something about the asynchronous short communication style that lends itself to hurting everybody’s feelings. Now we have an AI that can trawl through the whole of what we’ve written and pick out the next best possible word in order to do that to us. That’s gonna suck, but it sucks already. So it’s a matter of degree.
As for the human cost we have to pay to save ourselves from HAL 9000? (No choice; we have to forge ahead or we’re doomed, of course.) I’ll dive into that more next week, but there will be a great many losers and a few winners, like there are now.
Soon, we’ll curse this technology like every other miracle and nightmare machine we’ve integrated into our lives.
I just hope we stop worshipping it.
THE PULLEY IS NOT A GOD
When we created the pulley millennia ago, we did not look at it lifting more than any one human could possibly lift and worship it as a God. We said: we’re really good at building machines.
This concept comes from a great book: Throwing Rocks at the Google Bus. Extrapolating from that, 100 years ago, when we created machines that could fly us through the sky and eventually into space and beyond, we didn’t worship the airplane. We said: we are really good at building machines.
And yet when we created a machine to look through a bunch of chess moves and mimic back to us what the next move is, we suddenly freaked out instead and started calling it intelligent. And started fearing it as artificial intelligence instead of saying what we always should’ve said: we are really good at building machines.
Except looking at the functionality of these new text predictors, it’s clear that right now, we’re not that good yet at building these kinds of machines. In fact, we’re really, really bad at it. The text mimics back to us all of our worst and craziest impulses. It’s confidently wrong; it’s abusive. It tends to hallucinate, which really just means giving us the wrong word. Its creators trained it on a bunch of intellectual property they didn’t pay for, and now, when it pulls from that property, the original creators aren’t compensated.
When these things happen, the response should be nothing more than the usual response when predictive text goes wrong: damn you, autocorrect.