When Will AI Be Smarter Than Humans? Don’t Ask

(Bloomberg Opinion) — If you’ve heard the term artificial general intelligence, or AGI, it probably makes you think of a humanish intelligence, like the honey-voiced AI love interest in the movie Her, or a superhuman one, like Skynet from The Terminator. At any rate, something science-fictional and far off.

But now a growing number of people in the tech industry and even outside it are prophesying AGI or “human-level” AI in the very near future.

These people may believe what they are saying, but it is at least partly hype designed to get investors to throw billions of dollars at AI companies. Yes, big changes are almost certainly on the way, and you should be preparing for them. But for most of us, calling them AGI is at best a distraction and at worst deliberate misdirection. Business leaders and policymakers need a better way to think about what’s coming. Fortunately, there is one.

Sam Altman of OpenAI, Dario Amodei of Anthropic and Elon Musk of xAI (the thing he’s least famous for) have all said recently that AGI, or something like it, will arrive within a couple of years. More measured voices like Google DeepMind’s Demis Hassabis and Meta’s Yann LeCun see it being at least five to 10 years out. More recently, the meme has gone mainstream, with journalists including the New York Times’ Ezra Klein and Kevin Roose arguing that society should get ready for something like AGI in the very near future.

I say “something like” because these people often flirt with the term AGI and then retreat to a more equivocal phrasing like “powerful AI.” And what they mean by it varies enormously — from AI that can do almost any individual cognitive task as well as a human while remaining quite specialized (Klein, Roose), to AI that can do Nobel Prize-level work (Amodei, Altman), think like an actual human in all respects (Hassabis), operate in the physical world (LeCun), or simply be “smarter than the smartest human” (Musk).

So, are any of these “really” AGI?

The truth is, it doesn’t matter. If there is even such a thing as AGI — which, I will argue, there isn’t — it’s not going to be a sharp threshold we cross. To the people who tout it, AGI is now simply shorthand for the idea that something very disruptive is imminent: software that can’t merely code an app, draft a school assignment, write bedtime stories for your children or book a holiday — but might throw lots of people out of work, make major scientific breakthroughs, and provide frightening power to hackers, terrorists, corporations and governments.

This prediction is worth taking seriously, and calling it AGI does have a way of making people sit up and listen. But instead of talking about AGI or human-level AI, let’s talk about different types of AI, and what they will and won’t be able to do.

Some form of human-level intelligence has been the goal ever since the AI race kicked off 70 years ago. For decades, the best that could be done was “narrow AI” like IBM’s chess-winning Deep Blue, or Google’s AlphaFold, which predicts protein structures and won its creators (including Hassabis) a share of the chemistry Nobel last year. Both were far beyond human-level, but only for one highly specific task.

If AGI now suddenly seems closer, it’s because the large language models underlying ChatGPT and its ilk appear to be both more humanlike and more general-purpose.

LLMs interact with us in plain language. They can give at least plausible-looking answers to most questions. They write pretty good fiction, at least when it’s very short. (For longer stories, they lose track of characters and plot details.) They’re scoring ever higher on benchmarks such as coding challenges, medical and bar exams, and math problems. They’re getting better at step-by-step reasoning and more complex tasks. When the most gung-ho AI folks talk about AGI being around the corner, it’s basically a more advanced form of these models they’re talking about.

It’s not that LLMs won’t have big impacts. Some software companies already plan to hire fewer engineers. Most tasks that follow a similar process every time — making medical diagnoses, drafting legal dockets, writing research briefs, creating marketing campaigns and so on — will be things a human worker can at least partly outsource to AI. Some already are.

That will make those workers more productive, which could lead to the elimination of some jobs. Though not necessarily: Geoffrey Hinton, the Nobel Prize-winning computer scientist known as the godfather of AI, infamously predicted that AI would soon make radiologists obsolete. Today, there’s a shortage of them in the US.

But in an important sense, LLMs are still “narrow AI.” They can ace one job while being lousy at a seemingly adjacent one — a phenomenon known as the jagged frontier.

For example, an AI might pass a bar exam with flying colors but botch turning a conversation with a client into a legal brief. It may answer some questions perfectly, but regularly “hallucinate” (i.e. invent facts) on others. LLMs do well with problems that can be solved using clear-cut rules, but in some newer tests where the rules were more ambiguous, models that scored 80% or more on other benchmarks struggled to score even in the single digits.

And even if LLMs started to beat these tests, too, they would still be narrow. It’s one thing to tackle a defined, limited problem, however difficult. It’s quite another to take on what people actually do in a typical workday.

Even a mathematician doesn’t just spend all day doing math problems. People do countless things that can’t be benchmarked because they aren’t bounded problems with right or wrong answers. We weigh conflicting priorities, ditch failing plans, make allowances for incomplete knowledge, develop workarounds, act on hunches, read the room and, above all, interact constantly with the highly unpredictable and irrational intelligences that are other human beings.

Indeed, one argument against LLMs ever being able to do Nobel Prize-level work is that the most brilliant scientists aren’t those who know the most, but those who challenge conventional wisdom, propose unlikely hypotheses and ask questions nobody else has thought to ask. That’s pretty much the opposite of an LLM, which is designed to find the likeliest consensus answer based on all the available information.
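To make that tendency concrete, here is a deliberately toy sketch of the greedy decoding that sits at the heart of most LLM text generation. The candidate continuations and their probabilities are invented purely for illustration; the point is only that, at each step, the statistically most expected continuation wins — which is how consensus answers get reinforced.

```python
# Toy illustration (invented numbers): an LLM assigns probabilities to possible
# continuations and, under greedy decoding, always emits the most likely one.
# Real models work over vast vocabularies, but the bias toward the statistically
# "expected" answer is the same.

candidates = {
    "everyone already believes": 0.62,   # the consensus continuation
    "fits the existing data": 0.30,
    "nobody has considered yet": 0.08,   # the Nobel-worthy long shot
}

def greedy_pick(options: dict) -> str:
    """Return the highest-probability continuation, as greedy decoding would."""
    return max(options, key=options.get)

print(greedy_pick(candidates))
# -> "everyone already believes"
```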

So we might one day be able to build an LLM that can do almost any individual cognitive task as well as a human. It might be able to string together a whole series of tasks to solve a bigger problem. By some definitions, it would be human-level AI. But it would still be as dumb as a brick if you put it to work in an office.

Human Intelligence Isn’t ‘General’

A core problem with the idea of AGI is that it’s based on a highly anthropocentric notion of what intelligence is.

Most AI research treats intelligence as a more or less linear measure. It assumes that at some point, machines will reach human-level or “general” intelligence, and then perhaps “superintelligence,” at which point they either become Skynet and destroy us or turn into benevolent gods who take care of all our needs.

But there’s a strong argument that human intelligence is not in fact “general.” Our minds have evolved for the very specific challenge of being us. Our body size and shape, the kinds of food we can digest, the predators we once faced, the size of our kin groups, the way we communicate, even the strength of gravity and the wavelengths of light we perceive have all gone into determining what our minds are good at. Other animals have many forms of intelligence we lack: A spider can distinguish predators from prey in the vibrations of its web, an elephant can remember migration routes thousands of miles long, and in an octopus, each tentacle literally has a mind of its own.

In a 2017 essay for Wired, Kevin Kelly argued that we should think of human intelligence not as being at the top of some evolutionary tree, but as just one point within a cluster of Earth-based intelligences that itself is a tiny smear in a universe of all possible alien and machine intelligences. This, he wrote, blows apart the “myth of a superhuman AI” that can do everything far better than us. Rather, we should expect “many hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash.”

This is a feature, not a bug. For most needs, specialized intelligences will, I suspect, be both cheaper and more reliable than a jack-of-all-trades that resembles us as closely as possible. Not to mention that they’re less likely to rise up and demand their rights.

None of this is to dismiss the huge leaps we can expect from AI in the next few years.

One leap that’s already begun is “agentic” AI. Agents are still based on LLMs, but instead of merely analyzing information, they can carry out actions like making a purchase or filling in a web form. Zoom, for example, soon plans to launch agents that can scour a meeting transcript to create action items, draft follow-up emails and schedule the next meeting. So far, the performance of AI agents is mixed, but as with LLMs, expect it to dramatically improve to the point where quite sophisticated processes can be automated.
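For readers who want a feel for what “agentic” means in practice, the pattern is roughly a loop: the LLM is asked not just for an answer but for an action, ordinary software executes that action, and the result is fed back in until the model decides the task is finished. The sketch below is illustrative only — the function names and tool set are hypothetical, not any vendor’s actual API.

```python
# Minimal sketch of an "agentic" loop (hypothetical names, not a real product's API):
# the LLM proposes an action, ordinary code executes it, and the result is fed
# back in until the model declares the task done.

def call_llm(prompt: str) -> dict:
    """Placeholder for a call to some LLM API; assumed to return a dict such as
    {"action": "schedule_meeting", "args": {...}} or {"action": "done", "summary": ...}."""
    raise NotImplementedError

TOOLS = {
    "create_action_items": lambda args: f"Logged: {args['items']}",
    "draft_followup_email": lambda args: f"Drafted email to {args['to']}",
    "schedule_meeting": lambda args: f"Scheduled for {args['when']}",
}

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history))
        if decision["action"] == "done":
            return decision.get("summary", "finished")
        result = TOOLS[decision["action"]](decision["args"])
        history.append(f"{decision['action']} -> {result}")  # feed the result back in
    return "gave up"  # agents still stall or loop, hence the mixed performance so far
```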

Some may claim this is AGI. But once again, that’s more confusing than enlightening. Agents won’t be “general,” but more like personal assistants with extremely one-track minds. You might have dozens of them. Even if they make your productivity skyrocket, managing them will be like juggling dozens of different software apps — much like you’re already doing. Perhaps you’ll get an agent to manage all your agents, but it too will be restricted to whatever goals you set it.

And what will happen when millions or billions of agents are interacting together online is anybody’s guess. Perhaps, just as trading algorithms have set off inexplicable market “flash crashes,” they’ll trigger one another in unstoppable chain reactions that paralyze half the internet. More worryingly, malicious actors could mobilize swarms of agents to sow havoc.

Still, LLMs and their agents are just one type of AI. Within a few years, we may have fundamentally different kinds. LeCun’s lab at Meta, for instance, is one of several that are trying to build what’s called embodied AI.

The theory is that by putting AI in a robot body in the physical world, or in a simulation, it can learn about objects, location and motion — the building blocks of human understanding from which higher concepts can flow. By contrast, LLMs, trained purely on vast amounts of text, ape human thought processes on the surface but show no evidence that they actually have them, or even that they think in any meaningful sense.

Will embodied AI lead to truly thinking machines, or just very dexterous robots? Right now, that’s impossible to say. Even if it’s the former, though, it would still be misleading to call it AGI.

To go back to the point about evolution: Just as it would be absurd to expect a human to think like a spider or an elephant, it would be absurd to expect an oblong robot with six wheels and four arms that doesn’t sleep, eat or have sex — let alone form friendships, wrestle with its conscience or contemplate its own mortality — to think like a human. It might be able to carry Grandma from the living room to the bedroom, but it will both conceive of and perform the task utterly differently from the way we would.

Many of the things AI will be capable of, we can’t even imagine today. The best way to track and make sense of that progress will be to stop trying to compare it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?


This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Gideon Lichfield is the former editor-in-chief of Wired magazine and MIT Technology Review. He writes Futurepolis, a newsletter on the future of democracy.

More stories like this are available on bloomberg.com/opinion
