DeepMind researcher claims new ‘Gato’ AI could lead to AGI, says ‘the game is over!’

Scaling Uber Alles

How did we get here?

DeepMind recently released a research paper and published a blog post on its new multi-modal AI system. Dubbed ‘Gato,’ the system is capable of performing hundreds of different tasks ranging from controlling a robot arm to writing poetry.

The company’s dubbed it a “generalist” system, but hasn’t gone so far as to say it’s in any way capable of general intelligence.

It’s easy to confuse something like Gato with AGI. The difference, however, is that a general intelligence could learn to do new things without prior training.

In my opinion piece, I compared Gato to a gaming console.

DeepMind’s Nando de Freitas disagrees. That’s not surprising, but what I did find shocking was the second tweet in their thread.

De Freitas’ tweet addressing “philosophy about symbols” might have been written in direct response to my opinion piece. But as surely as the criminals of Gotham know what the Bat Signal means, those who follow the world of AI know that mentioning symbols and AGI together is a surefire way to summon Gary Marcus.

Enter Gary

Marcus, a world-renowned scientist, author, and the founder and CEO of Robust.AI, has spent the past several years advocating for a new approach to AGI. He believes the entire field needs to change its core methodology for building AGI, and wrote a best-selling book to that effect, “Rebooting AI,” with Ernest Davis.

He’s debated and discussed his ideas with everyone from Facebook’s Yann LeCun to the University of Montreal’s Yoshua Bengio.

And, in the inaugural edition of his newsletter on Substack, Marcus took on de Freitas’ statements in a fiery (yet respectful) rebuttal.

Marcus dubs the hyper-scaling of AI models, a perceived path to AGI, “Scaling Uber Alles,” and refers to these systems as attempts at “Alt intelligence” — as opposed to artificial intelligence that tries to imitate human intelligence.

On the subject of DeepMind’s exploration, Marcus describes the problem of incomprehensibility that plagues the AI industry’s giant-sized models.

In essence, Marcus argues that no matter how impressive systems such as OpenAI’s DALL-E (a model that generates bespoke images from text descriptions) or DeepMind’s Gato become, they’re still incredibly brittle.

Marcus makes the point with humor, but there’s a serious undertone to it. When a DeepMind researcher declares “the game is over,” it conjures a vision of the immediate or near-term future that doesn’t make sense.

AGI? Really?

Neither Gato, DALL-E, nor GPT-3 is robust enough for unfettered public consumption. Each requires hard filters to keep it from tilting toward bias and, worse, none of them is capable of outputting solid results consistently. That’s not just because we haven’t figured out the secret sauce for coding AGI, but also because human problems are often hard, and they don’t always have a single, trainable solution.

It’s unclear how scaling, even coupled with breakthrough logic algorithms, could fix these issues.

That doesn’t mean giant-sized models aren’t useful or worthy endeavors.

What DeepMind, OpenAI, and similar labs are doing is very important. It’s science at the cutting-edge.

But to declare the game is over? To insinuate that AGI will arise from a system whose distinguishing contribution is how it serves models? Gato is amazing, but that feels like a stretch.

There’s nothing in de Freitas’ spirited rebuttal to change my opinion.

Gato’s creators are obviously brilliant. I’m not pessimistic about AGI because Gato isn’t mind-blowing enough. Quite the opposite, in fact.

I fear AGI is decades, perhaps centuries, away precisely because of Gato, DALL-E, and GPT-3. They each demonstrate a breakthrough in our ability to manipulate computers.

It’s nothing short of miraculous to see a machine pull off Copperfield-esque feats of misdirection and prestidigitation, especially when you understand that said machine is no more intelligent than a toaster (and demonstrably stupider than the dumbest mouse).

To me, it’s obvious we’ll need more than just…more… to take modern AI from the equivalent of “is this your card?” to the Gandalfian sorcery of AGI we’ve been promised.

Marcus concludes his newsletter on the same note.

Story by Tristan Greene

Tristan is a futurist covering human-centric artificial intelligence advances, quantum computing, STEM, physics, and space stuff. Pronouns: He/him
