Is DeepMind’s Gato the world’s first AGI?

Artificial General Intelligence (AGI) is back in the news thanks to the recent introduction of Gato from DeepMind. For many, AGI conjures images of Skynet (of Terminator lore), which was originally designed as threat-analysis software for the military but quickly came to see humanity as the enemy. While fictional, this should give us pause, especially as militaries around the world are pursuing AI-based weapons.

However, Gato does not appear to raise any of these concerns. The deep learning transformer model is described as a “generalist agent” that purports to perform 604 distinct and mostly physical tasks with varying modalities, observations and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and, in that regard, appears to be a step towards AGI.

A Generalist Agent: Gato can sense and act with different embodiments across a wide range of environments using a single neural network with the same set of weights. Gato was trained on 604 distinct tasks with varying modalities, observations and action specifications. Source: DeepMind

Multimodal neural networks

Multimodal systems are not new, as evidenced by GPT-3 and others. What is arguably new is the intent. By design, GPT-3 was intended to be a large language model for text generation. Its ability to create images from captions, generate programming code and perform other functions were add-on benefits that emerged after the fact, often to the surprise of AI experts.

By comparison, Gato is intentionally designed to address many distinct functions. DeepMind explains: “The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens.”
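To make that quote concrete, here is a toy sketch of the underlying idea: every modality is serialized into one flat token sequence, so a single next-token predictor can handle text, images and robot actions alike. The token ranges, bin count and character-level text encoding below are illustrative assumptions for this sketch, not DeepMind’s actual implementation.

```python
# Conceptual sketch (not DeepMind's code): serializing different modalities
# into a single flat token sequence, as the Gato paper describes.

def tokenize_text(text):
    # Text: map each character to an ID in [0, 256).
    # Real systems use subword vocabularies; this is a stand-in.
    return [ord(c) % 256 for c in text]

def tokenize_continuous(values, vocab_offset=256, bins=1024):
    # Continuous values (e.g. joint torques in [-1, 1]) are discretized
    # into uniform bins and shifted into their own token range so they
    # never collide with text tokens.
    tokens = []
    for v in values:
        v = max(-1.0, min(1.0, v))                  # clamp to [-1, 1]
        bin_id = int((v + 1.0) / 2.0 * (bins - 1))  # 0 .. bins-1
        tokens.append(vocab_offset + bin_id)
    return tokens

def build_episode(observation_text, action_torques):
    # One sequence interleaves observation and action tokens; the model
    # learns to predict the next token regardless of its modality.
    return tokenize_text(observation_text) + tokenize_continuous(action_torques)

seq = build_episode("stack the red block", [0.25, -0.7, 0.1])
print(len(seq), seq[-3:])
```

The point of the design is that “what to output” is not hard-coded: text tokens and action tokens live in one vocabulary, and context alone determines which range the model emits next.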

Although DeepMind claims that Gato outperforms humans for many of these tasks, the first iteration yields less-than-impressive results on several activities. Observers have noted that it does not perform especially well on many of the 604 tasks, with one summing it up as: “One AI program that does a so-so job at a lot of things.”

But this dismissal misses the point. Until now, there has only been “narrow AI” or “weak AI,” defined as being adroit at a single dedicated purpose, with “single purpose” meaning two things:

  1. An algorithm designed to do one thing (say, develop a beer recipe) cannot be used for anything else (play a video game, for example).
  2. Anything an algorithm “learns” cannot be effectively transferred to another algorithm designed for a different specific purpose.

For example, AlphaGo, the DeepMind neural network that surpassed the human world champion at the game of Go, cannot play other games, let alone meet any other need, no matter how simple those games might be.

Strong AI

The other end of the AI spectrum is deemed “strong AI” or, alternatively, AGI. This would be a single AI system – or possibly a group of linked systems – that could be applied to any task or problem. Unlike narrow AI algorithms, knowledge gained by a general AI can be shared and retained among system components.

In a general AI model, the algorithm that beat the world’s best at Go would be able to learn chess or any other game, as well as take on additional tasks. AGI is generally conceived of as an intelligent system that can act and think much like humans. Murray Shanahan, professor of cognitive robotics at Imperial College London, said on the Exponential View podcast that AGI is “in some sense as smart as humans and capable of the same level of generalization as humans are capable of, and possesses common sense that humans have.”

However, unlike humans, it would operate at the speed of the fastest computer systems.

A matter of scale

DeepMind researcher Nando de Freitas believes Gato is effectively an AGI demonstration, lacking only the sophistication and scale that can be achieved through further model refinement and additional computing power. At 1.18 billion parameters, the Gato model is relatively small, essentially a proof of concept that leaves much performance upside with additional scaling.

Scaling AI models requires more data and more computing power for algorithm training. We are awash in data. Last year, industry analyst firm IDC said, “The amount of digital data created over the next five years will be greater than twice the amount of data created since the advent of digital storage.” Moreover, computing power has increased exponentially for decades, though there is evidence this pace is slowing due to constraints on the physical size of semiconductors.

However, The Wall Street Journal notes that chipmakers have pushed the technological envelope, finding new ways to cram in more computing power. Mostly this is done through heterogeneous design, building chips from a variety of specialist modules. This approach is proving effective, at least for the near term, and will continue to drive model scaling.

Geoffrey Hinton, a University of Toronto professor who is a pioneer of deep learning, noted the role of scale: “There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses.”

AI models with more than one trillion parameters – the neural network equivalent of synapses – are now emerging, with Google having developed a 1.6-trillion-parameter model. Yet this is not an example of AGI. The consensus of several surveys of AI experts suggests AGI is still decades in the future. Either Hinton’s assessment captures only part of what AGI requires, or the expert opinion is conservative.

Perhaps scaling is best seen in the advance from GPT-2 to GPT-3, where the difference was largely more data, more parameters – 1.5 billion with GPT-2 versus 175 billion with GPT-3 – and more computing power, e.g., more and faster processors, some designed specifically for AI functionality. When GPT-3 appeared, Arram Sabeti, a San Francisco-based developer and artist, tweeted: “Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”
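For a rough sense of these jumps, a back-of-the-envelope calculation using only the figures cited in this article (the parameter counts and Hinton’s trillion-synapse estimate):

```python
# Back-of-the-envelope scaling comparison using figures from the text.
gpt2_params = 1.5e9     # GPT-2: 1.5 billion parameters
gpt3_params = 175e9     # GPT-3: 175 billion parameters
gato_params = 1.18e9    # Gato:  1.18 billion parameters
hinton_synapses = 1e12  # Hinton's rough estimate for general AI

# GPT-2 -> GPT-3 was roughly a 117x jump in parameters alone.
print(f"GPT-2 -> GPT-3 scale-up: {gpt3_params / gpt2_params:.0f}x")

# Gato would need a ~847x jump to reach the trillion-synapse mark.
print(f"Gato -> trillion-synapse estimate: {hinton_synapses / gato_params:.0f}x")
```

By this crude measure, the gap between Gato and Hinton’s trillion-synapse benchmark is several times larger than the GPT-2-to-GPT-3 leap – which is precisely why de Freitas frames the remaining work as a matter of scale.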

However, deep learning skeptic Gary Marcus believes “there are serious holes in the scaling argument.” He claims that the benchmarks others point to, such as predicting the next word in a sentence, are not the same as the “deep comprehension true AI [AGI] would require.”

Yann LeCun, chief AI scientist at Facebook owner Meta and a past winner of the Turing Award for AI, said in a recent blog post after Gato’s release that there is no such thing as AGI at present. Moreover, he does not believe that scaling up models will achieve this level; additional new concepts will be needed. He concedes, however, that some of those concepts, such as generalized self-supervised learning, are “possibly around the corner.”

Jacob Andreas, an assistant professor at MIT, argues that Gato can do many things at the same time, but that is not the same as being able to meaningfully adapt to new tasks that are different from what it was trained on.

While Gato may not be an example of AGI, there is no denying that it provides a significant step beyond narrow AI. It offers further evidence that we are entering a twilight zone, an area between narrow and general AI. The AGI discussed by Shanahan and others may still be decades in the future, though Gato may have accelerated the timeline.

Gary Grossman is the Senior VP of Technology Practice at Edelman and Global Lead of the Edelman AI Center of Excellence.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including tech people working on data, can share data-related insights and innovations.

If you want to read about cutting-edge ideas, up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!

Read more from DataDecisionMakers
