Why the AGI discussion is getting heated again

Every once in a while, the argument that Artificial General Intelligence (AGI) is right around the corner resumes. And right now, we're in the middle of one of those cycles. Tech entrepreneurs warn that AGI will arrive like an alien invasion. The media is flooded with reports of AI systems that have mastered language and are moving toward general intelligence. And social media is full of heated discussions about deep neural networks and consciousness.

Recent years have seen some truly impressive advances in AI, and scientists have made progress in some of the field's most challenging areas.

But as has often happened throughout AI's decades-long history, part of the current rhetoric surrounding these advances may be unjustified hype. And there are areas of research that have not received much attention, partly due to the growing influence of large tech companies on artificial intelligence.

Pushing the boundaries of deep learning

In the early 2010s, a group of researchers won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) by a wide margin using a deep learning model. Since then, deep learning has become a major focus of AI research.

Deep learning has managed to make progress on many tasks that were previously very challenging for computers, including image classification, object detection, speech recognition, and natural language processing.

However, the growing interest in deep learning has also highlighted some of its shortcomings, including limited generalization, trouble with causal reasoning, and lack of interpretability. Moreover, most deep learning applications require manually annotated training examples, which has become a bottleneck.

Recent years have seen interesting progress in some of these areas. One major innovation is the transformer, a deep learning architecture introduced in 2017. An important characteristic of transformers is their capacity to scale: researchers have shown that the performance of transformer models keeps improving as they grow larger and are trained on more data. Transformers can also be pretrained through unsupervised or self-supervised learning, which means they can take advantage of the terabytes of unlabeled data available on the internet.
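The self-supervised objective behind this kind of pretraining can be illustrated with a toy masked-token sketch (hypothetical code, not taken from any of the models mentioned): a fraction of the tokens in a sentence is hidden, and the model's training target is to predict the originals from the surrounding context.

```python
import random

def mask_tokens(tokens, mask_rate=0.3, mask_token="[MASK]", seed=1):
    """Hide a random fraction of tokens; the pretraining objective
    is to predict the hidden originals from context."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            targets[i] = tok          # ground truth the model must recover
            masked.append(mask_token)
        else:
            masked.append(tok)
    return masked, targets

sentence = "transformers can be pretrained on huge amounts of unlabeled text".split()
masked, targets = mask_tokens(sentence)
```

Because the labels are derived from the raw text itself, no human annotation is needed, which is what lets these models train on web-scale data.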

Transformers have spawned a generation of large language models (LLMs) such as OpenAI's GPT-3, DeepMind's Gopher, and Google's PaLM. In some cases, researchers have shown that LLMs can perform many tasks without additional training, or with very few training examples (also called zero-, one-, or few-shot learning). While transformers were initially designed for language tasks, they have expanded into other areas, including computer vision, speech recognition, drug discovery, and source code generation.
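Few-shot learning here simply means showing the model a handful of solved examples inside its input prompt rather than updating its weights. A minimal sketch of how such a prompt might be assembled (the helper name and the Input/Output format are illustrative, not any specific model's API):

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: a task description, a handful of
    solved examples, then the new input for the model to complete."""
    lines = [task_description]
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    lines.append(f"Input: {query}\nOutput:")      # model continues from here
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea otter", "loutre de mer")],
    "cat",
)
```

The string ends mid-pattern, so a model trained to continue text will tend to fill in the missing output; with zero examples the same construction becomes zero-shot prompting.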

More recent work has focused on bringing together multiple modalities. CLIP, for example, is a deep learning architecture developed by researchers at OpenAI that trains a model to learn the relationship between text and images. Instead of the carefully annotated images used in previous deep learning models, CLIP is trained on images and captions that are abundantly available on the internet. This enables it to learn a wide range of vision and language tasks. CLIP is the architecture used in OpenAI's DALL-E 2, an AI system that can create stunning images from text descriptions. DALL-E 2 seems to have overcome some of the limitations of previous generative deep learning models, including semantic consistency (i.e., understanding the relationships between different objects in an image).
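CLIP's core idea, matching images to captions by comparing their embeddings in a shared space, can be sketched with toy vectors (the embeddings below are made up for illustration; in the real system they come from trained image and text encoders):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up embeddings standing in for the outputs of CLIP's image and
# text encoders, which map both modalities into one shared space.
image_embedding = [0.9, 0.1, 0.0]
caption_embeddings = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a car": [0.0, 0.1, 0.9],
}

# Zero-shot classification: pick the caption whose embedding lies
# closest to the image embedding.
best_caption = max(caption_embeddings,
                   key=lambda c: cosine(image_embedding, caption_embeddings[c]))
```

Training pushes an image's embedding toward its true caption's embedding and away from the others, which is what makes this nearest-caption lookup work without task-specific labels.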

Gato, DeepMind's latest AI system, takes the multimodal approach one step further by bringing text, images, proprioceptive information, and other types of data into a single transformer model. Gato uses one model to learn and perform many tasks, including playing Atari games, captioning images, chatting, and stacking blocks with a real robot arm. The model performs at a mediocre level on many of these tasks, but researchers at DeepMind believe it is only a matter of time before an AI system like Gato can do all of them well. DeepMind research director Nando de Freitas recently tweeted, "It's all about scale now! The game is over!", suggesting that creating larger versions of Gato will eventually reach general intelligence.

Is Deep Learning the ultimate answer to AGI?

Recent advances in deep learning seem to align with the vision of its main proponents. Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, three Turing Award-winning scientists known for their pioneering contributions to deep learning, have suggested that better neural network architectures will eventually transcend the current limits of deep learning. LeCun, in particular, is an advocate of self-supervised learning, which is now widely used in training transformers and CLIP models (although LeCun is working on a more sophisticated form of self-supervised learning, and it is worth noting that he takes a skeptical view of the term AGI and prefers "human-level intelligence").

On the other hand, some scientists point out that despite its progress, deep learning still lacks some of the most essential aspects of intelligence. Among them are Gary Marcus and Emily M. Bender, both of whom have thoroughly documented the limitations of large language models such as GPT-3 and text-to-image generators such as DALL-E 2.

Marcus, who has written a book on the limitations of deep learning, is part of a group of scientists who support a hybrid approach that brings together different AI techniques. One hybrid approach that has recently gained traction is neuro-symbolic AI, which combines artificial neural networks with symbolic systems, a branch of AI that fell by the wayside with the rise of deep learning.

Many projects have demonstrated that neuro-symbolic systems can address some of the limitations current AI systems suffer from, including the lack of common sense, causal reasoning, creativity, and intuitive physics. Neuro-symbolic systems have also been shown to require far less data and computational resources than pure deep learning systems.

The role of big tech

Attempts to solve AI problems with larger deep learning models have increased the power of companies that can afford the rising cost of research.

In recent years, AI researchers and research labs have gravitated toward large tech companies with deep pockets. The UK-based DeepMind was acquired by Google in 2014 for a reported $600 million. OpenAI, which began as a nonprofit research lab in 2015, switched to a capped-profit structure in 2019 and received $1 billion in funding from Microsoft. Today, OpenAI no longer releases its AI models as open-source projects and licenses them exclusively to Microsoft. Other big tech companies such as Facebook, Amazon, Apple, and Nvidia have set up their own cash-burning AI research labs and use lucrative salaries to lure scientists away from academic institutions and smaller organizations.

This, in turn, has given these companies the power to steer AI research in directions that benefit them (i.e., larger and more expensive deep learning models that only they can fund). While the wealth of big tech has done much to advance deep learning, it has come at the expense of other areas of research such as neuro-symbolic AI.

For the moment, however, it seems that throwing more data and compute at transformers and other deep learning models is still yielding results. It will be interesting to see how far this approach can go, and how close it will bring us to the ever-elusive goal of thinking machines.

