Deep generative models could offer the most promising developments in AI


This article was contributed by Rick Hao, Lead Deep Tech Partner at Pan-European VC Speedinvest.

With an annual growth rate of 44%, the market for AI and machine learning is attracting constant interest from business leaders in every industry. With AI projected to boost the GDP of some local economies by 26% by 2030, the rationale for the investment and publicity is easy to see.

Among AI researchers and data scientists, one of the key steps in ensuring that AI delivers on its promise of enhanced growth and productivity is expanding the range of models and capabilities available for organizations to use. At the top of the agenda is the development, training, and deployment of deep generative models (DGMs), which I consider to be some of the most compelling models poised for industry use. But why?

What are DGMs?

You’ve probably already seen the results of DGMs: they’re the same type of AI model behind deepfakes and impressive AI-generated art. DGMs have long excited academics and researchers in computer science labs because they bring together two very important technologies, representing the confluence of deep learning and probabilistic modeling: the generative model paradigm and neural networks.

Generative models are one of the two main categories of AI models and, as the name suggests, they can take a dataset and generate new data points based on the input received so far. This contrasts with the more commonly used — and easier to develop — discriminative models, which take a data point in a dataset and label or classify it.
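The distinction can be made concrete with a toy sketch. This is a minimal illustration, not production code, and the dataset and names are invented: a generative approach fits a distribution to the data and can sample brand-new points from it, while a discriminative approach only learns a decision boundary for labeling existing points.

```python
import random
import statistics

random.seed(0)

# Toy 1-D dataset: measurements drawn from two hypothetical classes.
class_a = [random.gauss(160.0, 5.0) for _ in range(500)]
class_b = [random.gauss(180.0, 5.0) for _ in range(500)]

# Generative approach: model class A's distribution, then SAMPLE
# brand-new synthetic data points from the fitted model.
mu_a, sd_a = statistics.mean(class_a), statistics.stdev(class_a)
new_points = [random.gauss(mu_a, sd_a) for _ in range(3)]

# Discriminative approach: only learn a decision boundary and LABEL
# existing points -- it has no way to generate new ones.
mu_b = statistics.mean(class_b)
boundary = (mu_a + mu_b) / 2.0

def classify(x):
    """Assign a point to class A or B via the learned boundary."""
    return "A" if x < boundary else "B"
```

A real DGM replaces the single Gaussian with a deep neural network, but the division of labor is the same: generative models produce data, discriminative models judge it.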

The “deep” in “DGM” refers to the fact that these generative models take advantage of deep neural networks. Neural networks are computing architectures that give programs the ability to learn new patterns over time — what makes a neural network “deep” is the added complexity of multiple hidden “layers” of inference between the model’s input and its output. This depth lets deep neural networks work with extremely complex datasets with many variables in play.
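The idea of stacked hidden layers can be sketched in a few lines of pure Python. This is an illustrative forward pass only (no training), with invented layer sizes, to show what “depth” means structurally: each hidden layer transforms the previous layer’s output before the final output layer is reached.

```python
import random

random.seed(1)

def relu(vec):
    """Common nonlinearity applied after each hidden layer."""
    return [max(0.0, x) for x in vec]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i(in_i * w[i][j]) + b[j]."""
    return [
        sum(x * w for x, w in zip(inputs, column)) + b
        for column, b in zip(zip(*weights), biases)
    ]

def init_layer(n_in, n_out):
    """Random weights and zero biases for a layer of given shape."""
    w = [[random.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_in)]
    return w, [0.0] * n_out

# "Deep" = multiple hidden layers between input and output:
# 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 2)]

def forward(x):
    for i, (w, b) in enumerate(layers):
        x = dense(x, w, b)
        if i < len(layers) - 1:  # hidden layers get the nonlinearity
            x = relu(x)
    return x

out = forward([0.1, 0.2, 0.3, 0.4])
```

Production systems use frameworks and many more layers and parameters, but the principle — repeated nonlinear transformations between input and output — is the same.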

Taken together, this means a DGM is a model that can generate new data points based on the data it is fed, and that can handle particularly complex datasets and subject matter.

Opportunities for DGMs

As mentioned above, DGMs already have some significant creative and artistic uses, such as deepfakes or art generation. But the full range of potential commercial and industrial applications for DGMs is wide, and promises to up-end a variety of sectors.

Consider, for example, the problem of protein folding. Protein folding — discovering the 3D structure of a protein — lets us determine which drugs and compounds interact with different types of human tissue, and how. This is essential for drug discovery and medical innovation, but working out how a protein folds is very difficult. Scientists have to dissolve and crystallize a protein before analyzing it, which means the whole process for a single protein can take weeks or months. Even traditional deep learning models are poorly suited to the protein-folding problem, since their focus is primarily on classifying existing datasets rather than generating their own output.

In contrast, last year the DeepMind team’s AlphaFold model proved able to reliably predict how proteins fold based on data about their chemical composition. By generating results in hours or minutes, AlphaFold can save months of laboratory work and dramatically accelerate research in almost every field of biology.

We are also seeing DGMs emerge in other domains. Last month, DeepMind released AlphaCode, a code-generating AI model that outperformed the average developer in trials. And the applicability of DGMs extends to fields as varied as physics, financial modeling, and logistics: by learning the subtle and complex patterns that humans and other deep learning networks cannot find, DGMs promise to produce astonishing and useful results in almost every field.


Challenges facing DGMs

DGMs face some significant technical challenges, such as the difficulty of training them optimally (especially with limited datasets) and of ensuring they deliver consistently accurate output in real-world applications. This is a key driver of the need for further investment, to ensure that DGMs can be widely deployed in production environments and so deliver on their economic and social promise.

Beyond the technical hurdles, however, a major challenge for DGMs lies in ethics and compliance. Because of their complexity, a DGM’s decision-making process is very hard to understand or explain, especially for those unfamiliar with its architecture or operation. This lack of explainability can put an AI model at risk of developing unwarranted or unethical biases without its operators’ knowledge, in turn generating outputs that are inaccurate or discriminatory.

Furthermore, the fact that DGMs operate at such a high level of complexity means there is a risk that their results will be hard to reproduce. This difficulty with reproducibility can make it hard for researchers, regulators, or the general public to trust a model’s output.

Ultimately, to minimize the risks around explainability and reproducibility, the teams and data scientists looking to take advantage of DGMs need to make sure they follow best practices in formulating their models and use sound explainability tools in their deployments.

While they are only just beginning to enter production environments, DGMs represent some of the most promising developments in the AI world. By discerning some of the subtlest and most fundamental patterns in society and nature, these models will prove transformative in almost every industry. And despite the challenges of ensuring compliance and transparency, there is every reason to be optimistic and excited about what DGMs promise for technology, our economy, and society as a whole.

Rick Hao is Lead Deep Tech Partner at Pan-European VC Speedinvest.


