Generative Modeling in Machine Learning: Examples


Machine learning has rapidly evolved over the past few years, with new techniques and methods emerging regularly. One of the most exciting and promising areas in this field is generative modeling. Generative modeling refers to the creation of new data samples that are similar to existing data sets. This technique has gained immense popularity in recent times due to its ability to generate highly realistic images, videos, and music.

As a data scientist, it is crucial to understand generative modeling and its various applications. This powerful tool has been used in a wide range of fields, including computer vision, natural language processing (NLP), and even drug discovery. By learning generative modeling, data scientists can develop cutting-edge models that can simulate complex systems, generate new content, and even discover new patterns and relationships in data.

In this blog post, we will dive into the world of generative modeling in machine learning and explore some of its most exciting examples. We will also discuss some of the popular techniques used in generative modeling, such as encoder-decoder architectures, autoencoders, variational autoencoders, and generative adversarial networks (GANs). By the end of this blog post, you will have a solid understanding of generative modeling and why it is an essential concept for any data scientist to learn. So let’s get started!

What is Generative Modeling?

Generative modeling is the practice of building a probabilistic model that learns from an existing data set and creates new data similar to the data it was trained on. In other words, these models analyze the patterns in the data and generate new samples based on the learned patterns.

For example, let’s consider the task of generating realistic-looking images of faces. A generative model can be trained on a large dataset of real images of faces, which it uses to learn the underlying patterns and features that define a face. The model then generates new images of faces that resemble the ones in the original dataset. The generative models capture the complex relationships between the various elements that make up an image of a face, such as the shape of the eyes, nose, mouth, and hair, as well as the lighting, shading, and other environmental factors.

This is how one can understand what generative models are and how they work:

  • Let's say we have a set of observations, say, images of faces.
  • These face images must have been created by some unknown data distribution, say, \(P_{data}\).
  • The goal is to use the face images to train a model, say, \(P_{model}\), which can create samples of face images that look similar to the faces it has been trained on.
  • Such a generative model, \(P_{model}\), has the following properties:
    • It can be used to create new samples as desired.
    • The model is said to have high accuracy if the generated samples look as though they were drawn from the training data, and low accuracy if they do not.
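As a minimal illustration of these ideas, consider a toy one-dimensional data set instead of face images. Here \(P_{model}\) is just a Gaussian whose parameters are estimated from the observations; this is a deliberately simple sketch of the learn-then-sample loop, not the deep models discussed later:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Observations assumed to come from an unknown distribution P_data.
# (For illustration, we secretly draw them from a Gaussian ourselves.)
observations = rng.normal(loc=5.0, scale=2.0, size=10_000)

# Fit P_model: estimate the distribution's parameters from the data.
mu_hat = observations.mean()
sigma_hat = observations.std()

# Generate as many new samples from P_model as desired.
new_samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(mu_hat, sigma_hat, new_samples.shape)
```

If the estimated parameters are close to those of \(P_{data}\), the generated samples are statistically indistinguishable from the training data, which is exactly the "high accuracy" property described above.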

Generative Models & Real-life Examples

In this section, we will explore how generative modeling can be used in real-world scenarios associated with various business domains including art, music, healthcare, finance, procurement and more.

  • Procurement: Generative models can be used in procurement to draft contracts based on existing contract data. For example, suppose a procurement team wants to create a new contract for a specific type of product or service, such as IT consulting services. The team can provide a generative model with a large dataset of existing IT consulting contracts, from which the model learns the common patterns and features of such contracts. For instance, the model can learn that most IT consulting contracts include standard clauses related to deliverables, timelines, payment terms, and intellectual property rights. Based on this learning, the model can generate a new contract that incorporates these clauses while customizing the language and terms to reflect the specific requirements of the procurement team.
  • Drug Discovery: One way generative models are used in drug discovery is through the generation of molecules that are optimized for a particular property or activity, such as potency or selectivity. For example, a generative model can be trained on a large dataset of known drugs and their corresponding properties to learn the patterns and features that are important for drug activity. The model can then be used to generate new molecules that are optimized for specific properties of interest, which can then be tested for their effectiveness in treating a particular disease.
  • Finance: Generative models can also be used in fraud detection by generating synthetic data that simulates fraudulent activities. This synthetic data can then be used to train machine learning models to detect and prevent fraud. The models can learn from the synthetic data to identify patterns and anomalies in real-world data that may indicate fraudulent behavior.
  • Music: Generative models have been used in the field of music to create original compositions, generate new sounds, and aid in music analysis. In music composition, generative models can be used to create original pieces of music by learning from existing music data sets. These models can be trained to generate music that follows certain stylistic or structural rules, or to produce music that is completely unique.

Generative Modeling using RNN Example

Recurrent Neural Networks (RNNs) can be used to build a generative model that learns the patterns in a given text corpus and generates new text similar to the training data. An RNN is a type of neural network designed for sequential data such as text. The basic idea behind an RNN is to carry a hidden state from each time step forward as an input to the next time step, allowing the network to capture temporal dependencies in the input data.

The RNN-based generative model can be trained on a corpus of text data by breaking the text into sequences of fixed length. Each sequence is fed to the RNN, which generates a prediction for the next character or word at each position. During training, the actual next character (rather than the model's own prediction) is typically supplied as input for the next time step, a technique known as teacher forcing. This process is repeated for each sequence in the corpus, and the RNN is trained to minimize the difference between its predicted outputs and the actual targets.
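The construction of training pairs can be sketched in a few lines of plain Python. The string and sequence length here are hypothetical placeholders; the point is that each target is simply the input sequence shifted one character to the right:

```python
text = "hello world"
seq_length = 4

# Build (input, target) pairs: the target is the input shifted one
# character to the right, so the model learns next-character prediction.
examples = []
for i in range(len(text) - seq_length):
    input_seq = text[i : i + seq_length]
    target_seq = text[i + 1 : i + seq_length + 1]
    examples.append((input_seq, target_seq))

print(examples[0])  # ('hell', 'ello')
```

In a real pipeline these character sequences would first be mapped to integer IDs and batched, but the shifting logic is the same.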

Once the RNN is trained, it can be used to generate new text by feeding in a seed sequence and predicting the next character or word. The prediction is appended to the sequence and fed back in, and this process is repeated iteratively to generate a complete text.
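The generation loop itself is easy to sketch. In this toy version, a simple lookup table of likely next characters stands in for a trained RNN's prediction step (the table and seed are invented for illustration); a real model would instead sample from the logits it produces:

```python
# A toy next-character "model": a fixed lookup of likely successors,
# standing in for a trained RNN's prediction step.
next_char = {"h": "e", "e": "l", "l": "o", "o": " ", " ": "h"}

def generate(seed, length):
    out = seed
    for _ in range(length):
        # Predict the next character from the latest one and append it;
        # this is the iterative feedback loop described above.
        out += next_char[out[-1]]
    return out

print(generate("h", 8))  # helo helo
```

Swapping the dictionary lookup for a neural network call (and sampling from its output distribution rather than always taking one fixed successor) turns this sketch into the real generation procedure.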

Here are the steps that need to be taken to build an RNN-based generative model that can generate text when trained on a corpus of text. The library used is TensorFlow:

  • First and foremost, the training environment is set up by importing TensorFlow and other libraries such as NumPy.
  • The text is read from a file. Text can also be fed as a constant input.
  • Text processing
    • As a first step, the text is converted into a numerical representation, for example by mapping each character to an integer ID.
    • Next, a dataset of training examples and targets is created. The text is divided into input sequences; for each input sequence, the corresponding target sequence contains text of the same length, shifted one character to the right.
    • Training batches are created by shuffling the dataset.
  • Build the model: The model architecture is created at this stage. It can consist of the following:
    • The input layer: a trainable lookup table that maps each character ID to a vector with a configurable number of embedding dimensions.
    • The recurrent layer: a neural network layer such as an RNN or LSTM.
    • The output layer, which produces one logit per character in the vocabulary.

      This is how it works: for each character fed into the model, the model looks up its embedding, runs the neural network (RNN or LSTM) one time step with the embedding as input, and applies the output layer to generate logits predicting the log-likelihood of the next character.
  • Train the model: The model is trained as a standard classification problem. Given the previous state of the RNN and the input at the current time step, the model predicts the class (logits) of the next character. During training, an optimizer and a loss function are attached.
  • Generate the text
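The embedding-lookup, recurrent-step, and output-layer mechanics described above can be sketched in plain NumPy. The weights here are random, untrained placeholders, and the simple tanh cell stands in for the trainable layers a real TensorFlow implementation would use (such as tf.keras.layers.Embedding and an LSTM or GRU layer):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

vocab_size, embed_dim, hidden_dim = 65, 8, 16

# Input layer: lookup table mapping character IDs to embedding vectors.
embedding = rng.normal(size=(vocab_size, embed_dim))
# Simple RNN cell parameters (a stand-in for an LSTM/GRU layer).
W_xh = rng.normal(size=(embed_dim, hidden_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)
# Output layer: maps the hidden state to one logit per character.
W_hy = rng.normal(size=(hidden_dim, vocab_size))
b_y = np.zeros(vocab_size)

def step(char_id, h):
    """One time step: embed the character, update the RNN state,
    and produce logits over the next character."""
    x = embedding[char_id]
    h_new = np.tanh(x @ W_xh + h @ W_hh + b_h)
    logits = h_new @ W_hy + b_y
    return logits, h_new

h = np.zeros(hidden_dim)
logits, h = step(char_id=3, h=h)
print(logits.shape)  # (65,)
```

Training would adjust these weight matrices via an optimizer and a cross-entropy loss over the logits; generation would repeatedly sample a character from the logits and feed it back in as the next input.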

This has been described in detail, with a TensorFlow Python code example, on this Google page – Text generation with RNN.


In conclusion, generative modeling is a powerful technique in machine learning that allows us to generate new data from a given dataset. By understanding the underlying patterns and structures in the input data, we can use generative models to create new samples that closely resemble the original data.

We have seen that generative modeling has a wide range of applications in various industries such as finance, healthcare, procurement, and music. In finance, generative models can be used for predicting stock prices and identifying fraud. In healthcare, they can generate synthetic medical images for training machine learning models. In procurement, they can help manage contracts, optimize supply chain management, and reduce costs. And in music, they can generate new songs and improve music recommendation systems.

One of the most popular approaches to generative modeling uses Recurrent Neural Networks (RNNs). RNNs are particularly well suited to modeling sequential data such as text and music, and they have been used successfully in many applications such as language modeling and text generation. If you want to learn more, please drop a message and I will reach out to you.

Ajitesh Kumar

I have recently been working in the area of data analytics, including data science and machine learning / deep learning. I am also passionate about different technologies, including programming languages such as Java/JEE, Javascript, Python, R, Julia, etc., and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big data, etc. For the latest updates and blogs, follow us on Twitter. I would love to connect with you on Linkedin. Check out my latest book, First Principles Thinking: Building winning products using first principles thinking.
Posted in Data Science, Deep Learning, Machine Learning.
