Large Language Models

Completion Model vs Chat Model: Python Examples

In this blog, we will learn about the concepts of completion and chat large language models (LLMs) with the help of Python examples.

What’s the Completion Model in LLM?

A completion model is a type of LLM that takes a text input and generates a text output, called a completion. In other words, it generates text that continues from a given prompt or partial input. When provided with an initial piece of text, the model uses its trained knowledge to predict and generate the most likely subsequent text. Depending on the prompt, a completion model can generate summaries, translations, stories, code, lyrics, etc.

An example of a completion model is OpenAI’s GPT-3. LLMs in LangChain refer to completion models. The code for the completion model given below uses LangChain APIs.

The following Python code demonstrates a completion model using the LangChain API, which wraps the OpenAI API. The code can be executed in Google Colab. Note that the completion model takes a single text string as the prompt. Get your OpenAI API key from the OpenAI platform website.

pip install langchain-openai

import os
os.environ['OPENAI_API_KEY'] = "<your-openai-api-key>"  # never hardcode a real key in shared code

from langchain_openai import OpenAI

llm = OpenAI()  # defaults to an OpenAI completion model
text = "What is the capital of India?"

llm.invoke(text)  # takes a single string and returns a string

The output would look like the following:

The capital of India is New Delhi. 

Here is an example of conversation where the completion model is used:

Task: Write a story.
Prompt: “Once upon a time in a small village, there lived a young girl named Ella who dreamed of exploring the world beyond the mountains.”
Output: The model generates the next part of the story in one go.

What’s the Chat Model in LLM?

A chat model is a special kind of completion model that generates conversational responses. A chat model takes a list of messages as input (unlike a pure text completion model, which takes a single string). Each message in the list has a role (either system, user, or assistant) and associated content. The chat model tries to generate a new message for the assistant role, based on the previous messages and the system instruction.
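To make the role structure concrete, here is a sketch of a chat-model input in the plain-dict form used by the underlying OpenAI chat API. The message contents are hypothetical, purely for illustration:

```python
# A chat-model input is a list of role-tagged messages, not a single string.
# Roles: "system" sets behavior, "user" is the human, "assistant" is the model.
messages = [
    {"role": "system", "content": "You are a helpful geography tutor."},
    {"role": "user", "content": "What is the capital of India?"},
    {"role": "assistant", "content": "The capital of India is New Delhi."},
    {"role": "user", "content": "What is its population?"},
]

# The model's task: generate the next "assistant" message from this history.
roles = [m["role"] for m in messages]
```

LangChain wraps the same idea in message classes such as SystemMessage, HumanMessage, and AIMessage.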

The following Python code demonstrates the chat model using the LangChain API, which wraps the OpenAI API.

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

chat_model = ChatOpenAI(model="gpt-3.5-turbo-0125")

text = "What is the capital of India?"
messages = [HumanMessage(content=text)]  # wrap the text in a user-role message

chat_model.invoke(messages)  # returns an AIMessage

The output would look like the following:

AIMessage(content='The capital of India is New Delhi.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 14, 'total_tokens': 22}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-94edc8b6-2df1-4a40-8e03-43c8286b2548-0', usage_metadata={'input_tokens': 14, 'output_tokens': 8, 'total_tokens': 22})

Note that the completion model returns a plain string, while the chat model returns a message object (an AIMessage) whose content field holds the generated text.
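This return-type difference can be sketched without making an API call. Here SimpleNamespace stands in for LangChain's AIMessage purely for illustration; it is not the real class:

```python
from types import SimpleNamespace

# Completion-style result: just a plain string.
completion_output = "The capital of India is New Delhi."

# Chat-style result: a message object whose .content holds the text
# (SimpleNamespace mimics AIMessage here for illustration only).
chat_output = SimpleNamespace(content="The capital of India is New Delhi.")

# With a real chat model you would read chat_output.content, not chat_output.
text_from_chat = chat_output.content
```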

Here is an example of a conversation where the chat model is used:

Task: Customer support.
Prompt: “Hello, I need help with my order.”
Output:

  • User: “Hello, I need help with my order.”
  • Model: “Sure, I’d be happy to help. Can you please provide your order number?”
  • User: “It’s 12345.”
  • Model: “Thank you. Let me check the status of your order. One moment, please.”
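The turn-by-turn flow above can be sketched as a loop that accumulates message history. Here fake_support_model is a hypothetical stand-in for a real chat_model.invoke call, with canned replies, so the flow runs without an API key:

```python
def chat_turn(history, user_text, model):
    """Append the user's message, ask the model, append its reply."""
    history.append({"role": "user", "content": user_text})
    reply = model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def fake_support_model(history):
    # Hypothetical canned logic standing in for a real chat model.
    last = history[-1]["content"]
    if "order" in last.lower() and not any(ch.isdigit() for ch in last):
        return "Sure, I’d be happy to help. Can you please provide your order number?"
    return "Thank you. Let me check the status of your order. One moment, please."

history = [{"role": "system", "content": "You are a customer-support agent."}]
first = chat_turn(history, "Hello, I need help with my order.", fake_support_model)
second = chat_turn(history, "It's 12345.", fake_support_model)
```

The key point is that each turn is appended to the history, and the whole history is passed back to the model, which is what lets a chat model stay consistent across turns.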

Which one to use: Completion Model vs Chat Model?

Completion models are good for some of the following use cases:

  • Text generation (filling in blanks, writing different creative text formats)
  • Machine translation (predicting the most likely translation based on context)
  • Code completion (suggesting the next line of code based on the current code)

Chat models can be valuable for some of the following use cases:

  • Chatbots and virtual assistants
  • Question answering systems
  • Generating different conversational responses
Ajitesh Kumar
