In this post, you will learn how to use Python's BeautifulSoup and NLTK to extract words from HTML pages and perform text analysis such as frequency distribution. The example in this post reads the HTML page directly from the website and performs the text analysis on it. However, you could also download the web pages first and then perform the text analysis by loading the pages from local storage.
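As a side note, the local-storage alternative could look roughly like the sketch below; the file name saved_page.html is just an assumed example, and the rest mirrors the code that follows.
from urllib import request
from bs4 import BeautifulSoup
#
# Download the page once and save the HTML to local storage
#
url = "https://edition.cnn.com/2020/10/06/politics/donald-trump-coronavirus-white-house-biden/index.html"
with open("saved_page.html", "wb") as f:
    f.write(request.urlopen(url).read())
#
# Later, load the saved HTML from disk instead of hitting the website
#
with open("saved_page.html", "rb") as f:
    raw = BeautifulSoup(f.read(), 'html.parser').get_text()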
Here is the Python code for extracting text from HTML pages and performing text analysis. Pay attention to the following steps in the code given below: reading the HTML with urllib, cleaning it with BeautifulSoup's get_text method, tokenizing the text with NLTK, filtering out HTML/CSS-related words, and plotting the frequency distribution with FreqDist.
from __future__ import division
import nltk, re, pprint
from urllib import request
from bs4 import BeautifulSoup
from nltk.probability import FreqDist
#
# Assign URL of the web page to be processed
#
url = "https://edition.cnn.com/2020/10/06/politics/donald-trump-coronavirus-white-house-biden/index.html"
#
# Read the HTML from the URL
#
html = request.urlopen(url).read()
#
# Get text (clean html) using BeautifulSoup get_text method
#
raw = BeautifulSoup(html, 'html.parser').get_text()
#
# Tokenize or get words; this requires the NLTK 'punkt' tokenizer
# data (run nltk.download('punkt') once if it is not yet installed)
#
tokens = nltk.word_tokenize(raw)
#
# HTML Words
#
htmlwords = ['https', 'http', 'display', 'button', 'hover',
'color', 'background', 'height', 'none', 'target',
'WebPage', 'reload', 'fieldset', 'padding', 'input',
'select', 'textarea', 'html', 'form', 'cursor',
'overflow', 'format', 'italic', 'normal', 'truetype',
'before', 'name', 'label', 'float', 'title', 'arial', 'type',
'block', 'audio', 'inline', 'canvas', 'margin', 'serif', 'menu',
'woff', 'content', 'fixed', 'media', 'position', 'relative', 'hidden',
'width', 'clear', 'body', 'standard', 'expandable', 'helvetica',
'fullwidth', 'embed', 'expandfull', 'fullstandardwidth', 'left', 'middle',
'iframe', 'rgba', 'selected', 'scroll', 'opacity',
'center', 'false', 'right']
#
# Get words meeting criteria such as words having only alphabets,
# words of length > 4 and words not in htmlwords
#
words = [w for w in tokens if w.isalpha() and len(w) > 4 and w.lower() not in htmlwords]
#
# Create NLTK Text instance to use NLTK APIs
#
text = nltk.Text(words)
#
# Create Frequency distribution to see frequency of words
#
freqdist = FreqDist(text)
freqdist.plot(30)
Here is how the frequency distribution looks for the HTML page retrieved from the CNN website. Note that the frequency distribution indicates that the page is about politics, Trump, etc.
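If you would rather inspect the results programmatically instead of (or in addition to) plotting, the freqdist and text objects created above expose methods such as most_common and concordance; the word "Trump" below is just an example query.
#
# Print the 10 most frequent words and their counts
#
print(freqdist.most_common(10))
#
# Show occurrences of a word in context using the NLTK Text instance
#
text.concordance("Trump")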
Here is how the cumulative frequency distribution plot looks. All you need to do is pass cumulative=True to the freqdist.plot method.
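For reference, the call for the cumulative plot of the same freqdist object would look like this:
#
# Plot the cumulative frequency distribution of the 30 most common words
#
freqdist.plot(30, cumulative=True)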
Here is the summary of what you learned in this post regarding extracting text from HTML pages using BeautifulSoup and processing it with NLTK APIs: read the HTML with urllib's request.urlopen, clean it with BeautifulSoup's get_text method, tokenize the raw text with nltk.word_tokenize, filter out HTML/CSS-related tokens, and use FreqDist to plot the frequency distribution of the remaining words.