In this post, you will learn how to use Sklearn's SelectFromModel class to reduce a training / test dataset to only those features whose importance exceeds a specified threshold. This is particularly useful when you combine a Sklearn Pipeline with a random forest implementation such as RandomForestClassifier for feature selection. You may refer to this post to see how RandomForestClassifier can be used to compute feature importance. SelectFromModel usage is illustrated with a Python code example.
Here are the steps for using SelectFromModel:
1. Fit an estimator (here, a RandomForestClassifier instance named forest) on the training data.
2. Create a SelectFromModel instance with the fitted estimator, a threshold on feature importance, and prefit=True.
3. Call transform() to reduce the dataset to the features whose importance is greater than the threshold.
Here is the Python code representing the above steps:
from sklearn.feature_selection import SelectFromModel
#
# Wrap the estimator; forest is an instance of RandomForestClassifier
# that has already been fit on the training data, so prefit=True is used
#
sfm = SelectFromModel(forest, threshold=0.1, prefit=True)
#
# Transform the training data set
#
X_train_selected = sfm.transform(X_train)
#
# Count of features whose importance value is greater than the threshold value
#
important_features_count = X_train_selected.shape[1]
#
# Here are the important features; get_support() returns a boolean
# mask over the columns, marking the selected features
#
important_features = X_train.columns[sfm.get_support()]
For the wine dataset, the above may produce output such as the following, listing the features whose importance is greater than the threshold value:
Index(['proline', 'flavanoids', 'color_intensity', 'od_dilutedwines', 'alcohal'], dtype='object')
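For completeness, here is a self-contained sketch that reproduces the workflow end to end. It assumes sklearn's built-in wine dataset, whose column names may differ slightly from the dataset used above, and illustrative values for n_estimators and random_state:

```python
import pandas as pd
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

# Load the wine dataset as a DataFrame so we keep column names
data = load_wine()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Fit the random forest first, then wrap it with prefit=True
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
sfm = SelectFromModel(forest, threshold=0.1, prefit=True)

# Reduce the training set to the features above the threshold
X_train_selected = sfm.transform(X_train)
selected_features = X_train.columns[sfm.get_support()]
print(selected_features.tolist())
print(X_train_selected.shape)
```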
Here is the visualization plot for important features:
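As mentioned at the start, SelectFromModel is often used as a stage in a Sklearn Pipeline, so that feature selection and the final model are fit together. Here is a minimal sketch; the final LogisticRegression classifier and the parameter values (threshold, n_estimators) are illustrative assumptions, not from the original example:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

pipe = Pipeline([
    # Stage 1: select features whose importance exceeds the threshold.
    # Inside a Pipeline, SelectFromModel fits the estimator itself,
    # so prefit=True is not used here.
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=100, random_state=42),
        threshold=0.1,
    )),
    # Stage 2: train the final classifier on the reduced feature set
    ("clf", LogisticRegression(max_iter=5000)),
])
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
print(accuracy)
```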