Purpose of Dashboard: Advantages & Disadvantages
A dashboard is a visual representation of the most important information needed to achieve a goal. Dashboards form an integral part of analytical solutions. As the demand for data analytics continues to grow, dashboards are well positioned to become one of the most essential tools in any business toolkit. A dashboard can provide an overview of what is happening in your business and help you make better decisions. While there are many advantages to using a dashboard, there are also some disadvantages that you should be aware of. In this blog post, we will discuss the purpose of dashboards along with their advantages and disadvantages. As a product manager/business analyst and data …
Softmax Regression Explained with Python Example
In this post, you will learn about the concepts of Softmax regression/function with Python code examples and why we need them. As data scientists/machine learning enthusiasts, it is very important to understand the concepts of Softmax regression, as it helps in understanding algorithms such as neural networks, multinomial logistic regression, etc., in a better manner. Note that the Softmax function is used in various multiclass classification machine learning algorithms such as multinomial logistic regression (hence also called softmax regression), neural networks, etc. Before getting into the concepts of softmax regression, let’s understand what the softmax function is. What’s the Softmax function? Simply speaking, the Softmax function converts raw …
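As a quick, minimal sketch (not taken from the post itself), the softmax function can be written in a few lines of NumPy; the function name and sample scores below are purely illustrative:

```python
import numpy as np

def softmax(scores):
    """Convert raw scores (logits) into probabilities that sum to 1."""
    # Subtracting the max score keeps the exponentials numerically stable
    exp_scores = np.exp(scores - np.max(scores))
    return exp_scores / exp_scores.sum()

# Illustrative raw scores for a 3-class problem
print(softmax(np.array([2.0, 1.0, 0.1])))  # roughly [0.66, 0.24, 0.10]
```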
Information Theory, Machine Learning & Cross-Entropy Loss
What is information theory? How is information theory related to machine learning? These are some of the questions that we will answer in this blog post. Information theory is the study of how much information is present in the signals or data we receive from our environment. AI/machine learning (ML) is about extracting interesting representations/information from data, which are then used for building models. Thus, information theory fundamentals are key to processing information while building machine learning models. In this blog post, we will provide examples of information theory and entropy concepts so that you can better understand them. We will also discuss how concepts of …
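As a rough illustration of the entropy and cross-entropy ideas discussed in the post (this sketch is mine, not the post's; the helper names and probabilities are made up), both quantities can be computed directly from discrete probability distributions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum(p * log p) of a discrete distribution (in nats)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum(p * log q); the usual classification loss
    when p is the true (one-hot) label distribution and q the model's prediction."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q + 1e-12))

print(entropy([0.5, 0.5]))                               # ~0.693 nats for a fair coin
print(cross_entropy([1.0, 0.0, 0.0], [0.7, 0.2, 0.1]))   # ~0.357 = -log(0.7)
```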
Normal Distribution Explained with Python Examples
What is normal distribution? It’s a probability distribution that occurs in many real-world cases. In this blog post, you will learn about the concepts of the Normal Distribution with the help of Python examples. As a data scientist, you must get a good understanding of different probability distributions in statistics in order to understand your data better. The normal distribution is also called the Gaussian distribution or the Laplace-Gauss distribution. Normal Distribution with Python Example The normal distribution is the default probability distribution for many real-world scenarios. It represents a symmetric distribution where most of the observations cluster around a central peak, called the mean of the distribution. A normal distribution can be thought of as a …
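As a minimal sketch of the kind of Python example the post works through (the mean, standard deviation, and sample size here are arbitrary choices of mine), you can draw samples from a normal distribution and compare them against the theoretical bell curve:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

mu, sigma = 0.0, 1.0                           # illustrative mean and standard deviation
samples = np.random.normal(mu, sigma, 10_000)  # draw from N(mu, sigma^2)

x = np.linspace(-4, 4, 200)
plt.hist(samples, bins=50, density=True, alpha=0.5, label="samples")
plt.plot(x, norm.pdf(x, mu, sigma), label="theoretical PDF")  # symmetric bell curve around the mean
plt.legend()
plt.show()
```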
Tensor Broadcasting Explained with Examples
In this post, you will learn about the concepts of Tensor Broadcasting with the help of Python Numpy examples. Recall that a tensor is defined as a container of data (primarily numerical) and is the most fundamental data structure used in Keras and TensorFlow. You may want to check out a related article on tensors – Tensor explained with Python Numpy examples. Tensor broadcasting is borrowed from Numpy broadcasting. Broadcasting is a technique used for performing arithmetic operations between Numpy arrays / tensors having different shapes. In this technique, the following is done: As a first step, expand one or both arrays by copying elements appropriately so that after this transformation, the two tensors have the …
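As a minimal NumPy sketch of the idea (not the post's own example; the arrays are illustrative), an array of shape (3,) is broadcast against an array of shape (2, 3) so that element-wise arithmetic is defined:

```python
import numpy as np

a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # shape (2, 3)
b = np.array([10.0, 20.0, 30.0])  # shape (3,)

# b is conceptually stretched along a new axis to shape (2, 3);
# NumPy does this without actually copying the data.
print(a + b)
# [[11. 22. 33.]
#  [14. 25. 36.]]
```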
Regularization in Machine Learning: Concepts & Examples
In machine learning, regularization is a technique used to avoid overfitting. Overfitting occurs when a model learns the training data too well and therefore performs poorly on new data. Regularization helps to reduce overfitting by adding constraints to the model-building process. As data scientists, it is of utmost importance that we learn the regularization concepts thoroughly in order to build better machine learning models. In this blog post, we will discuss the concept of regularization and provide examples of how it can be used in practice. What is regularization and how does it work? Regularization in machine learning represents strategies that are used to reduce the generalization or test error of …
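As a minimal sketch of one common regularization strategy (an L2/ridge penalty via scikit-learn; the synthetic data and alpha value are my own illustrative choices, not the post's), the penalty shrinks coefficients and tends to damp down fitting to noisy or irrelevant features:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=50)   # only the first feature truly matters

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)    # alpha controls the strength of the L2 penalty

# The regularized model typically shrinks the irrelevant coefficients toward zero
print(np.round(plain.coef_, 2))
print(np.round(ridge.coef_, 2))
```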
Measure Code Quality using Cyclomatic Complexity
This article talks about how McCabe’s cyclomatic complexity can be used to measure several different aspects of code quality. The objective of this article is to help developers quickly assess code quality by looking at the code. First, let’s quickly understand what cyclomatic complexity is and how it can be measured. What is Cyclomatic Complexity? Cyclomatic complexity is a measure of code quality that takes into account the number of independent paths through a piece of code. A high cyclomatic complexity indicates that a piece of code is more difficult to understand and maintain, and is, therefore, more …
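As a rough illustration (mine, not the article's), the hypothetical Python function below has four decision points, so its cyclomatic complexity under the common "decision points + 1" rule is 5:

```python
def categorize(order):
    # Decision points: if + elif + for + inner if = 4
    # Cyclomatic complexity = decision points + 1 = 5
    if order.total == 0:
        return "empty"
    elif order.total > 1000:
        label = "large"
    else:
        label = "regular"
    for item in order.items:
        if item.discounted:
            label += "-discounted"
    return label
```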
Frequentist vs Bayesian Probability: Difference, Examples
In this post, you will learn about the difference between Frequentist and Bayesian probability. It is of utmost importance to understand these concepts if you are getting started with data science. What is Frequentist Probability? Probability is used to represent and reason about uncertainty. It was originally developed to analyze the frequency of events. In other words, probability was first developed as frequentist probability. The probability of occurrence of an event, when calculated as a function of the frequency of occurrence of events of that type, is called frequentist probability. Frequentist probability is a way of assigning probabilities to events that takes into account how often those events actually occur. Frequentist …
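As a minimal sketch of the frequentist idea (a simulated die roll of my own choosing, not an example from the post), the probability of an event is estimated as its long-run relative frequency:

```python
import random

random.seed(42)
trials = 100_000
# Frequentist view: P(rolling a six) is the long-run relative frequency of sixes
sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
print(sixes / trials)   # approaches 1/6 ≈ 0.1667 as the number of trials grows
```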
Checklist for Effective Code Review
Are you involved in day-to-day code reviews? Would you like to suggest to your team members a checklist that can be used for code reviews? In this blog post, you will learn about key areas to focus on when doing code reviews. The following is a checklist that one could use while doing code reviews: Functional Suitability: Understand the requirement/use case/user story and ask whether the code you are reviewing meets the requirement or not. This includes reviewing the alternate and exception use case flows. Functional suitability is one aspect of code quality that refers to how well the code meets the needs of the user. In …
SVM Classifier using Sklearn: Code Examples
In this post, you will learn how to train an SVM classifier using the Scikit-Learn (Sklearn) implementation, with the help of code examples/samples. An SVM classifier, or support vector machine classifier, is a type of machine learning algorithm that can be used to analyze and classify data. A support vector machine is a supervised machine learning algorithm that can be used for both classification and regression tasks. The support vector machine classifier works by finding the hyperplane that maximizes the margin between the two classes. The support vector machine algorithm is also known as a max-margin classifier. The support vector machine is a powerful tool for machine learning and has been widely used …
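As a minimal sketch of what training such a classifier can look like with scikit-learn (the dataset, kernel, and C value are illustrative choices of mine, not necessarily the post's), the workflow is: split, scale, fit, predict:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

scaler = StandardScaler().fit(X_train)   # SVMs are sensitive to feature scale
clf = SVC(kernel="linear", C=1.0)        # C trades margin width against misclassification
clf.fit(scaler.transform(X_train), y_train)

y_pred = clf.predict(scaler.transform(X_test))
print(accuracy_score(y_test, y_pred))
```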
NFT Use Cases & Applications Examples
What are NFTs? NFTs (non-fungible tokens) are a relatively new type of crypto token that has a wide range of potential applications. They are different from traditional cryptocurrencies like Bitcoin because each individual NFT is unique and cannot be replaced by another token. This makes them well suited for use in a variety of applications, from digital collectibles to decentralized marketplaces. In this blog post, we will explore some of the most interesting NFT use cases and applications. What are some of the popular use cases for NFTs? The following are some of the most common use cases for NFTs: NFTs can be used to represent ownership of digital assets such as …
Non-fungible tokens (NFTs) & Real-world examples
You may have heard the term “non-fungible token (NFT)”, but what does it mean? Basically, NFTs are a type of crypto token that is unique and not interchangeable. Unlike regular Bitcoin or Ethereum, which can be divided and traded like shares, non-fungible tokens are indivisible and have their own value. This makes them well suited for specific applications like digital art or collectibles. Here we’ll discuss what NFTs are and some real-world examples of where non-fungible tokens are being used today. What are Non-fungible tokens (NFT) and how do they work? Non-fungible tokens are unique digital assets. The word non-fungible means that each token is not interchangeable with …
Stochastic Gradient Descent Python Example
In this post, you will learn the concepts of Stochastic Gradient Descent (SGD) using a Python example. Stochastic gradient descent is an optimization algorithm that is used to optimize the cost function while training machine learning models. Popular algorithms such as (batch) gradient descent take a long time to converge on large datasets. This is where variants of gradient descent, such as stochastic gradient descent, come into the picture. In order to demonstrate stochastic gradient descent concepts, the Perceptron machine learning algorithm is used. Recall that the Perceptron is also called a single-layer neural network. Before getting into details, let’s quickly understand the concepts of the Perceptron and the underlying learning …
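As a minimal sketch of the idea (the toy data and learning rate are my own; the post builds its own, fuller Perceptron example), SGD updates the weights using one randomly chosen sample at a time rather than the whole dataset:

```python
import numpy as np

# Toy linearly separable data with labels +1 / -1
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(10):
    for i in rng.permutation(len(X)):        # visit samples one at a time, in random order
        if y[i] * (X[i] @ w + b) <= 0:       # misclassified: apply a perceptron-style update
            w += lr * y[i] * X[i]
            b += lr * y[i]

print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```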
Dummy Variables in Regression Models: Python, R
In linear regression, dummy variables are used to represent categorical variables in the model. There are a few different ways that dummy variables can be created, and we will explore a few of them in this blog post. We will also look at some examples to help illustrate how dummy variables work, and cover concepts related to the dummy variable trap. By the end of this post, you should have a better understanding of how to use dummy variables in linear regression models. As a data scientist, it is important to understand how to use linear regression and dummy variables. What are dummy variables in …
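As a minimal sketch of one way to create dummy variables (using pandas; the column names and data are purely illustrative), `get_dummies` with `drop_first=True` also sidesteps the dummy variable trap by dropping one baseline category:

```python
import pandas as pd

df = pd.DataFrame({"city": ["Delhi", "Mumbai", "Pune", "Delhi"],
                   "sales": [100, 150, 120, 90]})

# drop_first=True removes one category (the baseline) to avoid
# perfect multicollinearity, i.e. the dummy variable trap
dummies = pd.get_dummies(df, columns=["city"], drop_first=True)
print(dummies)   # one indicator column per remaining category
```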
Book: First principles thinking for building winning products
Can innovation be taught and learned in a methodical manner? Can there be an innovation playbook that lays out a set of well-defined steps to follow when you need to create a thing or product, or solve a complex problem? How has Elon Musk been so successful, time and again, at creating game-changing innovative products that delivered tremendous value to end users and society at large? The answers to these questions can be found in a reasoning technique called first principles thinking. First principles thinking is defined as a method of reasoning or a thought process in which you try to understand the fundamental truth regarding different aspects of the existence of a …
Linear regression hypothesis testing: Concepts, Examples
In relation to machine learning, linear regression is defined as a predictive modeling technique that allows us to build a model which can help predict continuous response variables as a function of a linear combination of explanatory or predictor variables. While training linear regression models, we need to rely on hypothesis testing to determine the relationship between the response and predictor variables. In the case of the linear regression model, two types of hypothesis tests are done: t-tests and F-tests. In other words, there are two types of statistics that are used to assess whether a linear relationship exists between the response and predictor variables. They are …
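As a minimal sketch of where these statistics show up in practice (using statsmodels on synthetic data of my own making), the fitted model's summary reports a t-test per coefficient and an overall F-test:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 2.0 * X[:, 0] + rng.normal(size=100)      # only the first predictor truly matters

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())                         # per-coefficient t-tests plus the overall F-test
print(model.pvalues)                           # p-values for the t-tests
print(model.fvalue, model.f_pvalue)            # F-statistic and its p-value
```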