{"id":2655559,"date":"2023-05-16T10:00:27","date_gmt":"2023-05-16T14:00:27","guid":{"rendered":"https:\/\/wordpress-1016567-4521551.cloudwaysapps.com\/plato-data\/principal-component-analysis-pca-with-scikit-learn-kdnuggets\/"},"modified":"2023-05-16T10:00:27","modified_gmt":"2023-05-16T14:00:27","slug":"principal-component-analysis-pca-with-scikit-learn-kdnuggets","status":"publish","type":"station","link":"https:\/\/platodata.io\/plato-data\/principal-component-analysis-pca-with-scikit-learn-kdnuggets\/","title":{"rendered":"Principal Component Analysis (PCA) with Scikit-Learn – KDnuggets"},"content":{"rendered":"

\"Principal
Image by Author
<\/span>
  <\/p>\n

If you're familiar with the unsupervised learning paradigm, you've likely come across dimensionality reduction and the algorithms used for it, such as **principal component analysis** (PCA). Datasets for machine learning typically contain a large number of features, but such high-dimensional feature spaces are not always helpful.

In general, not all features are equally important; certain features account for a large percentage of the variance in the dataset. Dimensionality reduction algorithms aim to reduce the dimension of the feature space to a fraction of the original number of dimensions. In doing so, the high-variance structure is still retained, but in the transformed feature space. Principal component analysis (PCA) is one of the most popular dimensionality reduction algorithms.

In this tutorial, we'll learn how principal component analysis (PCA) works and how to implement it using the scikit-learn library.

Before we go ahead and implement principal component analysis (PCA) in scikit-learn, it's helpful to understand how PCA works.

As mentioned, principal component analysis is a dimensionality reduction algorithm, meaning it reduces the dimensionality of the feature space. But how does it achieve this reduction?

The motivation behind the algorithm is that certain directions in the feature space capture a large percentage of the variance in the original dataset, so it's important to find these **directions of maximum variance**. These directions are called **principal components**, and PCA is essentially a projection of the dataset onto the principal components.

So how do we find the principal components?

Suppose the data matrix X has dimensions **num_observations x num_features**. To find the principal components, we perform eigenvalue decomposition on the covariance matrix of X.

If the features all have zero mean, the covariance matrix is given by X.T X (up to a constant scaling factor of 1 / (num_observations - 1)), where X.T is the transpose of the matrix X. If the features are not zero mean to begin with, we can subtract the mean of column i from each entry in that column and then compute the covariance matrix. It's easy to see that the covariance matrix is a square matrix of order **num_features**.
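For illustration, here's a quick NumPy sketch (using a small random matrix standing in for the data) showing that centering the columns and computing X.T X, scaled by 1 / (num_observations - 1), reproduces the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))  # 100 observations, 5 features

# Center each column so every feature has zero mean
X_centered = X - X.mean(axis=0)

# Covariance matrix: X.T X scaled by 1 / (num_observations - 1)
cov = X_centered.T @ X_centered / (X.shape[0] - 1)

# Matches NumPy's built-in covariance (rowvar=False treats columns as features)
print(np.allclose(cov, np.cov(X, rowvar=False)))  # True
print(cov.shape)  # (5, 5): a square matrix of order num_features
```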


The first k principal components are the *eigenvectors* corresponding to the k *largest eigenvalues*.

So the steps in PCA can be summarized as follows:

1. Center the data by subtracting the mean of each feature (column).
2. Compute the covariance matrix of the centered data.
3. Perform eigendecomposition of the covariance matrix.
4. Keep the eigenvectors corresponding to the k largest eigenvalues; these are the first k principal components.
5. Project the data onto the principal components to obtain the reduced feature space.

Because the covariance matrix is symmetric and positive semi-definite, its eigendecomposition takes the following form:

X.T X = D Λ D.T

where D is the matrix of eigenvectors and Λ is a diagonal matrix of eigenvalues.
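Putting these pieces together, here's a minimal from-scratch sketch of the steps above using NumPy's `eigh`; the helper name `pca_eig` and the random example data are just for illustration, and the scikit-learn implementation covered below is what you'd use in practice:

```python
import numpy as np

def pca_eig(X, k):
    """Project X onto its first k principal components via eigendecomposition."""
    # Step 1: center the data
    X_centered = X - X.mean(axis=0)

    # Step 2: covariance matrix of shape (num_features, num_features)
    cov = X_centered.T @ X_centered / (X.shape[0] - 1)

    # Step 3: eigendecomposition; eigh suits symmetric matrices and returns eigenvalues in ascending order
    eigenvalues, eigenvectors = np.linalg.eigh(cov)

    # Step 4: keep the eigenvectors of the k largest eigenvalues (the first k principal components)
    order = np.argsort(eigenvalues)[::-1][:k]
    components = eigenvectors[:, order]  # shape: (num_features, k)

    # Step 5: project the centered data onto the principal components
    return X_centered @ components

X = np.random.default_rng(0).normal(size=(178, 13))
print(pca_eig(X, 3).shape)  # (178, 3)
```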

Another matrix factorization technique that can be used to compute the principal components is singular value decomposition, or SVD.

Singular value decomposition (SVD) is defined for all matrices. Given a matrix X, the SVD of X gives X = U Σ V.T. Here, U, Σ, and V are the matrices of left singular vectors, singular values, and right singular vectors, respectively, and V.T is the transpose of V.
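As a quick illustration with a small random matrix, NumPy's `np.linalg.svd` returns exactly these three factors, and they reconstruct the original matrix:

```python
import numpy as np

X = np.random.default_rng(1).normal(size=(6, 4))

# full_matrices=False gives the "economy" SVD: U is 6x4, S holds the 4 singular values, Vt is V.T
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# X = U Σ V.T
print(np.allclose(X, U @ np.diag(S) @ Vt))  # True
```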

So, substituting the SVD of X, the covariance matrix of X can be written as:

X.T X = (U Σ V.T).T (U Σ V.T) = V Σ.T U.T U Σ V.T = V (Σ.T Σ) V.T

since U.T U is the identity matrix (the columns of U are orthonormal).

Comparing this with the eigendecomposition of the covariance matrix:

X.T X = D Λ D.T

We have the following:

D = V and Λ = Σ.T Σ

That is, the eigenvectors of the covariance matrix are the right singular vectors of X, and its eigenvalues are the squares of the singular values of X.

There are computationally efficient algorithms for calculating the SVD of a matrix. The scikit-learn implementation of PCA also uses SVD under the hood to compute the principal components.
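For example, scikit-learn's PCA exposes this choice through the `svd_solver` argument; the default 'auto' picks a solver based on the shape of the data, and 'full', 'arpack', or 'randomized' can be requested explicitly, as in this sketch:

```python
from sklearn.decomposition import PCA

# Randomized SVD is often faster on large, high-dimensional datasets
pca = PCA(n_components=3, svd_solver='randomized', random_state=42)
```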

Now that we've learned the basics of principal component analysis, let's proceed with its implementation in scikit-learn.

## Step 1 – Load the Dataset

To understand how to implement principal component analysis, let's use a simple dataset. In this tutorial, we'll use the wine dataset available as part of scikit-learn's **datasets** module.

Let's start by loading the dataset:

```python
from sklearn import datasets

wine_data = datasets.load_wine(as_frame=True)
df = wine_data.data
```

It has 13 features and 178 records in all.

```python
print(df.shape)
```

```
Output >> (178, 13)
```

```python
print(df.info())
```

```
Output >>
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 178 entries, 0 to 177
Data columns (total 13 columns):
 #   Column                        Non-Null Count  Dtype
---  ------                        --------------  -----
 0   alcohol                       178 non-null    float64
 1   malic_acid                    178 non-null    float64
 2   ash                           178 non-null    float64
 3   alcalinity_of_ash             178 non-null    float64
 4   magnesium                     178 non-null    float64
 5   total_phenols                 178 non-null    float64
 6   flavanoids                    178 non-null    float64
 7   nonflavanoid_phenols          178 non-null    float64
 8   proanthocyanins               178 non-null    float64
 9   color_intensity               178 non-null    float64
 10  hue                           178 non-null    float64
 11  od280/od315_of_diluted_wines  178 non-null    float64
 12  proline                       178 non-null    float64
dtypes: float64(13)
memory usage: 18.2 KB
None
```

## Step 2 – Preprocess the Dataset

As a next step, let's preprocess the dataset. The features are all on different scales. To bring them to a common scale, we'll use `StandardScaler`, which transforms the features to have zero mean and unit variance:

```python
from sklearn.preprocessing import StandardScaler

std_scaler = StandardScaler()
scaled_df = std_scaler.fit_transform(df)
```

## Step 3 – Perform PCA on the Preprocessed Dataset

To find the principal components, we can use the PCA class from scikit-learn's **decomposition** module.

Let's instantiate a PCA object by passing the number of principal components `n_components` to the constructor.

The number of principal components is the number of dimensions that you'd like to reduce the feature space to. Here, we set the number of components to 3.

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=3)
pca.fit_transform(scaled_df)
```

Instead of calling the `fit_transform()` method, you can also call `fit()` followed by the `transform()` method.
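For example, the following sketch is equivalent to the `fit_transform()` call above:

```python
pca = PCA(n_components=3)
pca.fit(scaled_df)                       # learn the principal components
transformed = pca.transform(scaled_df)   # project the data onto them
```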

Notice how the steps in principal component analysis, such as computing the covariance matrix and performing eigendecomposition or singular value decomposition on it to obtain the principal components, are all abstracted away when we use scikit-learn's implementation of PCA.

## Step 4 – Examine Some Useful Attributes of the PCA Object

The PCA instance `pca` that we created has several useful attributes that help us understand what is going on under the hood.

The attribute `components_` stores the directions of maximum variance (the principal components).

```python
print(pca.components_)
```

```
Output >>
[[ 0.1443294  -0.24518758 -0.00205106 -0.23932041  0.14199204  0.39466085
   0.4229343  -0.2985331   0.31342949 -0.0886167   0.29671456  0.37616741
   0.28675223]
 [-0.48365155 -0.22493093 -0.31606881  0.0105905  -0.299634   -0.06503951
   0.00335981 -0.02877949 -0.03930172 -0.52999567  0.27923515  0.16449619
  -0.36490283]
 [-0.20738262  0.08901289  0.6262239   0.61208035  0.13075693  0.14617896
   0.1506819   0.17036816  0.14945431 -0.13730621  0.08522192  0.16600459
  -0.12674592]]
```
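Each row of `components_` is one principal component and each column corresponds to an input feature, so it can help to view the array with the feature names attached. A small sketch using the `df` loaded earlier (the PC labels are just for readability):

```python
import pandas as pd

# Rows: principal components; columns: original features
components_df = pd.DataFrame(
    pca.components_,
    columns=df.columns,
    index=['PC1', 'PC2', 'PC3'],
)
print(components_df.round(3))
```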


We mentioned that the principal components are the directions of maximum variance in the dataset. But how do we measure how much of the *total variance* is captured by the principal components we just chose?

The `explained_variance_ratio_` attribute captures the ratio of the total variance each principal component accounts for, so we can sum up the ratios to get the total variance explained by the chosen number of components.

```python
print(sum(pca.explained_variance_ratio_))
```

```
Output >> 0.6652996889318527
```

Here, we see that three principal components capture over 66.5% of the total variance in the dataset.

## Step 5 – Analyze the Change in Explained Variance Ratio

We can try running principal component analysis while varying the number of components `n_components`.

```python
import numpy as np

nums = np.arange(14)
```

```python
var_ratio = []
for num in nums:
    pca = PCA(n_components=num)
    pca.fit(scaled_df)
    var_ratio.append(np.sum(pca.explained_variance_ratio_))
```

To visualize how `explained_variance_ratio_` varies with the number of components, let's plot the two quantities as shown:

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(4, 2), dpi=150)
plt.grid()
plt.plot(nums, var_ratio, marker='o')
plt.xlabel('n_components')
plt.ylabel('Explained variance ratio')
plt.title('n_components vs. Explained Variance Ratio')
plt.show()
```

When we use all 13 components, the `explained_variance_ratio_` is 1.0, indicating that we've captured 100% of the variance in the dataset.

In this example, we see that with six principal components, we can capture more than 80% of the variance in the input dataset.

\"Principal <\/p>\n

I hope you've learned how to perform principal component analysis using built-in functionality in the scikit-learn library. Next, you can try to implement PCA on a dataset of your choice. If you're looking for good datasets to work with, check out this list of websites to find datasets for your data science projects.

 
 
**Bala Priya C** is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more.