Feature Selection: A Summary of 11 Feature Selection Strategies
![](https://filescdn.proginn.com/555dbd1638c6e676a1d752f4fd3267c9/36b88e26a885ee5076f71d87cbd50aed.webp)
Source: DeepHub IMBA. About 4,800 words; suggested reading time 10+ minutes.
This article shares a practical guide to the various techniques that can be applied to feature selection.
The 11 strategies covered are:

- Remove unused columns
- Remove columns with missing values
- Uncorrelated features
- Low-variance features
- Multicollinearity
- Feature coefficients
- p-values
- Variance inflation factor (VIF)
- Feature selection based on feature importance
- Automated feature selection with scikit-learn
- Principal component analysis (PCA)

First, load the demonstration data, the automobile dataset from the pycaret repository:
import pandas as pd

data = 'https://raw.githubusercontent.com/pycaret/pycaret/master/datasets/automobile.csv'
df = pd.read_csv(data)
df.sample(5)
![](https://filescdn.proginn.com/ebe3a992247e8177f7f1409648b4a00b/a9a4d77ff6d0d511a51900f15c995918.webp)
df.columns
>> Index(['symboling', 'normalized-losses', 'make', 'fuel-type', 'aspiration', 'num-of-doors', 'body-style', 'drive-wheels', 'engine-location','wheel-base', 'length', 'width', 'height', 'curb-weight', 'engine-type', 'num-of-cylinders', 'engine-size', 'fuel-system', 'bore', 'stroke', 'compression-ratio', 'horsepower', 'peak-rpm', 'city-mpg', 'highway-mpg', 'price'], dtype='object')
Now let's dive into the 11 feature selection strategies.
Remove Unused Columns
The most obvious first step is to drop columns that add nothing to the model, such as record identifiers, free-text notes, or fields unrelated to the prediction task.
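A minimal sketch, assuming hypothetical unused columns record-id and notes (this dataset contains no such columns, so the names are purely illustrative):
# drop columns that carry no predictive signal (hypothetical column names)
# errors='ignore' skips any column that is not present
df = df.drop(columns=['record-id', 'notes'], errors='ignore')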
Remove Columns with Missing Values
Unless missing values can be imputed reliably, columns dominated by them add more noise than signal. Count the nulls per column first:
# total null values per column
df.isnull().sum()
>>
symboling 0
normalized-losses 35
make 0
fuel-type 0
aspiration 0
num-of-doors 2
body-style 0
drive-wheels 0
engine-location 0
wheel-base 0
length 0
width 0
height 0
curb-weight 0
engine-type 0
num-of-cylinders 0
engine-size 0
fuel-system 0
bore 0
stroke 0
compression-ratio 0
horsepower 0
peak-rpm 0
city-mpg 0
highway-mpg 0
price 0
dtype: int64
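Only normalized-losses (35) and num-of-doors (2) have missing values here. One hedged way to act on these counts (the 30% cutoff below is an assumption, not from the original) is to drop any column whose missing fraction exceeds a threshold:
# drop columns where more than 30% of the values are missing (illustrative cutoff)
threshold = 0.3
df = df.loc[:, df.isnull().mean() <= threshold]
With this cutoff nothing is dropped, since normalized-losses is only about 17% missing; rows that still contain nulls are handled later with dropna().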
Uncorrelated Features
Numeric features that barely correlate with the target are unlikely to help the model. Visualize each feature's correlation with price:
# correlation between target and features
(df.corr().loc['price']
.plot(kind='barh', figsize=(4,10)))
![](https://filescdn.proginn.com/d89fac76fb589ce56185a8a4d7f135fc/6f073b3c285e9e858e2c0ef1f7bf2ebf.webp)
# drop uncorrelated numeric features (threshold <0.2)
corr = abs(df.corr().loc['price'])
corr = corr[corr<0.2]
cols_to_drop = corr.index.to_list()
df = df.drop(cols_to_drop, axis=1)
Correlation only applies to numeric features. For a categorical feature such as fuel-type, a boxplot of price per category shows whether the feature separates the target:
import seaborn as sns
sns.boxplot(y = 'price', x = 'fuel-type', data=df)
![](https://filescdn.proginn.com/5c7fb3e3cd67792fdae167d4c45e624a/78d0e55cf9e8b297f7e8a0f4e8b510b8.webp)
Low-Variance Features
A feature that takes nearly the same value everywhere carries almost no information. Inspect the variance of the numeric features:
import numpy as np
# variance of numeric features
(df
.select_dtypes(include=np.number)
.var()
.astype('str'))
![](https://filescdn.proginn.com/157bebaed41d931707c93a3d06a3267b/f1aac874828a7e84db73362f40deb75d.webp)
bore shows a very low variance, but describe() reveals that its values simply sit on a small scale, so raw variance alone is misleading:
df['bore'].describe()
![](https://filescdn.proginn.com/80b42b62816e27d4c5a310ac55a66a04/471eebd229835aae7014a1db813a0748.webp)
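Because raw variances depend on each feature's scale, a fixed cutoff is only meaningful after scaling. A hedged sketch using scikit-learn's VarianceThreshold (the 0.01 cutoff is an assumption, not from the original):
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import MinMaxScaler

# scale numeric features to [0, 1] so their variances are comparable
num = df.select_dtypes(include=np.number).dropna()
scaled = MinMaxScaler().fit_transform(num)

# keep features whose scaled variance exceeds the illustrative cutoff
selector = VarianceThreshold(threshold=0.01)
selector.fit(scaled)
num.columns[selector.get_support()]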
Multicollinearity
When two features are highly correlated with each other, one of them is redundant even if both correlate with the target. A heatmap of the correlation matrix exposes such pairs:
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(16,10)})
sns.heatmap(df.corr(),
annot=True,
linewidths=.5,
center=0,
cbar=False,
cmap="PiYG")
plt.show()
![](https://filescdn.proginn.com/23705111d8def002c2393e7f46abd229/6cc9b342d06143686eda5ab29562a4e1.webp)
# drop correlated features
df = df.drop(['length', 'width', 'curb-weight', 'engine-size', 'city-mpg'], axis=1)
You can also use the variance inflation factor (VIF) to detect multicollinearity and drop features with high VIF values; an example follows later.
For pairs of categorical features, a chi-squared test of independence on their contingency table plays the same role:
df_cat = df[['fuel-type', 'body-style']]
df_cat.sample(5)
![](https://filescdn.proginn.com/f6a7dbcf7ca095927968fc71615e27aa/c195bb9ac624c3684e54f5efdb907eb8.webp)
crosstab = pd.crosstab(df_cat['fuel-type'], df_cat['body-style'])
crosstab
![](https://filescdn.proginn.com/547258afcb2a912b081666559c65eedf/6e772bb068e3ec6437b79b8246876766.webp)
from scipy.stats import chi2_contingency
chi2_contingency(crosstab)
![](https://filescdn.proginn.com/95f7881e0cb61c883229fa30c8340f65/baf4f177fd507d11c79d3f2afdaeeb1f.webp)
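chi2_contingency returns the test statistic, the p-value, the degrees of freedom, and the expected frequencies. A small p-value (conventionally below 0.05) means the two categorical features are not independent, so one of them may be redundant. A minimal sketch of reading the result:
# unpack the test result; a low p-value indicates the two features are associated
stat, p, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {stat:.2f}, p-value = {p:.4f}")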
The remaining strategies are model-based, so prepare the data next: drop the rows that still contain missing values, one-hot encode the categorical features, and build a scaled train/test split.
# drop rows with missing values
df = df.dropna()
from sklearn.model_selection import train_test_split
# get dummies for categorical features
df = pd.get_dummies(df, drop_first=True)
# X features
X = df.drop('price', axis=1)
# y target
y = df['price']
# split data into training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.linear_model import LinearRegression
# scaling
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
# transform only: reuse the scaler fitted on the training set to avoid data leakage
X_test = scaler.transform(X_test)
# convert back to dataframe
X_train = pd.DataFrame(X_train, columns = X.columns.to_list())
X_test = pd.DataFrame(X_test, columns = X.columns.to_list())
# instantiate model
model = LinearRegression()
# fit
model.fit(X_train, y_train)
Feature Coefficients
Because the features were standardized, the magnitude of each regression coefficient reflects that feature's influence; coefficients near zero mark removal candidates:
# feature coefficients
coeffs = model.coef_
# visualizing coefficients
index = X_train.columns.tolist()
(pd.DataFrame(coeffs, index = index, columns = ['coeff']).sort_values(by = 'coeff')
.plot(kind='barh', figsize=(4,10)))
![](https://filescdn.proginn.com/ad83435a1c66fa46cb2a64d14d5558b0/980ff4ba5f8e252a04e65306e28d66c3.webp)
# filter variables near zero coefficient value
temp = pd.DataFrame(coeffs, index = index, columns = ['coeff']).sort_values(by = 'coeff')
temp = temp[(temp['coeff']>1) | (temp['coeff']< -1)]
# drop those features
cols_coeff = temp.index.to_list()
X_train = X_train[cols_coeff]
X_test = X_test[cols_coeff]
p-values
An ordinary least squares fit in statsmodels reports a p-value for every coefficient; features whose p-values exceed the conventional 0.05 level carry no statistically significant information about the target:
import statsmodels.api as sm
ols = sm.OLS(y, X).fit()
print(ols.summary())
![](https://filescdn.proginn.com/ecfe8e20aeabd5bc75ce95dc8f30c958/c0344f3b4322107795d05fce6f7b81d0.webp)
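A hedged way to act on the summary table (the 0.05 cutoff is conventional, not from the original) is to keep only the columns whose coefficients are significant:
# keep features whose OLS p-value is below the conventional 0.05 level
significant = ols.pvalues[ols.pvalues < 0.05].index.to_list()
print(significant)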
Variance Inflation Factor (VIF)
- VIF = 1: no correlation
- VIF between 1 and 5: moderate correlation
- VIF > 5: high correlation
from statsmodels.stats.outliers_influence import variance_inflation_factor
# calculate VIF
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns)
# display VIFs in a table, highest first
vif_df = pd.DataFrame({'vif': vif}).sort_values(by='vif', ascending=False)
# keep only the features with VIF below 10
vif_df[vif_df['vif'] < 10]
![](https://filescdn.proginn.com/5fcf46dd7e8b5021b59cad9f58c5d178/263b15ad9517a5c3afba280dcd73c36c.webp)
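In practice VIF filtering is usually iterative, because removing one collinear feature changes every other feature's VIF. A hedged sketch (the cutoff of 10 matches the filter above; this loop can be slow on wide data):
# iteratively drop the feature with the highest VIF until all fall below 10
X_vif = X.copy()
while True:
    vifs = pd.Series([variance_inflation_factor(X_vif.values, i)
                      for i in range(X_vif.shape[1])], index=X_vif.columns)
    if vifs.max() < 10:
        break
    X_vif = X_vif.drop(columns=[vifs.idxmax()])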
Feature Selection Based on Feature Importance
Tree ensembles such as random forests expose a feature_importances_ attribute; features with negligible importance can be dropped:
from sklearn.ensemble import RandomForestClassifier
# instantiate model
model = RandomForestClassifier(n_estimators=200, random_state=0)
# fit model
model.fit(X,y)
importances = model.feature_importances_
cols = X.columns
(pd.DataFrame(importances, cols, columns = ['importance'])
.sort_values(by='importance', ascending=True)
.plot(kind='barh', figsize=(4,10)))
![](https://filescdn.proginn.com/6ace61e3443085d4d0b8c910ba248732/4991a31840b01b89458eef2bbc7b3bc9.webp)
# calculate standard deviation of feature importances
std = np.std([i.feature_importances_ for i in model.estimators_], axis=0)
# visualization
feat_with_importance = pd.Series(importances, X.columns)
fig, ax = plt.subplots(figsize=(12,5))
feat_with_importance.plot.bar(yerr=std, ax=ax)
ax.set_title("Feature importances")
ax.set_ylabel("Mean decrease in impurity")
![](https://filescdn.proginn.com/1f6069748728b878df85235b16a46f9c/2cb5a6f243f8b2de6c560d4a53159bc2.webp)
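To turn the plot into a selection step, one hedged option (the mean-importance cutoff is an assumption, not from the original) is a simple mask over the columns:
# keep features whose importance exceeds the mean importance (illustrative cutoff)
keep = X.columns[importances > importances.mean()]
X_reduced = X[keep]
X_reduced.shape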
Automated Feature Selection with scikit-learn
# import modules
from sklearn.feature_selection import (SelectKBest, chi2, SelectPercentile, SelectFromModel, SequentialFeatureSelector)
Chi-squared-based Techniques
The chi-squared test scores each feature against the target; note that it requires non-negative feature values:
# select K best features
X_best = SelectKBest(chi2, k=10).fit_transform(X,y)
# number of best features
X_best.shape[1]
>> 10
# keep 75% top features
X_top = SelectPercentile(chi2, percentile = 75).fit_transform(X,y)
# number of best features
X_top.shape[1]
>> 36
# L1 regularization: an L1 penalty drives the coefficients of weak features to exactly zero
from sklearn.svm import LinearSVC

model = LinearSVC(penalty='l1', C=0.002, dual=False)
model.fit(X,y)
# select features using the meta transformer
selector = SelectFromModel(estimator = model, prefit=True)
X_new = selector.transform(X)
X_new.shape[1]
>> 2
# names of selected features
feature_names = np.array(X.columns)
feature_names[selector.get_support()]
>> array(['wheel-base', 'horsepower'], dtype=object)
# sequential backward selection: greedily drop the weakest feature, scored by cross-validation
# instantiate model
model = RandomForestClassifier(n_estimators=100, random_state=0)
# select features
selector = SequentialFeatureSelector(estimator=model, n_features_to_select=10, direction='backward', cv=2)
selector.fit_transform(X,y)
# check names of features selected
feature_names = np.array(X.columns)
feature_names[selector.get_support()]
>> array(['bore', 'make_mitsubishi', 'make_nissan', 'make_saab',
'aspiration_turbo', 'num-of-doors_two', 'body-style_hatchback', 'engine-type_ohc', 'num-of-cylinders_twelve', 'fuel-system_spdi'], dtype=object)
Principal Component Analysis (PCA)
PCA is strictly dimensionality reduction rather than feature selection: it projects the features onto orthogonal components ranked by explained variance, so the result is a smaller set of derived features rather than a subset of the original ones.
# import PCA module
from sklearn.decomposition import PCA
# scaling data
X_scaled = scaler.fit_transform(X)
# fit PCA to data
pca = PCA()
pca.fit(X_scaled)
evr = pca.explained_variance_ratio_
# visualizing the variance explained by each principal components
plt.figure(figsize=(12, 5))
plt.plot(range(1, len(evr) + 1), evr.cumsum(), marker="o", linestyle="--")
plt.xlabel("Number of components")
plt.ylabel("Cumulative explained variance")
![](https://filescdn.proginn.com/292e1c7ab2b66c8643ea2a4c48df2cbe/42088adf5fef0f4003aaef2617e0a433.webp)
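To pick the number of components from the curve, a common hedged rule (the 95% target is illustrative, not from the original) is to keep the smallest number of components whose cumulative explained variance reaches the target:
# smallest number of components that explains at least 95% of the variance
n_components = int(np.argmax(evr.cumsum() >= 0.95)) + 1
X_reduced = PCA(n_components=n_components).fit_transform(X_scaled)
X_reduced.shape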
Summary
We covered 11 strategies: simple filters (unused columns, missing values, low variance, weak correlation), statistical tests (chi-squared, p-values, VIF), model-based methods (coefficients, feature importances, and scikit-learn's automated selectors), and PCA. Which combination to use depends on the data and the model, but trimming redundant and uninformative features almost always yields simpler, faster, and more interpretable models.