A Simple Walkthrough of Decision Tree Algorithms, with a Python Implementation
Decision tree algorithms
Three splitting criteria: 1. information gain (ID3)  2. information gain ratio (C4.5)  3. Gini index (CART)
A decision tree, intuitively: to get a job done, decide which step to take first based on how decisive each step is.
Running example: judging whether a melon is good or bad.
Melon features: color, size, taste
Melon labels: good, bad
1. Information gain
Steps:
From the labels, count the classes: good: 11, bad: 8 (19 melons total)
D = -(11/19)log(11/19) - (8/19)log(8/19)
Compute the information for each feature. Color: green 10 (good: 6, bad: 4), light green 5 (good: 3, bad: 2), yellow 4 (good: 2, bad: 2)
D(color=green) = -(6/10)log(6/10) - (4/10)log(4/10)   (D1)
D(color=light green) = -(3/5)log(3/5) - (2/5)log(2/5)   (D2)
D(color=yellow) = -(2/4)log(2/4) - (2/4)log(2/4)   (D3)
Gain(color) = D - [(10/19)*D1 + (5/19)*D2 + (4/19)*D3]
Repeat the same steps to get the information gain of every feature, then split on the feature with the largest gain.
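The steps above can be sketched in a few lines of Python (standard library only, base-2 logarithms; note that the per-color counts in the example sum to 11 good and 8 bad melons):

```python
from math import log2

def entropy(counts):
    """Shannon entropy of a list of class counts, e.g. [11, 8]."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# whole set: 11 good, 8 bad melons
D = entropy([11, 8])
# color subsets: green (6 good, 4 bad), light green (3, 2), yellow (2, 2)
subsets = [[6, 4], [3, 2], [2, 2]]
total = 19
gain_color = D - sum(sum(s) / total * entropy(s) for s in subsets)
print(f"D = {D:.4f}, Gain(color) = {gain_color:.4f}")
```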
ID3's weakness: add one more feature, melon length, which takes a different value for almost every melon. ID3 will then prefer melon length as the split node, because information gain is biased toward features with many distinct values — an unreasonable choice.
2. Information gain ratio (C4.5's correction for this bias)
V(color) = -(10/19)log(10/19) - (5/19)log(5/19) - (4/19)log(4/19)
GainRatio(color) = Gain(color) / V(color)
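A quick sketch of the gain-ratio step: V is simply the entropy of the subset sizes (10, 5, 4 melons per color), and the gain value is carried over from the information-gain example above:

```python
from math import log2

def entropy(counts):
    """Shannon entropy of a list of counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

gain_color = 0.0049            # information gain of color, from the section above
V_color = entropy([10, 5, 4])  # intrinsic value: entropy of the split sizes
gain_ratio = gain_color / V_color
print(f"V(color) = {V_color:.4f}, gain ratio = {gain_ratio:.4f}")
```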
3. Gini index
G = 1 - (11/19)² - (8/19)²
G(color=green) = 1 - (6/10)² - (4/10)²   (G1)
G(color=light green) = 1 - (3/5)² - (2/5)²   (G2)
G(color=yellow) = 1 - (2/4)² - (2/4)²   (G3)
G(color) = (10/19)*G1 + (5/19)*G2 + (4/19)*G3
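The Gini calculation for the same example, as a sketch; CART splits on the feature with the lowest weighted Gini:

```python
def gini(counts):
    """Gini impurity of a list of class counts."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

G = gini([11, 8])                   # impurity of the whole set
subsets = [[6, 4], [3, 2], [2, 2]]  # green / light green / yellow
G_color = sum(sum(s) / 19 * gini(s) for s in subsets)
print(f"G = {G:.4f}, G(color) = {G_color:.4f}")
```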
import pandas as pd
import numpy as np
import os
os.environ['PATH']+=os.pathsep+r"D:\Software\PYTHON\Graphviz\bin"  # raw string: "\b" would otherwise be a backspace escape
import pydotplus
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier,export_graphviz
from sklearn.metrics import accuracy_score,recall_score,f1_score
df=pd.read_excel(r"d:\hr.xlsx")  # raw string avoids the invalid "\h" escape
label=df["left"]
# feature preprocessing
feature_1=['satisfaction_level', 'last_evaluation', 'number_project','average_monthly_hours', 'time_spend_company', 'Work_accident','promotion_last_5years']
feature_2=['department']
feature_3=['salary']
for col in feature_1:
    df[col]=MinMaxScaler(feature_range=(0,1)).fit_transform(df[col].values.reshape(-1,1))
for col in feature_2:
    df[col]=LabelEncoder().fit_transform(df[col].values)  # LabelEncoder expects a 1-D array, no reshape
    df[col]=MinMaxScaler(feature_range=(0,1)).fit_transform(df[col].values.reshape(-1,1))
d=dict([('low',0),('medium',1),('high',2)])
def map_salary(s):
    return d.get(s,0)
df['salary']=[map_salary(s) for s in df['salary'].values]
for col in feature_3:
    df[col]=MinMaxScaler(feature_range=(0,1)).fit_transform(df[col].values.reshape(-1,1))
# Split the dataset; drop the label column first so "left" does not leak into the features
features=df.drop(columns="left")
x1,x_test,y1,y_test=train_test_split(features,label,test_size=0.2)
x_train,x_valid,y_train,y_valid=train_test_split(x1,y1,test_size=0.25)  # 0.25 of the remaining 80% -> 60/20/20 train/validation/test
d_tree=DecisionTreeClassifier(criterion="entropy").fit(x_train,y_train)
y_test_pre=d_tree.predict(x_test)
#print("d_tree test: accuracy_score",accuracy_score(y_test,y_test_pre))
#print("d_tree test: recall_score",recall_score(y_test,y_test_pre))
#print("d_tree test: f1_score",f1_score(y_test,y_test_pre))
#y_train_pre=d_tree.predict(x_train)
#print("d_tree train: accuracy_score",accuracy_score(y_train,y_train_pre))
#print("d_tree train: recall_score",recall_score(y_train,y_train_pre))
#print("d_tree train: f1_score",f1_score(y_train,y_train_pre))
#y_valid_pre=d_tree.predict(x_valid)
#print("d_tree validation: accuracy_score",accuracy_score(y_valid,y_valid_pre))
#print("d_tree validation: recall_score",recall_score(y_valid,y_valid_pre))
#print("d_tree validation: f1_score",f1_score(y_valid,y_valid_pre))
# Export the fitted tree to a PDF via Graphviz
dot_data=export_graphviz(d_tree,out_file=None,feature_names=x_train.columns.values,class_names=["N","Y"],
                         filled=True,rounded=True,special_characters=True)
graph=pydotplus.graph_from_dot_data(dot_data)
graph.write_pdf("d:/tree1.pdf")