<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>Machine Learning Model Hyperparameter Tuning</title>
<meta name="generator" content="LibreOffice 6.3.4.2 (Linux)"/>
<meta name="created" content="2015-06-05T18:17:20"/>
<meta name="changed" content="2020-04-20T20:55:35"/>
<meta name="AppVersion" content="16.0300"/>
<meta name="DocSecurity" content="0"/>
<meta name="HyperlinksChanged" content="false"/>
<meta name="LinksUpToDate" content="false"/>
<meta name="ScaleCrop" content="false"/>
<meta name="ShareDoc" content="false"/>
<style type="text/css">
body,div,table,thead,tbody,tfoot,tr,th,td,p { font-family:"Calibri"; font-size:x-small }
a.comment-indicator:hover + comment { background:#ffd; position:absolute; display:block; border:1px solid black; padding:0.5em; }
a.comment-indicator { background:red; display:inline-block; border:1px solid black; width:0.5em; height:0.5em; }
comment { display:none; }
</style>
</head>
<body>
<table cellspacing="0" border="0">
<colgroup width="190"></colgroup>
<colgroup width="224"></colgroup>
<colgroup width="435"></colgroup>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" colspan=3 height="20" align="center" valign=bottom><b><font color="#000000">Machine Learning Models: Tuned Hyperparameters</font></b></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="20" align="center" valign=middle><b><font color="#000000">Classification Model</font></b></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="center" valign=bottom><b><font color="#000000">Tuned Parameters</font></b></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="center" valign=bottom><b><font color="#000000">Parameter Description</font></b></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="140" align="left" valign=middle><font color="#000000">Logistic Regression </font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">C=1.0<br>dual=False<br>fit_intercept=True<br>penalty='l2'<br>* remaining parameters are left at their defaults.<br></font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">C->Inverse of regularization strength; must be a positive float.<br>dual->False, since the number of samples exceeds the number of features.<br>fit_intercept->True: a constant (intercept) is added to the decision function.<br>penalty->The L2 norm is used in the penalization.<br></font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="120" align="left" valign=middle><font color="#000000">Decision Tree Classifier</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">criterion='gini'<br>min_samples_leaf=1<br>min_samples_split=2<br>splitter='best'<br>* remaining parameters are left at their defaults.</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">criterion->The function to measure the quality of a split.<br>min_samples_leaf->The minimum number of samples required to be at a leaf node.<br>min_samples_split->The minimum number of samples required to split an internal node.<br>splitter->The strategy used to choose the split at each node.</font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="200" align="left" valign=middle><font color="#000000">Random Forest Classifier</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">bootstrap=True<br>criterion='entropy'<br>max_depth=40<br>max_features='sqrt'<br>min_samples_leaf=1<br>min_samples_split=5<br>n_estimators=233</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">bootstrap->bootstrap samples are used when building trees.<br>criterion->The function to measure the quality of a split<br>max_depth->The maximum depth of the tree.<br>max_features->The number of features to consider when looking for the best split<br>min_samples_leaf->The minimum number of samples required to be at a leaf node<br>min_samples_split->The minimum number of samples required to split an internal node<br>n_estimators-> The number of trees in the forest.</font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="220" align="left" valign=middle><font color="#000000">Gradient Boost</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">criterion='friedman_mse'<br>learning_rate=0.1<br>loss='deviance'<br>max_depth=3<br>min_samples_split=5<br>n_estimators=100<br>subsample=0.8</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">criterion->The function to measure the quality of a split.<br>learning_rate->Shrinks the contribution of each tree by learning_rate.<br>loss->Loss function to be optimized.<br>max_depth->The maximum depth of the individual regression estimators.<br>min_samples_split->The minimum number of samples required to split an internal node.<br>n_estimators->The number of boosting stages to perform.<br>subsample->The fraction of samples to be used for fitting the individual base learners.<br></font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="100" align="left" valign=middle><font color="#000000">Adaptive Boost</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">algorithm='SAMME.R'<br>base_estimator=DecisionTreeClassifier<br>n_estimators=200</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">algorithm->The boosting algorithm to use.<br>base_estimator->The base estimator from which the boosted ensemble is built.<br>n_estimators->The maximum number of estimators at which boosting is terminated.</font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="80" align="left" valign=middle><font color="#000000">KNN</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">metric='euclidean'<br>n_neighbors=5</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">metric->The distance metric to use for the tree<br>n_neighbors->Number of neighbors to use by default for kneighbors queries</font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="100" align="left" valign=middle><font color="#000000">SVC</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">C=1.0<br>cache_size=200<br>degree=3<br>kernel='rbf'<br>probability=True</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">C->Regularization parameter.<br>cache_size->Size of the kernel cache (in MB).<br>degree->Degree of the polynomial kernel function.<br>kernel->Specifies the kernel type to be used in the algorithm.<br>probability->Enable probability estimates.</font></td>
</tr>
<tr>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" height="220" align="left" valign=middle><font color="#000000">XGBoost</font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=top><font color="#000000">base_score=0.5<br>booster='gbtree'<br>colsample_bytree=0.8<br>max_depth=5<br>n_estimators=29<br>objective='binary:logistic'<br>subsample=0.8<br></font></td>
<td style="border-top: 1px solid #000000; border-bottom: 1px solid #000000; border-left: 1px solid #000000; border-right: 1px solid #000000" align="left" valign=bottom><font color="#000000">base_score->The initial prediction score of all instances (global bias).<br>booster->Specify which booster to use.<br>colsample_bytree->Subsample ratio of columns when constructing each tree.<br>max_depth->Maximum tree depth for base learners.<br>n_estimators->Number of trees to fit.<br>objective->Specify the learning task and the corresponding learning objective.<br>subsample->Subsample ratio of the training instances.<br></font></td>
</tr>
</table>
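<p>As a minimal sketch, the first three tuned scikit-learn classifiers in the table could be instantiated as follows. The toy dataset from <code>make_classification</code> is an illustration only; the project's actual data is not shown in this document.</p>

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Toy data for illustration only; the project's real dataset is not shown here.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    # Tuned values from the table; unlisted parameters stay at their defaults.
    "logistic_regression": LogisticRegression(C=1.0, dual=False,
                                              fit_intercept=True, penalty="l2"),
    "decision_tree": DecisionTreeClassifier(criterion="gini", min_samples_leaf=1,
                                            min_samples_split=2, splitter="best"),
    "random_forest": RandomForestClassifier(bootstrap=True, criterion="entropy",
                                            max_depth=40, max_features="sqrt",
                                            min_samples_leaf=1, min_samples_split=5,
                                            n_estimators=233, random_state=0),
}

for name, model in models.items():
    # Training accuracy only, as a quick sanity check that the settings are valid.
    score = model.fit(X, y).score(X, y)
    print(f"{name}: {score:.3f}")
```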
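<p>The boosting and kernel models can be sketched the same way, with two scikit-learn version caveats: the loss the table calls <code>'deviance'</code> was renamed <code>'log_loss'</code> in recent releases (so it is left at its default below), and AdaBoost's <code>base_estimator</code> and <code>algorithm</code> keywords have since been renamed or deprecated, so only <code>n_estimators</code> is passed explicitly.</p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy data for illustration only.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# loss is left at its default: called 'deviance' in older scikit-learn,
# 'log_loss' in current releases -- the same loss either way.
gb = GradientBoostingClassifier(criterion="friedman_mse", learning_rate=0.1,
                                max_depth=3, min_samples_split=5,
                                n_estimators=100, subsample=0.8, random_state=0)

# The table's base_estimator=DecisionTreeClassifier and algorithm='SAMME.R'
# match older scikit-learn defaults; both keywords were later renamed/deprecated,
# so only n_estimators is set explicitly here.
ada = AdaBoostClassifier(n_estimators=200, random_state=0)

knn = KNeighborsClassifier(metric="euclidean", n_neighbors=5)
svc = SVC(C=1.0, cache_size=200, degree=3, kernel="rbf", probability=True)

for model in (gb, ada, knn, svc):
    model.fit(X, y)
    print(type(model).__name__, f"{model.score(X, y):.3f}")
```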
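<p>The tuned XGBoost settings are shown below as a plain parameter dict, in the form they would be passed to <code>xgboost.XGBClassifier(**xgb_params)</code>; the <code>xgboost</code> package itself is assumed to be installed in the project and is not imported here.</p>

```python
# Tuned XGBoost settings from the table, as keyword arguments for
# xgboost.XGBClassifier(**xgb_params). Shown as a dict so this sketch
# does not depend on xgboost being installed.
xgb_params = {
    "base_score": 0.5,               # initial prediction score (global bias)
    "booster": "gbtree",             # tree-based booster
    "colsample_bytree": 0.8,         # column subsample ratio per tree
    "max_depth": 5,                  # maximum depth of each tree
    "n_estimators": 29,              # number of boosting rounds / trees
    "objective": "binary:logistic",  # binary classification, logistic loss
    "subsample": 0.8,                # row subsample ratio per boosting round
}
print(xgb_params)
```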
<!-- ************************************************************************** -->
</body>
</html>