kNNs and Decision Trees

kNNs (k-Nearest Neighbors)

Let's start with a classification example.

kNN Classify-Dot Product

The answer is D:
First, for two vectors A and B, the dot product is defined as:

$$A \cdot B = |A| \, |B| \cos(\theta)$$

Then we compute the dot product of the Red triangle with each candidate point (see the calculation in the figure).

So the maximum dot product is 56, for Green, which means the Red triangle and the Green diamond point in the most similar directions.
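As a quick check, here is a minimal NumPy sketch; the coordinates are illustrative assumptions chosen so that Green's dot product is 56, since the actual points live in the figure:

```python
import numpy as np

# Hypothetical coordinates standing in for the figure's points.
red_triangle = np.array([4, 6])
candidates = {
    "Green": np.array([5, 6]),   # 4*5 + 6*6 = 56, the maximum
    "Blue": np.array([2, 3]),
    "Orange": np.array([6, 1]),
}

# For fixed magnitudes, a larger dot product means a smaller angle theta,
# i.e. the two vectors point in more similar directions.
for name, point in candidates.items():
    print(name, int(red_triangle @ point))
```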

What is the value of k in kNN?

As k increases, more neighbors get a vote in the prediction.

  • more votes = more stability
  • more votes = more dissimilar (farther-away) neighbors = less accuracy

How to choose k?

  • Cross-validation

10-fold Cross Validation to pick k

kNN Cross Validation

The idea is:

  1. Split the dataset into 10 folds.
  2. For each candidate k, train 10 times, each time using 9 folds for training and 1 fold for validation (e.g., round 1 uses D1 as the validation set and D2-D10 for training; round 2 uses D2 as the validation set and D1 plus D3-D10 for training; and so on).
  3. Average the accuracy over the 10 rounds.
  4. Pick the k with the highest average accuracy, as in the sketch below.
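A minimal scikit-learn sketch of this procedure; the dataset and the range of candidate k values are placeholders, not from the notes:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)           # any labeled dataset works here
kf = KFold(n_splits=10, shuffle=True, random_state=0)

best_k, best_acc = None, -np.inf
for k in range(1, 21):                       # candidate values of k
    # 10 fits per k: each fold serves once as the validation set D_i,
    # and we average the 10 validation accuracies.
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=kf).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best k = {best_k}, mean accuracy = {best_acc:.3f}")
```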

Regression vs. Classification

Using Regression for Classification

Sometimes we do use a regression model for a classification task:

  1. First predict a numeric value.
  2. Bin the value into a class according to thresholds.

threshold

As shown, the big blue point's prediction falls between 2.5 and 3.5, so we bin it into class 3.

Tip: Naturally, we can also use a regression model to predict the probability of a class, as with the sigmoid/logistic function:

$$p(\text{class}=1 \mid x) = \frac{1}{1 + e^{-x}}$$
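A small sketch of both ideas; the class centers and the test inputs are illustrative assumptions matching the figure, not values from the notes:

```python
import numpy as np

def bin_prediction(y_pred, centers=(1, 2, 3, 4)):
    # Assign the numeric prediction to the nearest class center, so any
    # value between 2.5 and 3.5 lands in class 3, as in the figure.
    centers = np.asarray(centers)
    return int(centers[np.abs(centers - y_pred).argmin()])

def sigmoid(x):
    # p(class=1 | x) = 1 / (1 + e^(-x)): squashes a regression score
    # into a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(bin_prediction(2.8))  # -> 3
print(sigmoid(0.0))         # -> 0.5
```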

Using Classification for Regression

kNN can also do regression:

kNN for Regression

As shown, x is the feature and y is the response.
When k=2, the two nearest neighbors are the two points that share x=5, so the predicted y for the Red triangle is (6+7)/2 = 6.5.
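A minimal sketch of kNN regression matching this example; the extra training points are made up for illustration:

```python
import numpy as np

def knn_regress(X_train, y_train, x_new, k=2):
    # Average the responses of the k nearest training points.
    dists = np.abs(X_train - x_new)       # 1-D feature: absolute distance
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()

# Two training points share x = 5 with y = 6 and y = 7, as in the figure.
X_train = np.array([1.0, 3.0, 5.0, 5.0, 8.0])
y_train = np.array([2.0, 4.0, 6.0, 7.0, 9.0])
print(knn_regress(X_train, y_train, x_new=5.0))  # (6 + 7) / 2 = 6.5
```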

Decision Trees

Build a Decision Tree

  1. Pick an attribute as the split condition and split the data into two parts.
  2. Recursively split each subset until:
      1. the data is pure enough, or
      2. there are no features left to split on.
  3. For each leaf node (see the sketch after this list):
      1. classification tree: predict by majority vote
      2. regression tree: predict the mean value
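A minimal scikit-learn sketch of this procedure; with default settings the tree grows until leaves are pure or cannot be split further (steps 1-2), and each leaf predicts its majority class (step 3). The dataset is a placeholder:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
# Repeatedly pick the best split until leaves are pure or no useful
# features remain; each leaf then predicts its majority class.
tree = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
```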

Optimization of Classification Tree

Goal: increase the purity at every level.
Gini Impurity:

$$Gini = 1 - \sum_{i=1}^{k} p_i^2$$

The smaller the value, the purer the node.
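A one-function sketch of this formula:

```python
import numpy as np

def gini(labels):
    # Gini = 1 - sum_i p_i^2, where p_i is the fraction of class i
    # in the node; 0 means the node is perfectly pure.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini(["A", "A", "A", "B"]))  # 1 - (0.75^2 + 0.25^2) = 0.375
print(gini(["B", "B"]))            # pure node -> 0.0
```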

Optimization of Regression Tree

Goal: reduce the prediction error. Error (MSE / variance):

$$MSE = \frac{1}{N} \sum_{i=1}^{N} (y_{obs} - y_{pred})^2$$

$$Variance = \frac{1}{N} \sum_{i=1}^{N} (y_{obs} - \bar{y})^2$$
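Since a regression leaf predicts the mean of its responses, the leaf's MSE equals the variance of those responses. A quick sketch:

```python
import numpy as np

def leaf_error(y_obs):
    # The leaf predicts y_pred = mean(y_obs), so MSE and variance coincide.
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = y_obs.mean()
    return np.mean((y_obs - y_pred) ** 2)

print(leaf_error([6, 7]))     # 0.25
print(leaf_error([5, 5, 5]))  # perfectly predicted leaf -> 0.0
```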

Evaluation

$$Weighted\ Impurity = \sum_{l \in leaves} Score(l) \cdot Weight(l)$$

where Score(l) is the leaf's Gini impurity (classification) or error (regression), and Weight(l) is the leaf's share of the samples.

Worked example:

$$Gini_{Left} = 1 - \left[ \left( \frac{3}{4} \right)^2 + \left( \frac{1}{4} \right)^2 \right] = \frac{3}{8}$$

$$Gini_{Right} = 1 - \left[ \left( \frac{0}{2} \right)^2 + \left( \frac{2}{2} \right)^2 \right] = 0$$

$$TWGI\ (Total\ Weighted\ Gini\ Impurity) = \frac{4}{6} \cdot \frac{3}{8} + \frac{2}{6} \cdot 0 = \frac{1}{4}$$
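The same arithmetic as a quick check in code:

```python
# Left leaf: 3 of one class, 1 of the other; right leaf: pure.
gini_left = 1 - ((3 / 4) ** 2 + (1 / 4) ** 2)   # 3/8 = 0.375
gini_right = 1 - ((0 / 2) ** 2 + (2 / 2) ** 2)  # 0.0

# Weight each leaf by its share of the 6 samples.
twgi = (4 / 6) * gini_left + (2 / 6) * gini_right
print(twgi)  # 0.25 = 1/4
```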

Tree Pruning (probably safe to skip)

$$C_{\alpha}(T) = Error(T) + \alpha \cdot |T|$$

in which,

  • $|T|$ is the number of leaves in the tree.
  • $\alpha$ is a hyperparameter.

Pruning
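scikit-learn exposes this cost-complexity idea through the `ccp_alpha` parameter of its tree estimators; a minimal sketch, with an arbitrary dataset and alpha values as placeholders:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Larger alpha penalizes |T| (the number of leaves) more heavily,
# trading training fit for a smaller, simpler tree.
for alpha in (0.0, 0.01, 0.05):
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_tr, y_tr)
    print(alpha, tree.get_n_leaves(), round(tree.score(X_te, y_te), 3))
```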

Comparison

  1. Can predict the response / label for new data?
  • kNN: Y
  • Decision Tree: Y
  2. Easy to interpret? (Using intuitive logic)
  • kNN: N. kNN can only say "find the neighbors"; there is no intuitive way to explain its internal mechanism.
  • Decision Tree: Y
  3. Defines real relationships b/w features and response?
  • kNN: N. It does not explicitly model relationships among the features; it only judges similarity by distance.
  • Decision Tree: Y. The tree's split conditions reveal the relationships between features and response.
  4. Model is fast / easy to store / share / optimize / apply?
  • kNN: N. It must store every data point, and prediction requires computing the distance to all of them.
  • Decision Tree: Y. Only the tree's structure (paths) and parameters need to be stored.
  5. Few hyperparameters?
  • kNN: Y. Just one, k, which counts as very few.
  • Decision Tree: Y. It can have none at all; of course, pruning and the like can add some.
  6. Use data with minimal preprocessing?
  • kNN: N. The data must be standardized so that no single feature dominates the distance computation.
  • Decision Tree: Y.
  7. Potentially useful?
  • kNN: Y/N? Accuracy is decent, but the old problem remains: hard to interpret.
  • Decision Tree: Y. The paths are laid out in the tree, so it is easy to make "if ... then ..." recommendations.
  8. Used to validate a hypothesis?
  9. Find novel interpretations of data?
  • kNN: N. No structure, no explicit model.
  • Decision Tree: Y.