Showing 38 changed files with 919 additions and 97 deletions.
@@ -1,19 +1,5 @@
-# [Machine Learning Interview Questions](https://geekcircle.github.io/machine-learning-interview-qa)
+# Python & Machine Learning & Deep Learning Interview Questions
-
-## Preface
-
-A continuously curated and updated collection of machine-learning interview and written-test questions, drawn mainly from [Udacity](http://cn.udacity.com/), [JulyEdu](https://www.julyedu.com/), and other sources.
-
-## Links
-- GitHub: [https://github.com/geekcircle/machine-learning-interview-qa](https://geekcircle.github.io/machine-learning-interview-qa)
-- Gitbook: [https://geekcircle.github.io/machine-learning-interview-qa](https://geekcircle.github.io/machine-learning-interview-qa)
-
-## Friendly links
-
-- [Geektutu's blog](https://geektutu.com/series)
-- [The 100 highest-voted Python questions on StackOverflow](https://geekcircle.github.io/stackoverflow-python-top-qa/)
-
-## Contributors
-- [Dai Jie](https://github.com/gzdaijie)
-- [Xu Ri](https://github.com/xurisun)
-- [Tijing Wang](https://github.com/vitow)
+
+A continuously curated and updated collection of interview and written-test questions on Python, Machine Learning, and Deep Learning.
@@ -0,0 +1,35 @@
# 10. What does "support vector" mean in a Support Vector Machine (SVM)?
## Question

We train a linear SVM on the binary-labeled dataset below:

```python
+: (−1,1), (1,−1), (−1,−1)
−: (1,1), (2,0), (2,1)
```
Which points are the support vectors of this model?

A. (−1,1), (1,1), (2,1)
B. (−1,1), (−1,−1), (2,1)
C. (−1,1), (1,−1), (1,1), (2,0)
## Explanation

![linear SVM]

When you draw the line separating the red points from the green points, ask yourself: does every point play a decisive role in where that line goes?

It does not. In regions far from the boundary you could add 10,000 extra samples without moving the line at all, because its position is determined by a few key points (three in the figure). Those key points "support" the separating hyperplane, and that is why they are called support vectors.
## References

- [What does "support vector" mean in an SVM? - SofaSofa](http://sofasofa.io/forum_main_post.php?postid=1000255)
- [What does support vector machine (SVM) mean? - Zhihu](https://www.zhihu.com/question/21094489)

## Answer

Sketch the points in a coordinate system and you can see that C is the correct answer: the four points in C lie on the two margin boundaries (x + y = 0 and x + y = 2) of the maximum-margin separator x + y = 1, while the remaining two points lie strictly outside the margin.
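As a quick sanity check, here is a minimal scikit-learn sketch (assuming `scikit-learn` is installed; a very large `C` approximates the hard-margin SVM) that recovers the four support vectors of option C:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[-1, 1], [1, -1], [-1, -1],   # positive class
              [1, 1], [2, 0], [2, 1]])      # negative class
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)  # large C ~ hard margin
clf.fit(X, y)
print(clf.support_vectors_)  # the four points of option C
```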
@@ -0,0 +1,32 @@
# 11. Precision and Recall
The confusion matrix:

- True Positive (TP): a positive sample predicted as positive.
- True Negative (TN): a negative sample predicted as negative.
- False Positive (FP): a negative sample predicted as positive → false alarm (Type I error).
- False Negative (FN): a positive sample predicted as negative → miss (Type II error).

![TP-FN-FP-TN]
Precision is defined as:

![precision]

Note that precision and accuracy are not the same thing:

![accuracy]

With imbalanced classes, accuracy is a deeply flawed metric. In online advertising, for example, clicks are rare (typically only a few per thousand impressions), so a model that predicts every impression as negative (no click) still reaches over 99% accuracy, which tells you nothing.
Recall (also called sensitivity or the true positive rate) is defined as:

![recall]

There is also the `F1` score, the harmonic mean of precision and recall:

![F1]

`F1` is high only when both precision and recall are high.
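As a reference implementation of these definitions, here is a minimal sketch (the counts `tp=90, fp=10, fn=30` are hypothetical, chosen only for illustration):

```python
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)   # TP / (TP + FP)
    recall = tp / (tp + fn)      # TP / (TP + FN), a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.3f}")
# precision=0.90 recall=0.75 f1=0.818
```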
@@ -0,0 +1,52 @@
# 12. How to judge whether an association rule is effective in data mining
## The three measures of an association rule

### 1. Support

> Support(X→Y) = P(X,Y) / P(I) = P(X∪Y) / P(I) = num(X∪Y) / num(I)

The support is the probability that the itemset {X, Y} appears in the full transaction set.

Here I denotes the full transaction set, and num() counts the transactions that contain a given itemset. For example, num(I) is the total number of transactions, and num(X∪Y) is the number of transactions that contain both X and Y.
### 2. Confidence

> Confidence(X→Y) = P(Y|X) = P(X,Y) / P(X) = P(X∪Y) / P(X)

The confidence is the probability that Y occurs given that the precondition X of the rule "X→Y" has occurred, i.e. among transactions that contain X, the probability of also containing Y.
### 3. Lift

> Lift(X→Y) = P(Y|X) / P(Y)

The lift is the ratio between the probability of Y given X and the overall probability of Y.

A rule that meets both the minimum support and the minimum confidence is called a "strong association rule".

- Lift(X→Y) > 1: "X→Y" is an effective strong association rule.
- Lift(X→Y) ≤ 1: "X→Y" is an ineffective strong association rule.
- In particular, Lift(X→Y) = 1 means that X and Y are independent.
## Judging the effectiveness of a rule

### Question

1000 customers shop for New Year goods, split into two groups, A and B, of 500 people each. In group A, all 500 bought tea and 450 of them also bought coffee; in group B, 450 bought coffee and nobody bought tea, as shown in the table. **Is "tea → coffee" an effective association rule?**

Group|Bought tea|Bought coffee
---|---|---
Group A (500 people)|500|450
Group B (500 people)|0|450
### Answer

- Support of "tea→coffee": Support(X→Y) = num(X∪Y) / num(I) = 450 / 1000 = 45%
- Confidence of "tea→coffee": Confidence(X→Y) = 450 / 500 = 90%
- Lift of "tea→coffee": Lift(X→Y) = Confidence(X→Y) / P(Y) = 90% / ((450+450)/1000) = 1

Since Lift(X→Y) = 1, X and Y are independent: whether a customer buys coffee has no connection to whether they buy tea. In other words, the rule "tea→coffee" does not hold (the association is practically nil). Even though its support (45%) and confidence (90%) are both high, it is not an effective association rule.
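A minimal sketch that reproduces these numbers from raw transaction counts (the function and parameter names are mine, for illustration only):

```python
def rule_metrics(n_total, n_x, n_xy, n_y):
    """Support, confidence, and lift of the rule X -> Y."""
    support = n_xy / n_total             # P(X, Y)
    confidence = n_xy / n_x              # P(Y | X)
    lift = confidence / (n_y / n_total)  # P(Y | X) / P(Y)
    return support, confidence, lift

# Tea -> coffee: 1000 customers, 500 bought tea, 450 bought both,
# 900 bought coffee overall.
s, c, l = rule_metrics(n_total=1000, n_x=500, n_xy=450, n_y=900)
print(f"support={s:.2f} confidence={c:.2f} lift={l:.2f}")
# support=0.45 confidence=0.90 lift=1.00
```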
@@ -0,0 +1,55 @@
# 13. What to do when a probability is 0 in Naive Bayes classification
## Question

A1, A2, and A3 are three features and Y is the class label. A1, A2, A3, and Y each take only the values 0 and 1.

|A1|A2|A3|Y|
|:---:|:---:|:---:|:---:|
|1|1|0|1|
|0|1|1|1|
|1|0|1|0|
|0|1|0|0|
|0|0|1|0|
### 1. Why is Naive Bayes "naive"?

The "naive" in Naive Bayes refers to the simplicity of the algorithm.

That simplicity rests on a very naive assumption: all features are mutually independent (given the class). With Bayes' theorem the model can then be written as

![Naive Bayes formula]

In practice, however, having every pair of features independent is almost impossible.

> For example:
> Y = whether this person is a weightlifter.
> X1 = gender; X2 = whether this person can lift a 100 kg box.
> X1 and X2 are clearly not independent.

In other words, the independence assumption of Naive Bayes is naive to the point of being unrealistic, which is why its prediction accuracy is often not very high.
### 2. What class is predicted for `1,0,0`?

![posterior computation]

> The denominator is the same for both classes, so it suffices to compare the numerators.

```python
from fractions import Fraction as F

p_y0, p_y1 = F(3, 5), F(2, 5)  # class priors P(Y=0) and P(Y=1)
# Posterior numerators; the shared denominator cancels.  The factor 1/4 is
# the Laplace-smoothed P(A2=0 | Y=1), since A2=0 never occurs with Y=1.
score_y0 = p_y0 * F(1, 3) * F(2, 3) * F(1, 3)  # = 2/45
score_y1 = p_y1 * F(1, 2) * F(1, 4) * F(1, 2)  # = 1/40
print(score_y0 > score_y1)  # True -> predict Y=0
```

> Answer: **the predicted class is 0**, since 2/45 ≈ 0.044 > 1/40 = 0.025.
## Conclusion

As the question shows, when some feature value never co-occurs with a class and the estimated probability becomes 0, Bayesian estimation fixes the problem. Given a reasonably large training set, replacing 0 with a probability close to 0 (and, in general, p with a probability close to p) leaves the prior distribution essentially unchanged.

In the Bayesian-estimation formula, λ is usually set to 1, in which case it is called Laplace smoothing.

The example above applied Bayesian estimation only to the feature whose conditional probability was 0; in general it is applied to every feature used in training.
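A minimal sketch of the smoothed estimate (this is the standard Bayesian-estimation formula with smoothing parameter λ; `n_values` is the number of distinct values the feature can take):

```python
from fractions import Fraction as F

def smoothed_prob(count_xy, count_y, n_values, lam=1):
    """Laplace-smoothed P(feature = x | Y = y):
    (count(x, y) + lam) / (count(y) + lam * n_values)."""
    return F(count_xy + lam, count_y + lam * n_values)

# P(A2=0 | Y=1): A2=0 occurs 0 times in the 2 samples with Y=1,
# and A2 takes 2 distinct values -> (0 + 1) / (2 + 2) = 1/4 instead of 0.
print(smoothed_prob(0, 2, 2))  # 1/4
```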
@@ -0,0 +1,72 @@
# 14. Decision Trees
## What is a decision tree

A decision tree is a basic method for both classification and regression. It is a tree structure whose internal nodes are sample attributes and whose branches are the values those attributes take.

The root node is the most informative attribute over all the samples; each internal node is the most informative attribute within the sample subset covered by the subtree rooted at that node; the leaf nodes are class labels. A decision tree is a form of knowledge representation: a compact summary of all the sample data. It can classify every training sample correctly and can also classify new samples effectively.

![decision-tree]
## Feature selection

|ID|Age|Has job|Owns house|Credit rating|Class (loan granted?)|
|:---:|:---:|:---:|:---:|:---:|:---:|
|1|young|no|no|fair|no|
|2|young|no|no|good|no|
|3|young|yes|no|good|yes|
|4|young|yes|yes|fair|yes|
|5|young|no|no|fair|no|
|6|middle-aged|no|no|fair|no|
|7|middle-aged|no|no|good|no|
|8|middle-aged|yes|yes|good|yes|
|9|middle-aged|no|yes|excellent|yes|
|10|middle-aged|no|yes|excellent|yes|
|11|senior|no|yes|excellent|yes|
|12|senior|no|yes|good|yes|
|13|senior|yes|no|good|yes|
|14|senior|yes|no|excellent|yes|
|15|senior|no|no|fair|no|
Entropy measures the uncertainty of a random variable: the greater the uncertainty, the larger the entropy, and the more distinct outcomes the variable can take.

![entropy]
Information gain is the change in entropy from before a split to after it: the expected reduction in entropy caused by partitioning the examples on an attribute. That is, it equals the original entropy minus the post-split entropy (taking the expectation of the subset entropies over the partition). It is computed as follows:

![information gain]
## The ID3 algorithm

The basic idea of the ID3 decision tree algorithm:

First find the most discriminative attribute and use it to split the examples into several subsets; within each subset, again select the most discriminative attribute and split, continuing until every subset contains samples of a single class. The result is a decision tree. (A minimal sketch of this recursion follows the steps below.)

J.R. Quinlan's main contribution was to introduce information gain from information theory as the measure of an attribute's discriminative power, and to design a recursive tree-construction algorithm around it.

The ID3 algorithm:

1. For the current set of examples, compute the information gain of each attribute.
2. Select the attribute Ak with the largest information gain.
3. Group examples with the same value of Ak into the same subset; Ak yields as many subsets as it has distinct values.
4. For each subset that contains both positive and negative examples, recursively invoke the tree-building algorithm.
5. If a subset contains only positive or only negative examples, label the corresponding branch P or N and return to the caller.
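A minimal recursive sketch of these five steps, reusing `entropy`, `info_gain`, and `Counter` from the snippet above (the tree is returned as nested dicts keyed by attribute index and attribute value):

```python
def id3(rows, labels, attrs):
    """rows: tuples of attribute values; attrs: usable column indices."""
    if len(set(labels)) == 1:                # step 5: pure subset -> leaf
        return labels[0]
    if not attrs:                            # no attribute left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    # steps 1-2: pick the attribute with the largest information gain
    best = max(attrs, key=lambda a: info_gain([r[a] for r in rows], labels))
    tree = {}
    # steps 3-4: one branch per value of `best`, recursing into each subset
    for value in set(r[best] for r in rows):
        keep = [i for i, r in enumerate(rows) if r[best] == value]
        tree[value] = id3([rows[i] for i in keep], [labels[i] for i in keep],
                          [a for a in attrs if a != best])
    return {best: tree}
```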
## The C4.5 algorithm

C4.5 is an improved version of ID3.

Improvements:

- Uses the gain ratio instead of information gain to select attributes, removing information gain's bias toward attributes with many values
- Prunes the tree during construction
- Discretizes continuous attributes
- Handles incomplete (missing) data

Gain ratio:

Suppose the sample set S is partitioned into c subsets by a discrete attribute F with c distinct values. The entropy of these c subsets (the split information) is:

![split information]

The gain ratio is the ratio of the information gain to this entropy:

![gain ratio]
## CART