fix show article img

2025-02-22 13:16:41 +08:00
parent 14a299f13f
commit 7cfd111895
54 changed files with 210 additions and 213 deletions


@@ -4,7 +4,8 @@ tags: decisiontree
categories: machinelearning
abbrlink: 95
date: 2025-01-24 12:39:59
-cover: https://th.bing.com/th/id/OIP.XaPUn6eccfS_z_wTLQNFzgHaEK?w=240&h=180&c=7&r=0&o=5&dpr=1.9&pid=1.7
+cover: /img/machinelearning/decision-tree.png
---
### C4.5


@@ -1,4 +0,0 @@
-[ViewState]
-Mode=
-Vid=
-FolderType=Generic


@@ -4,7 +4,7 @@ tags: ensemble-learning
categories: machinelearning
abbrlink: 8816
date: 2025-01-25 15:12:08
-cover: https://th.bing.com/th/id/OIP.SZA5W6cF-tYiiZ08KZ7l7wHaEm?w=250&h=180&c=7&r=0&o=5&dpr=1.3&pid=1.7
+cover: /img/machinelearning/ensemble-learning.png
---
### Bagging


@@ -17,10 +17,10 @@ $$y = w_1x_1 + w_2x_2 + \cdots + w_nx_n$$
### Loss Function
To find the best linear model, we optimize the model parameters by minimizing a loss function. In linear regression, the most commonly used loss function is the **mean squared error (MSE)**:
-$$MSE = \frac{1}{m} \sum_{i=1}^{m} (y_i - \hat{y}_i)^2$$
-- m is the number of samples.
+$$J(\theta) = \frac{1}{2N} \sum_{i=1}^{N} (y_i - f_\theta(x_i))^2$$
+- N is the number of samples.
- $y_i$ is the true value of the i-th sample.
-- $\hat{y}_i$ is the model's prediction for the i-th sample.
+- $f_\theta(x_i)$ is the model's prediction for the i-th sample.
### Linear Regression Optimization
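
The MSE loss in the hunk above can be sketched in a few lines of NumPy; the function name `mse` and the toy inputs in the usage comment are illustrative, not part of the article:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: the average squared residual over m samples,
    matching MSE = (1/m) * sum_i (y_i - yhat_i)^2 from the article."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Example: predictions off by 1 on both samples give an MSE of 1.0.
# mse([0.0, 0.0], [1.0, 1.0]) -> 1.0
```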