fix(training): patch lightgbm sklearn compatibility
README.md
@@ -1,253 +1,279 @@
-# Employee Absence Analysis and Prediction System Based on Multi-Dimensional Feature Mining
+# Absence Analysis and Prediction System for Chinese Enterprise Employees
 
 ## Project Overview
 
-This system is built on the UCI Absenteeism dataset. It applies machine-learning algorithms to employee attendance data, mines the multi-dimensional features that influence absence, and builds an absence prediction model to give enterprise HR management objective, data-driven decision support.
+This project targets enterprise HR management and operations analytics. Centered on employee absence events, it is a graduation-design system that combines data analysis, risk prediction, group profiling, and visualization. It supports absence trend analysis, influencing-factor mining, per-event absence duration prediction, multi-model comparison, and employee cluster views.
 
-## Features
+The backend uses `Flask + scikit-learn + PyTorch`; the frontend uses `Vue 3 + Element Plus + ECharts`. The current version supports both traditional machine-learning models and an `LSTM+MLP` deep-learning model.
 
-### F01 Data Overview and Global Statistics
-- Basic statistics dashboard (total samples, total employees, total absence hours, etc.)
+## Feature Modules
 
+### 1. Data Overview
 
+- Basic statistics dashboard
 - Monthly absence trend analysis
 - Weekday distribution analysis
-- Absence reason distribution analysis
+- Leave type and reason distribution analysis
 - Seasonal distribution analysis
-### F02 Multi-Dimensional Feature Mining and Factor Analysis
-- Feature importance ranking (random-forest based)
-- Correlation heatmap analysis
-- Group comparison analysis (drinking / smoking / education / children, etc.)
+### 2. Influencing-Factor Analysis
 
-### F03 Employee Absence Risk Prediction
-- Single-event absence prediction
-- Risk level rating (low / medium / high)
-- Model performance display (R², MSE, RMSE, MAE)
+- Feature importance ranking
+- Correlation heatmap
+- Multi-dimensional group comparison
 
-### F04 Employee Profiling and Clustering
-- K-Means clustering results
-- Employee group radar chart
-- Cluster scatter-plot visualization
+### 3. Absence Prediction
 
+- Per-event absence duration prediction
+- Risk level rating
+- Multi-model result comparison
+- Switching between traditional and deep-learning models
 
+### 4. Employee Profiling
 
+- Clustering results
+- Group profile analysis
+- Group scatter-plot visualization
 ## Tech Stack
 
 ### Backend
 
 - Python 3.11
 - Flask 2.3.3
-- scikit-learn 1.3.0
-- XGBoost 1.7.6
-- LightGBM 4.1.0
 - Flask-CORS 4.0.0
+- pandas 2.0.3
+- numpy 1.24.3
+- scikit-learn 1.3.0
+- xgboost 1.7.6
+- lightgbm 4.1.0
+- PyTorch 2.6.0
 
 ### Frontend
 
-- Vue 3.4
-- Element Plus 2.4
-- ECharts 5.4
-- Axios 1.6
-- Vue Router 4.2
-- Vite 5.0
+- Vue 3
+- Vite
+- Element Plus
+- ECharts
+- Axios
+- Vue Router
+## Project Structure
 
+```text
+forsetsystem/
+├── backend/
+│   ├── api/                 # API layer
+│   ├── core/                # data generation, feature engineering, training, clustering, deep learning
+│   ├── services/            # business service layer
+│   ├── data/
+│   │   └── raw/
+│   │       └── china_enterprise_absence_events.csv
+│   ├── models/              # model files and training artifacts
+│   ├── app.py               # backend entry point
+│   ├── config.py            # project configuration
+│   └── requirements.txt
+├── frontend/
+│   ├── src/
+│   │   ├── api/
+│   │   ├── router/
+│   │   ├── styles/
+│   │   ├── views/
+│   │   ├── App.vue
+│   │   └── main.js
+│   ├── package.json
+│   └── vite.config.js
+├── docs/                    # system docs, thesis docs, installation guide
+└── README.md
+```
 ## Environment Requirements
 
 | Item | Requirement |
 |------|------|
-| OS | Windows 10/11, Linux, macOS |
+| OS | Windows 10 / Windows 11 |
 | Python | 3.11 |
-| Node.js | 16.0+ |
-| pnpm | 8.0+ |
+| Conda | Anaconda or Miniconda |
+| Node.js | 16+ |
+| pnpm | 8+ |
+| CUDA | should match the PyTorch `cu124` wheel |
 ## Installation and Deployment
 
-### 1. Clone the Project
+It is recommended to use a `conda` virtual environment and to install the official GPU build of `PyTorch` first.
 
-```bash
-git clone <repository-url>
-cd forsetsystem
-```
 
-### 2. Backend Environment Setup
 
-#### Create the Conda Environment
+### 1. Create and Activate the conda Environment
 
 ```powershell
 conda create -n forsetenv python=3.11 -y
 conda activate forsetenv
 ```
-#### Install the Machine-Learning Libraries (via conda-forge)
+### 2. Install the GPU Build of PyTorch
 
 ```powershell
-conda install -c conda-forge pandas=2.0.3 numpy=1.24.3 scikit-learn=1.3.0 xgboost=1.7.6 lightgbm=4.1.0 joblib=1.3.1 -y
+pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
 ```
 
-#### Install the Web Framework
+### 3. Install the Remaining Backend Dependencies
 
 ```powershell
 pip install Flask==2.3.3 Flask-CORS==4.0.0 python-dotenv==1.0.0
+pip install pandas==2.0.3 numpy==1.24.3 scikit-learn==1.3.0 joblib==1.3.1
+pip install xgboost==1.7.6 lightgbm==4.1.0
 ```
-#### Verify the Installation
+To use the dependency file directly instead, run this after installing the GPU build of `PyTorch`:
 
 ```powershell
-python -c "import pandas,numpy,sklearn,xgboost,lightgbm,flask;print('All libraries installed successfully')"
+pip install -r backend/requirements.txt
 ```
 
-#### Train the Models
+### 4. Install the Frontend Dependencies
 
-```powershell
-cd backend
-python core/train_model.py
-```
-
-### 3. Frontend Environment Setup
-
 ```bash
 cd frontend
 pnpm install
 ```
-## Running the System
+## Startup
 
-### Start the Backend Service
+### 1. Generate the Dataset
 
 ```powershell
 conda activate forsetenv
 cd backend
+python core/generate_dataset.py
+```
+
+### 2. Train the Models
+
+```powershell
+python core/train_model.py
+```
+
+### 3. Start the Backend
+
+```powershell
 python app.py
 ```
 
-The backend service runs at http://localhost:5000
+Default backend address:
+
+```text
+http://127.0.0.1:5000
+```
 
-### Start the Frontend Service
+### 4. Start the Frontend
 
-```bash
-cd frontend
-pnpm dev
-```
+```powershell
+cd ..\frontend
+pnpm dev
+```
-The frontend service runs at http://localhost:5173
+Default frontend address:
+
+```text
+http://127.0.0.1:5173
+```
 
-### Access the System
-
-Open a browser and visit http://localhost:5173
 
-## Project Structure
-
-```
-forsetsystem/
-├── backend/                      # backend project
-│   ├── api/                      # API layer
-│   │   ├── overview_routes.py    # data-overview endpoints
-│   │   ├── analysis_routes.py    # factor-analysis endpoints
-│   │   ├── predict_routes.py     # prediction endpoints
-│   │   └── cluster_routes.py     # clustering endpoints
-│   ├── services/                 # business-logic layer
-│   ├── core/                     # core algorithm layer
-│   │   ├── preprocessing.py      # data preprocessing
-│   │   ├── feature_mining.py     # feature mining
-│   │   ├── train_model.py        # model training
-│   │   └── clustering.py         # clustering analysis
-│   ├── data/                     # data storage
-│   ├── models/                   # model storage
-│   ├── utils/                    # utility functions
-│   ├── app.py                    # application entry point
-│   ├── config.py                 # configuration
-│   └── requirements.txt          # dependency list
-│
-├── frontend/                     # frontend project
-│   ├── src/
-│   │   ├── api/                  # API calls
-│   │   ├── views/                # page components
-│   │   ├── router/               # routing
-│   │   ├── App.vue               # root component
-│   │   └── main.js               # entry file
-│   ├── index.html
-│   ├── package.json
-│   └── vite.config.js
-│
-├── data/                         # raw data
-│   └── Absenteeism_at_work.csv
-│
-├── docs/                         # project documentation
-│   ├── 00_需求规格说明书.md
-│   ├── 01_系统架构设计.md
-│   ├── 02_接口设计文档.md
-│   ├── 03_数据设计文档.md
-│   └── 04_UI原型设计.md
-│
-└── README.md
-```
-## API Endpoints
-
-### Data Overview Module
-
-| Endpoint | Method | Description |
-|------|------|------|
-| /api/overview/stats | GET | Basic statistics |
-| /api/overview/trend | GET | Monthly absence trend |
-| /api/overview/weekday | GET | Weekday distribution |
-| /api/overview/reasons | GET | Absence reason distribution |
-| /api/overview/seasons | GET | Seasonal distribution |
-
-### Factor Analysis Module
-
-| Endpoint | Method | Description |
-|------|------|------|
-| /api/analysis/importance | GET | Feature importance |
-| /api/analysis/correlation | GET | Correlation matrix |
-| /api/analysis/compare | GET | Group comparison |
-
-### Prediction Module
-
-| Endpoint | Method | Description |
-|------|------|------|
-| /api/predict/single | POST | Single prediction |
-| /api/predict/model-info | GET | Model info |
-
-### Clustering Module
-
-| Endpoint | Method | Description |
-|------|------|------|
-| /api/cluster/result | GET | Clustering result |
-| /api/cluster/profile | GET | Group profiles |
-| /api/cluster/scatter | GET | Scatter data |
+## Model Notes
+
+The system currently supports the following model types:
+
+- `random_forest`
+- `gradient_boosting`
+- `extra_trees`
+- `xgboost`
+- `lightgbm`
+- `lstm_mlp`
+
+Of these:
+
+- the traditional models suit structured-feature explanation and feature-importance analysis
+- `LSTM+MLP` suits predictions that combine event sequences with static features
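A quick way to see these model types with their metrics at runtime is the models endpoint listed in the API overview below — a minimal sketch using only the Python standard library, assuming the backend is already running on its default address:

```python
import json
import urllib.request

# Lists the trained models (including lstm_mlp when PyTorch was available
# at training time) together with their evaluation metrics.
with urllib.request.urlopen('http://127.0.0.1:5000/api/predict/models') as resp:
    print(json.dumps(json.loads(resp.read().decode('utf-8')), ensure_ascii=False, indent=2))
```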
-## Author
-
-- **Author**: 张硕
-- **School**: School of Software, Henan Agricultural University
-- **Project type**: undergraduate graduation design
-- **Completed**: March 2026
-
-## Roadmap
-
-### Model Optimization
-
-- [ ] Introduce deep-learning models (such as LSTM) for temporal features
-- [ ] Add model-explainability analysis (SHAP value visualization)
-- [ ] Add automatic hyperparameter tuning (Optuna/Hyperopt)
-- [ ] Support multi-model ensemble prediction
-
-### Feature Extensions
-
-- [ ] Add user authentication and permission management
-- [ ] Support uploading and analyzing custom datasets
-- [ ] Add data export (Excel/PDF reports)
-- [ ] Support batch export of prediction results
-- [ ] Add a large-screen data-visualization view
-
-### Technical Improvements
-
-- [ ] Migrate the backend to FastAPI for better performance
-- [ ] Cache frequent query results with Redis
-- [ ] Containerize deployment with Docker
-- [ ] Add CI/CD automated testing and deployment
-- [ ] Move frontend state management to Pinia
-
-### Data Layer
-
-- [ ] Support database storage (MySQL/PostgreSQL)
-- [ ] Implement incremental data updates
-- [ ] Add data-quality checking and cleaning
-
-## References
-
-- [UCI Machine Learning Repository - Absenteeism at work Data Set](https://archive.ics.uci.edu/ml/datasets/Absenteeism+at+work)
-- [Flask documentation](https://flask.palletsprojects.com/)
-- [Vue 3 documentation](https://vuejs.org/)
-- [Element Plus component library](https://element-plus.org/)
-- [ECharts charting library](https://echarts.apache.org/)
+## Data and Training Files
+
+Commonly used paths:
+
+- Dataset file: [china_enterprise_absence_events.csv](D:/VScodeProject/forsetsystem/backend/data/raw/china_enterprise_absence_events.csv)
+- Configuration: [config.py](D:/VScodeProject/forsetsystem/backend/config.py)
+- Data generation script: [generate_dataset.py](D:/VScodeProject/forsetsystem/backend/core/generate_dataset.py)
+- Model training script: [train_model.py](D:/VScodeProject/forsetsystem/backend/core/train_model.py)
+- Deep-learning script: [deep_learning_model.py](D:/VScodeProject/forsetsystem/backend/core/deep_learning_model.py)
+
+## API Overview
+
+### Data Overview
+
+- `GET /api/overview/stats`
+- `GET /api/overview/trend`
+- `GET /api/overview/weekday`
+- `GET /api/overview/reasons`
+- `GET /api/overview/seasons`
+
+### Influencing-Factor Analysis
+
+- `GET /api/analysis/importance`
+- `GET /api/analysis/correlation`
+- `GET /api/analysis/compare`
+
+### Absence Prediction
+
+- `GET /api/predict/models`
+- `GET /api/predict/model-info`
+- `POST /api/predict/single`
+- `POST /api/predict/compare`
+
+### Employee Profiling
+
+- `GET /api/cluster/result`
+- `GET /api/cluster/profile`
+- `GET /api/cluster/scatter`
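A single prediction can be requested as below. This is a hedged sketch only: the `model_type` key and the feature field shown are illustrative assumptions — the authoritative request schema is defined by the backend's predict routes:

```python
import json
import urllib.request

payload = {
    'model_type': 'lstm_mlp',  # or 'random_forest', 'lightgbm', ... (hypothetical field name)
    '年龄': 32,                # feature fields follow the dataset's column names (assumption)
}
req = urllib.request.Request(
    'http://127.0.0.1:5000/api/predict/single',
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read().decode('utf-8')))
```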
+## Documentation
+
+Detailed design documents:
+
+- [docs/README.md](D:/VScodeProject/forsetsystem/docs/README.md)
+- [09_环境配置与安装说明.md](D:/VScodeProject/forsetsystem/docs/09_环境配置与安装说明.md)
+
+## FAQ
+
+### 1. `flask_cors` is missing
+
+Run:
+
+```powershell
+pip install Flask-CORS
+```
+
+### 2. `xgboost` or `lightgbm` is missing
+
+Run:
+
+```powershell
+pip install xgboost==1.7.6 lightgbm==4.1.0
+```
+
+### 3. PyTorch was installed as the CPU build
+
+Re-run the official GPU install command:
+
+```powershell
+pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
+```
+
+### 4. How to confirm the conda environment is active
+
+```powershell
+conda info --envs
+where python
+```
+
+## Project Info
+
+- Author: 张硕
+- School: School of Software, Henan Agricultural University
+- Project type: undergraduate graduation design
+- Completed: March 2026
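backend/config.py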
@@ -25,6 +25,8 @@ TEST_SIZE = 0.2
 TARGET_COLUMN = '缺勤时长(小时)'
 EMPLOYEE_ID_COLUMN = '员工编号'
 COMPANY_ID_COLUMN = '企业编号'
+EVENT_SEQUENCE_COLUMN = '事件序号'
+EVENT_DATE_INDEX_COLUMN = '事件日期索引'
 
 WEEKDAY_NAMES = {
     1: '周一',
 
@@ -127,6 +129,10 @@ FEATURE_NAME_CN = {
     '是否临时请假': '临时请假',
     '是否连续缺勤': '连续缺勤',
     '前一工作日是否加班': '前一工作日加班',
+    '事件日期': '事件日期',
+    '事件日期索引': '事件日期索引',
+    '事件序号': '事件序号',
+    '员工历史事件数': '员工历史事件数',
     '缺勤时长(小时)': '缺勤时长',
     '加班通勤压力指数': '加班通勤压力指数',
     '家庭负担指数': '家庭负担指数',
backend/core/deep_learning_model.py (new file)
@@ -0,0 +1,299 @@
import os
from typing import Dict, List, Optional, Tuple

import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

import config
from core.model_features import engineer_features

# torch is optional: without it the module still imports, and the trainer
# simply skips the deep-learning model (see is_available below).
try:
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
except ImportError:
    torch = None
    nn = None
    DataLoader = None
    TensorDataset = None


WINDOW_SIZE = 5
SEQUENCE_FEATURES = [
    '缺勤月份',
    '星期几',
    '是否节假日前后',
    '请假类型',
    '请假原因大类',
    '是否提供医院证明',
    '是否临时请假',
    '是否连续缺勤',
    '前一工作日是否加班',
    '月均加班时长',
    '通勤时长分钟',
    '是否夜班岗位',
    '是否慢性病史',
    '加班通勤压力指数',
    '缺勤历史强度',
]
STATIC_FEATURES = [
    '所属行业',
    '婚姻状态',
    '岗位序列',
    '岗位级别',
    '年龄',
    '司龄年数',
    '子女数量',
    '班次类型',
    '绩效等级',
    'BMI',
    '健康风险指数',
    '家庭负担指数',
    '岗位稳定性指数',
]
class LSTMMLPRegressor(nn.Module if nn is not None else object):
    # Subclassing `object` when torch is absent keeps this module importable;
    # the class is only instantiated when torch is actually available.

    def __init__(self, seq_input_dim: int, static_input_dim: int):
        super().__init__()
        # LSTM branch: encodes the window of recent absence events.
        self.lstm = nn.LSTM(
            input_size=seq_input_dim,
            hidden_size=48,
            num_layers=1,
            batch_first=True,
            dropout=0.0,
        )
        # MLP branch: encodes the employee's static attributes.
        self.static_net = nn.Sequential(
            nn.Linear(static_input_dim, 32),
            nn.ReLU(),
            nn.Dropout(0.1),
        )
        # Fusion head: regresses absence duration from both representations.
        self.fusion = nn.Sequential(
            nn.Linear(48 + 32, 48),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(48, 1),
        )

    def forward(self, sequence_x, static_x):
        lstm_output, _ = self.lstm(sequence_x)
        sequence_repr = lstm_output[:, -1, :]  # last time step = current event
        static_repr = self.static_net(static_x)
        fused = torch.cat([sequence_repr, static_repr], dim=1)
        return self.fusion(fused).squeeze(1)


def is_available() -> bool:
    return torch is not None


def _fit_category_maps(df: pd.DataFrame, features: List[str]) -> Dict[str, Dict[str, int]]:
    # Build a stable string-to-integer encoding for every non-numeric feature.
    category_maps = {}
    for feature in features:
        if feature not in df.columns:
            continue
        if pd.api.types.is_numeric_dtype(df[feature]):
            continue
        values = sorted(df[feature].astype(str).unique().tolist())
        category_maps[feature] = {value: idx for idx, value in enumerate(values)}
    return category_maps


def _apply_category_maps(df: pd.DataFrame, features: List[str], category_maps: Dict[str, Dict[str, int]]) -> pd.DataFrame:
    encoded = df.copy()
    for feature in features:
        if feature not in encoded.columns:
            # Missing columns default to 0 so prediction inputs stay aligned.
            encoded[feature] = 0
            continue
        if feature in category_maps:
            mapper = category_maps[feature]
            encoded[feature] = encoded[feature].astype(str).map(lambda value: mapper.get(value, 0))
    return encoded
def _safe_standardize(values: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    # Guard near-constant columns against division by zero.
    mean = values.mean(axis=0)
    std = values.std(axis=0)
    std = np.where(std < 1e-6, 1.0, std)
    return mean.astype(np.float32), std.astype(np.float32)


def _build_sequence_arrays(
    df: pd.DataFrame,
    category_maps: Dict[str, Dict[str, int]],
    target_transform: str,
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
    df = engineer_features(df.copy())
    features = sorted(set(SEQUENCE_FEATURES + STATIC_FEATURES))
    df = _apply_category_maps(df, features, category_maps)
    df = df.sort_values(
        [config.EMPLOYEE_ID_COLUMN, config.EVENT_DATE_INDEX_COLUMN, config.EVENT_SEQUENCE_COLUMN]
    ).reset_index(drop=True)

    sequence_samples = []
    static_samples = []
    targets = []

    for _, group in df.groupby(config.EMPLOYEE_ID_COLUMN, sort=False):
        seq_values = group[SEQUENCE_FEATURES].astype(float).values
        static_values = group[STATIC_FEATURES].astype(float).values
        target_values = group[config.TARGET_COLUMN].astype(float).values

        for index in range(len(group)):
            # Slice up to WINDOW_SIZE trailing events ending at the current
            # one, then front-pad with zeros so the current event always
            # occupies the last time step.
            window_slice = seq_values[max(0, index - WINDOW_SIZE + 1): index + 1]
            sequence_window = np.zeros((WINDOW_SIZE, len(SEQUENCE_FEATURES)), dtype=np.float32)
            sequence_window[-len(window_slice):] = window_slice
            sequence_samples.append(sequence_window)
            static_samples.append(static_values[index].astype(np.float32))
            targets.append(float(target_values[index]))

    targets = np.array(targets, dtype=np.float32)
    if target_transform == 'log1p':
        targets = np.log1p(np.clip(targets, a_min=0, a_max=None)).astype(np.float32)

    return (
        np.array(sequence_samples, dtype=np.float32),
        np.array(static_samples, dtype=np.float32),
        targets,
    )
def train_lstm_mlp(
    train_df: pd.DataFrame,
    test_df: pd.DataFrame,
    model_path: str,
    target_transform: str = 'log1p',
    epochs: int = 24,
    batch_size: int = 128,
) -> Optional[Dict]:
    if torch is None:
        return None

    used_features = sorted(set(SEQUENCE_FEATURES + STATIC_FEATURES))
    category_maps = _fit_category_maps(train_df, used_features)
    train_seq, train_static, y_train = _build_sequence_arrays(train_df, category_maps, target_transform)
    test_seq, test_static, y_test_transformed = _build_sequence_arrays(test_df, category_maps, target_transform)

    # Standardize with statistics fitted on the training split only.
    seq_mean, seq_std = _safe_standardize(train_seq.reshape(-1, train_seq.shape[-1]))
    static_mean, static_std = _safe_standardize(train_static)

    train_seq = ((train_seq - seq_mean) / seq_std).astype(np.float32)
    test_seq = ((test_seq - seq_mean) / seq_std).astype(np.float32)
    train_static = ((train_static - static_mean) / static_std).astype(np.float32)
    test_static = ((test_static - static_mean) / static_std).astype(np.float32)

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    if device.type == 'cuda':
        device_name = torch.cuda.get_device_name(device)
        print(f'[lstm_mlp] Training device: CUDA ({device_name})')
    else:
        print('[lstm_mlp] Training device: CPU')
    model = LSTMMLPRegressor(seq_input_dim=train_seq.shape[-1], static_input_dim=train_static.shape[-1]).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    criterion = nn.MSELoss()

    train_dataset = TensorDataset(
        torch.tensor(train_seq),
        torch.tensor(train_static),
        torch.tensor(y_train),
    )
    train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

    model.train()
    for _ in range(epochs):
        for batch_seq, batch_static, batch_target in train_loader:
            batch_seq = batch_seq.to(device)
            batch_static = batch_static.to(device)
            batch_target = batch_target.to(device)

            optimizer.zero_grad()
            predictions = model(batch_seq, batch_static)
            loss = criterion(predictions, batch_target)
            loss.backward()
            optimizer.step()

    model.eval()
    with torch.no_grad():
        predictions = model(
            torch.tensor(test_seq).to(device),
            torch.tensor(test_static).to(device),
        ).cpu().numpy()

    # Recover raw-scale targets in the same (per-employee sorted) order as the
    # predictions; indexing test_df directly would misalign rows after the
    # sort inside _build_sequence_arrays.
    if target_transform == 'log1p':
        y_pred = np.expm1(predictions)
        y_true = np.expm1(y_test_transformed)
    else:
        y_pred = predictions
        y_true = y_test_transformed
    y_pred = np.clip(y_pred, a_min=0, a_max=None)
    mse = mean_squared_error(y_true, y_pred)

    # Average standardized window prefix, used to pad single-event predictions.
    default_prefix = train_seq[:, :-1, :].mean(axis=0).astype(np.float32)
    bundle = {
        'state_dict': model.state_dict(),
        'sequence_features': SEQUENCE_FEATURES,
        'static_features': STATIC_FEATURES,
        'category_maps': category_maps,
        'seq_mean': seq_mean,
        'seq_std': seq_std,
        'static_mean': static_mean,
        'static_std': static_std,
        'default_sequence_prefix': default_prefix,
        'window_size': WINDOW_SIZE,
        'target_transform': target_transform,
        'sequence_input_dim': train_seq.shape[-1],
        'static_input_dim': train_static.shape[-1],
    }
    torch.save(bundle, model_path)

    return {
        'metrics': {
            'r2': round(r2_score(y_true, y_pred), 4),
            'mse': round(mse, 4),
            'rmse': round(float(np.sqrt(mse)), 4),
            'mae': round(mean_absolute_error(y_true, y_pred), 4),
        },
        'metadata': {
            'sequence_window_size': WINDOW_SIZE,
            'sequence_feature_names': SEQUENCE_FEATURES,
            'static_feature_names': STATIC_FEATURES,
        },
    }
def load_lstm_mlp_bundle(model_path: str) -> Optional[Dict]:
    if torch is None or not os.path.exists(model_path):
        return None
    # torch 2.6 changed torch.load to weights_only=True by default, which
    # rejects the numpy arrays and dicts in this bundle; opt out explicitly.
    bundle = torch.load(model_path, map_location='cpu', weights_only=False)
    model = LSTMMLPRegressor(
        seq_input_dim=bundle['sequence_input_dim'],
        static_input_dim=bundle['static_input_dim'],
    )
    model.load_state_dict(bundle['state_dict'])
    model.eval()
    bundle['model'] = model
    return bundle


def predict_lstm_mlp(bundle: Dict, current_df: pd.DataFrame) -> float:
    df = engineer_features(current_df.copy())
    used_features = sorted(set(bundle['sequence_features'] + bundle['static_features']))
    df = _apply_category_maps(df, used_features, bundle['category_maps'])

    sequence_row = df[bundle['sequence_features']].astype(float).values[0].astype(np.float32)
    static_row = df[bundle['static_features']].astype(float).values[0].astype(np.float32)

    # The stored prefix was averaged from already-standardized training
    # windows, so only the current event row is standardized here before
    # stacking; standardizing the stacked window again would scale the
    # prefix twice.
    prefix = bundle['default_sequence_prefix']
    current_step = ((sequence_row - bundle['seq_mean']) / bundle['seq_std']).reshape(1, -1)
    sequence_window = np.vstack([prefix, current_step]).astype(np.float32)
    static_row = ((static_row - bundle['static_mean']) / bundle['static_std']).astype(np.float32)

    with torch.no_grad():
        prediction = bundle['model'](
            torch.tensor(sequence_window).unsqueeze(0),
            torch.tensor(static_row).unsqueeze(0),
        ).cpu().numpy()[0]

    if bundle.get('target_transform') == 'log1p':
        prediction = np.expm1(prediction)
    # Floor at half an hour, matching the service-side default.
    return float(max(0.5, prediction))
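backend/core/generate_dataset.py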
@@ -264,6 +264,28 @@ def sample_event(rng, employee):
     return event
 
 
+def attach_event_timeline(df):
+    # Synthesize a per-employee event timeline: random but sorted day offsets
+    # within 2025, plus sequence bookkeeping columns for the LSTM windows.
+    df = df.copy()
+    rng = np.random.default_rng(config.RANDOM_STATE)
+    base_date = np.datetime64('2025-01-01')
+    timelines = []
+
+    for employee_id, group in df.groupby('员工编号', sort=False):
+        group = group.copy().reset_index(drop=True)
+        event_count = len(group)
+        offsets = np.sort(rng.integers(0, 365, size=event_count))
+        group['事件日期'] = [
+            str(pd.Timestamp(base_date + np.timedelta64(int(offset), 'D')).date())
+            for offset in offsets
+        ]
+        group['事件日期索引'] = offsets.astype(int)
+        group['事件序号'] = np.arange(1, event_count + 1)
+        group['员工历史事件数'] = event_count
+        timelines.append(group)
+
+    return pd.concat(timelines, ignore_index=True)
+
+
 def validate_dataset(df):
     required_columns = [
         '员工编号',
 
@@ -273,6 +295,9 @@ def validate_dataset(df):
         '通勤时长分钟',
         '是否慢性病史',
         '请假类型',
+        '事件序号',
+        '事件日期索引',
+        '员工历史事件数',
         '缺勤时长(小时)',
     ]
     for column in required_columns:
 
@@ -309,7 +334,7 @@ def generate_dataset(output_path=None, sample_count=12000, random_state=None):
     for idx in employee_idx:
         events.append(sample_event(rng, employees[int(idx)]))
 
-    df = pd.DataFrame(events)
+    df = attach_event_timeline(pd.DataFrame(events))
     validate_dataset(df)
 
     if output_path:
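backend/core/train_model.py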
@@ -1,6 +1,7 @@
 import os
 import sys
 import time
+import inspect
 from datetime import datetime
 
 import joblib
 
@@ -14,6 +15,8 @@ from sklearn.preprocessing import RobustScaler
 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
 
 import config
+from core.deep_learning_model import is_available as deep_learning_available
+from core.deep_learning_model import train_lstm_mlp
 from core.model_features import (
     NUMERICAL_OUTLIER_COLUMNS,
     ORDINAL_COLUMNS,
 
@@ -43,6 +46,37 @@ except ImportError:
     xgb = None
 
 
+def patch_lightgbm_sklearn_compatibility():
+    # LightGBM 4.1 forwards the legacy `force_all_finite` argument to
+    # scikit-learn's check_X_y via its _LGBMCheckXY alias; newer scikit-learn
+    # releases only accept the renamed `ensure_all_finite`, so translate it.
+    if lgb is None:
+        return
+
+    try:
+        from sklearn.utils.validation import check_X_y
+    except Exception:
+        return
+
+    params = inspect.signature(check_X_y).parameters
+    if 'force_all_finite' in params or 'ensure_all_finite' not in params:
+        # Old keyword still accepted (or new one absent): nothing to patch.
+        return
+
+    def wrapped_check_X_y(*args, force_all_finite=None, **kwargs):
+        if force_all_finite is not None and 'ensure_all_finite' not in kwargs:
+            kwargs['ensure_all_finite'] = force_all_finite
+        return check_X_y(*args, **kwargs)
+
+    try:
+        import lightgbm.compat as lgb_compat
+        import lightgbm.sklearn as lgb_sklearn
+
+        lgb_compat._LGBMCheckXY = wrapped_check_X_y
+        lgb_sklearn._LGBMCheckXY = wrapped_check_X_y
+    except Exception:
+        pass
+
+
+patch_lightgbm_sklearn_compatibility()
 
 
 def print_training_log(model_name, start_time, best_score, best_params, n_iter, cv_folds):
     elapsed = time.time() - start_time
     print(f' {"-" * 50}')
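Background for this patch: scikit-learn deprecated the `force_all_finite` argument of its validation helpers in favor of `ensure_all_finite` (deprecated in 1.6 and removed in a later release), while LightGBM 4.1 still forwards the old name through its `_LGBMCheckXY` alias, so `LGBMRegressor.fit` can raise a `TypeError` on newer scikit-learn builds. A quick standalone check of whether a given environment needs the shim, using the same signature inspection the patch itself relies on:

```python
import inspect

from sklearn.utils.validation import check_X_y

params = inspect.signature(check_X_y).parameters
# The patch only activates when the old keyword is gone and the new one exists.
needs_patch = 'force_all_finite' not in params and 'ensure_all_finite' in params
print(needs_patch)
```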
@@ -68,6 +102,10 @@ class OptimizedModelTrainer:
         self.feature_k = 22
         self.target_transform = 'log1p'
         self.enabled_models = ['random_forest', 'gradient_boosting', 'extra_trees', 'lightgbm', 'xgboost']
+        if deep_learning_available():
+            self.enabled_models.append('lstm_mlp')
+        self.raw_train_df = None
+        self.raw_test_df = None
 
     def analyze_data(self, df):
         y = df[TARGET_COLUMN]
 
@@ -96,19 +134,21 @@ class OptimizedModelTrainer:
         return self.feature_selector.transform(X) if self.feature_selector else X
 
     def prepare_data(self):
-        df = normalize_columns(get_clean_data())
-        df = prepare_modeling_dataframe(df)
-        self.analyze_data(df)
+        raw_df = normalize_columns(get_clean_data())
+        self.analyze_data(prepare_modeling_dataframe(raw_df.copy()))
 
-        target_bins = make_target_bins(df[TARGET_COLUMN].values)
-        train_df, test_df = train_test_split(
-            df,
+        target_bins = make_target_bins(raw_df[TARGET_COLUMN].values)
+        raw_train_df, raw_test_df = train_test_split(
+            raw_df,
             test_size=config.TEST_SIZE,
             random_state=config.RANDOM_STATE,
             stratify=target_bins,
         )
-        train_df = train_df.reset_index(drop=True)
-        test_df = test_df.reset_index(drop=True)
+        self.raw_train_df = raw_train_df.reset_index(drop=True)
+        self.raw_test_df = raw_test_df.reset_index(drop=True)
+
+        train_df = prepare_modeling_dataframe(self.raw_train_df)
+        test_df = prepare_modeling_dataframe(self.raw_test_df)
 
         self.outlier_bounds = fit_outlier_bounds(train_df, NUMERICAL_OUTLIER_COLUMNS)
         train_df = apply_outlier_bounds(train_df, self.outlier_bounds)
@@ -138,7 +178,8 @@ class OptimizedModelTrainer:
             'feature_count_after_selection': int(X_train.shape[1]),
             'training_date': datetime.now().strftime('%Y-%m-%d'),
             'target_transform': self.target_transform,
-            'available_models': list(self.enabled_models),
+            'available_models': [],
+            'deep_learning_available': False,
         }
         return X_train, X_test, y_train, y_test
 
@@ -206,20 +247,25 @@ class OptimizedModelTrainer:
     def train_lightgbm(self, X_train, y_train):
         if lgb is None:
             return
-        self._run_search(
-            'lightgbm',
-            lgb.LGBMRegressor(random_state=config.RANDOM_STATE, n_jobs=-1, verbose=-1),
-            {
-                'n_estimators': [180, 260, 340],
-                'max_depth': [7, 9, -1],
-                'learning_rate': [0.03, 0.05, 0.08],
-                'subsample': [0.7, 0.85, 1.0],
-                'colsample_bytree': [0.7, 0.85, 1.0],
-                'num_leaves': [31, 50, 70],
-            },
-            X_train,
-            y_train,
-        )
+        try:
+            self._run_search(
+                'lightgbm',
+                lgb.LGBMRegressor(random_state=config.RANDOM_STATE, n_jobs=-1, verbose=-1),
+                {
+                    'n_estimators': [180, 260, 340],
+                    'max_depth': [7, 9, -1],
+                    'learning_rate': [0.03, 0.05, 0.08],
+                    'subsample': [0.7, 0.85, 1.0],
+                    'colsample_bytree': [0.7, 0.85, 1.0],
+                    'num_leaves': [31, 50, 70],
+                },
+                X_train,
+                y_train,
+            )
+        except Exception as exc:
+            # If LightGBM is still incompatible with the installed
+            # scikit-learn, skip it rather than aborting the whole run.
+            print(f' {"-" * 50}')
+            print(' Model: lightgbm')
+            print(f' Skipped: {exc}')
 
     def train_xgboost(self, X_train, y_train):
         if xgb is None:
@@ -254,6 +300,7 @@ class OptimizedModelTrainer:
         os.makedirs(config.MODELS_DIR, exist_ok=True)
         for name, model in self.models.items():
             joblib.dump(model, os.path.join(config.MODELS_DIR, f'{name}_model.pkl'))
+        self.training_metadata['available_models'] = list(self.model_metrics.keys())
         joblib.dump(self.scaler, config.SCALER_PATH)
         joblib.dump(self.feature_names, os.path.join(config.MODELS_DIR, 'feature_names.pkl'))
         joblib.dump(self.selected_features, os.path.join(config.MODELS_DIR, 'selected_features.pkl'))
 
@@ -282,6 +329,23 @@ class OptimizedModelTrainer:
             self.model_metrics[name] = metrics
             print(f' {name:20s} R2={metrics["r2"]:.4f} RMSE={metrics["rmse"]:.4f} MAE={metrics["mae"]:.4f}')
 
+        if 'lstm_mlp' in self.enabled_models and self.raw_train_df is not None and self.raw_test_df is not None:
+            deep_model_path = os.path.join(config.MODELS_DIR, 'lstm_mlp_model.pt')
+            deep_result = train_lstm_mlp(
+                self.raw_train_df,
+                self.raw_test_df,
+                deep_model_path,
+                target_transform=self.target_transform,
+            )
+            if deep_result:
+                self.model_metrics['lstm_mlp'] = deep_result['metrics']
+                self.training_metadata['deep_learning_available'] = True
+                self.training_metadata.update(deep_result['metadata'])
+                print(
+                    f' {"lstm_mlp":20s} R2={deep_result["metrics"]["r2"]:.4f} '
+                    f'RMSE={deep_result["metrics"]["rmse"]:.4f} MAE={deep_result["metrics"]["mae"]:.4f}'
+                )
+
         self.save_models()
         return self.model_metrics
 
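backend/requirements.txt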
@@ -10,6 +10,7 @@ numpy==1.24.3
 scikit-learn==1.3.0
 xgboost==1.7.6
 lightgbm==4.1.0
+torch==2.6.0
 joblib==1.3.1
 
 # Utilities
 
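backend/services/predict_service.py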
@@ -4,6 +4,7 @@ import joblib
 import numpy as np
 
 import config
+from core.deep_learning_model import load_lstm_mlp_bundle, predict_lstm_mlp
 from core.model_features import (
     align_feature_frame,
     apply_label_encoders,
 
@@ -20,6 +21,7 @@ MODEL_INFO = {
     'gradient_boosting': {'name': 'gradient_boosting', 'name_cn': 'GBDT', 'description': '梯度提升决策树'},
     'extra_trees': {'name': 'extra_trees', 'name_cn': '极端随机树', 'description': '高随机性的树模型'},
     'stacking': {'name': 'stacking', 'name_cn': 'Stacking集成', 'description': '多模型融合'},
+    'lstm_mlp': {'name': 'lstm_mlp', 'name_cn': 'LSTM+MLP', 'description': '时序与静态特征融合的深度学习模型'},
 }
 
@@ -50,6 +52,7 @@ class PredictService:
             'gradient_boosting': 'gradient_boosting_model.pkl',
             'extra_trees': 'extra_trees_model.pkl',
             'stacking': 'stacking_model.pkl',
+            'lstm_mlp': 'lstm_mlp_model.pt',
         }
         allowed_models = self.training_metadata.get('available_models')
         if allowed_models:
 
@@ -59,7 +62,12 @@ class PredictService:
             path = os.path.join(config.MODELS_DIR, filename)
             if os.path.exists(path):
                 try:
-                    self.models[name] = joblib.load(path)
+                    if name == 'lstm_mlp':
+                        bundle = load_lstm_mlp_bundle(path)
+                        if bundle is not None:
+                            self.models[name] = bundle
+                    else:
+                        self.models[name] = joblib.load(path)
                 except Exception as exc:
                     print(f'Failed to load model {name}: {exc}')
 
@@ -107,8 +115,12 @@ class PredictService:
         features = self._prepare_features(data)
         try:
-            predicted_hours = self.models[model_type].predict([features])[0]
-            predicted_hours = self._inverse_transform_prediction(predicted_hours)
+            if model_type == 'lstm_mlp':
+                current_df = build_prediction_dataframe(data)
+                predicted_hours = predict_lstm_mlp(self.models[model_type], current_df)
+            else:
+                predicted_hours = self.models[model_type].predict([features])[0]
+                predicted_hours = self._inverse_transform_prediction(predicted_hours)
             predicted_hours = max(0.5, float(predicted_hours))
         except Exception:
             return self._get_default_prediction(data)
@@ -196,6 +208,8 @@ class PredictService:
             'test_samples': self.training_metadata.get('test_samples', 0),
             'feature_count': self.training_metadata.get('feature_count_after_selection', 0),
             'training_date': self.training_metadata.get('training_date', ''),
+            'sequence_window_size': self.training_metadata.get('sequence_window_size', 0),
+            'deep_learning_available': self.training_metadata.get('deep_learning_available', False),
         },
     }
 
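docs/01_系统架构设计.md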
@@ -5,7 +5,7 @@
 The system uses a decoupled frontend/backend architecture:
 
 - Frontend: Vue 3 + Vue Router + Element Plus + ECharts
-- Backend: Flask + Pandas + Scikit-learn + Joblib
+- Backend: Flask + Pandas + Scikit-learn + PyTorch + Joblib
 - Data layer: CSV data files + model files
 
 The overall architecture is divided into four layers:
 
@@ -53,6 +53,7 @@
 
 - `preprocessing.py`: data cleaning and preprocessing
 - `model_features.py`: feature construction and prediction-input mapping
 - `train_model.py`: model training and evaluation
+- `deep_learning_model.py`: LSTM+MLP deep-learning training and inference
 - `feature_mining.py`: correlation analysis and group comparison
 - `clustering.py`: K-Means clustering analysis
 
@@ -67,6 +68,12 @@
 
 5. Train multiple models and evaluate their performance
 6. Save the models, feature information, and training metadata
 
+The deep-learning path works as follows (see the shape sketch after this list):
+
+- the `LSTM` processes the time-window sequence built from an employee's most recent absence events
+- the `MLP` processes the employee's static attribute features
+- a fusion layer outputs the absence-duration regression result
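A shape-level illustration of this fusion path — a minimal sketch against the `LSTMMLPRegressor` class added in this commit, run from the `backend` directory; the dimensions 15, 13, and 5 match the commit's `SEQUENCE_FEATURES`, `STATIC_FEATURES`, and `WINDOW_SIZE`:

```python
import torch

from core.deep_learning_model import LSTMMLPRegressor

# Batch of 2 employees: a 5-step window of 15 sequence features each,
# plus 13 static attributes per employee.
model = LSTMMLPRegressor(seq_input_dim=15, static_input_dim=13)
sequence_x = torch.randn(2, 5, 15)
static_x = torch.randn(2, 13)

# One predicted absence duration per employee: torch.Size([2])
print(model(sequence_x, static_x).shape)
```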
 ### 4.2 Prediction Flow
 
 1. The frontend submits the core prediction fields
 
@@ -121,6 +128,12 @@ frontend/
 
 - Well suited to traditional machine-learning modeling
 - Provides mature algorithms such as random forest, GBDT, and Extra Trees
 
+### 6.4 PyTorch
+
+- Implements the LSTM+MLP deep-learning model
+- Supports fusing sequential features with static features
+- Makes it easy to add a deep-learning comparison experiment to the thesis
 
 ## 7. Deployment
 
 - Local frontend dev server: Vite
 
@@ -133,3 +146,4 @@ frontend/
 
 - Clear frontend/backend separation of responsibilities
 - Fast to demonstrate charts and prediction results
 - Extensible toward databases or more complex model architectures
+- Supports experimental comparison of traditional ML and deep-learning models
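docs/02_接口设计文档.md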
@@ -136,13 +136,18 @@
 - URL: `/api/predict/models`
 - Method: `GET`
-- Description: returns the available models and their performance metrics
+- Description: returns the available models and their performance metrics, covering both the traditional models and the `LSTM+MLP` deep-learning model
 
 ### 4.4 Get Model Info
 
 - URL: `/api/predict/model-info`
 - Method: `GET`
-- Description: returns the training sample count, feature count, and training date
+- Description: returns the training sample count, feature count, training date, and deep-learning window info
 
+Examples of the newly added response fields:
 
+- `sequence_window_size`
+- `deep_learning_available`
 
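For orientation, a hedged sketch of the extended `info` block that `model-info` can now return, based on the fields the predict service assembles (the values below are illustrative only):

```python
# Illustrative values — the real numbers come from the training metadata.
info = {
    'test_samples': 2400,
    'feature_count': 22,
    'training_date': '2026-03-01',
    'sequence_window_size': 5,        # new: LSTM input window length
    'deep_learning_available': True,  # new: whether lstm_mlp was trained
}
```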
 ## 5. Employee Profiling Endpoints
 
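docs/03_数据设计文档.md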
@@ -74,6 +74,10 @@
 - 星期几
 - 是否节假日前后
 - 季节
+- 事件日期
+- 事件日期索引
+- 事件序号
+- 员工历史事件数
 - 请假申请渠道
 - 请假类型
 - 请假原因大类
 
@@ -129,6 +133,23 @@
 
 - A chronic-illness history and abnormal health indicators increase absence duration
 - Annual leave and compensatory leave usually correspond to shorter absences
 
+### 6.3 Sequence Sample Construction
+
+To support the LSTM+MLP deep-learning model, the dataset adds sequence fields at the event level:
+
+- `事件日期`: the date the absence event occurred
+- `事件日期索引`: a numeric time index for sorting and window slicing
+- `事件序号`: the event's order within a single employee's records
+- `员工历史事件数`: the total number of events for that employee in the dataset
+
+Deep-learning samples are constructed as follows (see the padding sketch after this list):
+
+- sort each employee's events by `事件日期索引` and `事件序号`
+- take the most recent `5` absence events as the time-window input
+- zero-pad at the front when the sequence is shorter than the window
+- the current event occupies the last time step of the window
+- static features feed a separate MLP branch and are fused with the LSTM output for regression
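The front-padding rule can be made concrete with a small standalone sketch (NumPy only, mirroring the windowing logic in `deep_learning_model.py`):

```python
import numpy as np

WINDOW_SIZE = 5

# Three past events for one employee, two features per event (toy values).
seq_values = np.array([[1.0, 0.0], [2.0, 1.0], [3.0, 0.0]], dtype=np.float32)

index = 2  # building the sample for the employee's third (current) event
window_slice = seq_values[max(0, index - WINDOW_SIZE + 1): index + 1]

# Front zero-padding: the current event lands in the last time step.
window = np.zeros((WINDOW_SIZE, seq_values.shape[1]), dtype=np.float32)
window[-len(window_slice):] = window_slice
print(window)
```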
 ## 7. Data Quality Requirements
 
 - No large amount of missing values
 
@@ -4,7 +4,7 @@
 As enterprises digitize their management, employee absence analysis has become an important topic in human-resource management research. To address the problems of traditional absence management — reliance on manual statistics, low analysis efficiency, and weak risk early-warning — this thesis designs and implements an absence-event analysis and prediction system for Chinese enterprise employees. Around absence-event data, the system builds four core modules: data overview, influencing-factor analysis, absence-risk prediction, and employee group profiling, implementing absence-duration statistics, key-factor mining, multi-model prediction, and cluster-profile visualization.
 
-In the implementation, the backend uses the Flask framework for the API service and completes data processing, feature engineering, model training, and prediction with Pandas and Scikit-learn; the frontend builds an interactive visualization interface with Vue 3, Element Plus, and ECharts. For the graduation-design scenario, the system constructs an employee absence-event dataset reflecting Chinese enterprise characteristics and designs key influencing factors such as leave type, hospital certificates, overtime-commute pressure, and health risk. Experiments show that the system performs the absence-duration prediction task well and visually presents absence trends, influencing factors, and employee group characteristics.
+In the implementation, the backend uses the Flask framework for the API service and completes data processing, feature engineering, model training, and prediction with Pandas, Scikit-learn, and PyTorch; the frontend builds an interactive visualization interface with Vue 3, Element Plus, and ECharts. For the graduation-design scenario, the system constructs an employee absence-event dataset reflecting Chinese enterprise characteristics and designs key influencing factors such as leave type, hospital certificates, overtime-commute pressure, and health risk. In addition, to strengthen the algorithmic content of the thesis, the system introduces an LSTM+MLP deep-learning model that fuses each employee's historical absence-event sequence with static attribute features. Experiments show that the system performs the absence-duration prediction task well and visually presents absence trends, influencing factors, and employee group characteristics.
 
 This work offers a useful reference for enterprise absence analysis and management decision support, and lays a foundation for extensions toward employee behavior analysis, attrition early-warning, and performance management.
 
@@ -14,6 +14,7 @@
 - Risk prediction
 - Feature mining
 - Machine learning
+- Deep learning
 - Visualization system
 - Vue
 - Flask
 
@@ -19,7 +19,8 @@
 - 2.2 The Vue 3 frontend framework
 - 2.3 ECharts visualization technology
 - 2.4 Machine-learning algorithms
-- 2.5 The K-Means clustering method
+- 2.5 Deep-learning algorithms
+- 2.6 The K-Means clustering method
 
 ### Chapter 3: System Requirements Analysis
 
@@ -41,15 +42,16 @@
 - 5.1 Data-overview module implementation
 - 5.2 Influencing-factor analysis module implementation
 - 5.3 Absence-prediction module implementation
-- 5.4 Employee-profiling module implementation
-- 5.5 Frontend interface implementation
+- 5.4 LSTM+MLP deep-learning model implementation
+- 5.5 Employee-profiling module implementation
+- 5.6 Frontend interface implementation
 
 ### Chapter 6: System Testing and Result Analysis
 
 - 6.1 Test environment
 - 6.2 Functional testing
 - 6.3 API testing
-- 6.4 Model performance analysis
+- 6.4 Comparison of traditional and deep-learning models
 - 6.5 Presentation analysis
 
 ### Chapter 7: Conclusion and Outlook
 
@@ -27,6 +27,8 @@
 - The componentization advantages of Vue 3
 - The visualization capabilities of Element Plus and ECharts
 - The fundamentals of random forest, GBDT, and Extra Trees
+- The fundamentals of LSTM and MLP
+- Sequence modeling and multi-input fusion
 - The idea behind K-Means clustering
 
 ## Chapter 3: System Requirements Analysis
 
@@ -71,6 +73,7 @@
 
 - Data generation and preprocessing
 - Feature engineering
 - Model training and persistence
+- The LSTM+MLP deep-learning training flow
 - Backend API implementation
 - Frontend page implementation
 - Prediction-page card layout and interaction
 
@@ -89,6 +92,7 @@
 
 - Prediction feature testing
 - Clustering and analysis result testing
 - Model performance metric analysis
+- Comparative analysis of traditional and deep-learning models
 
 ## Chapter 7: Conclusion and Outlook
 
@@ -45,6 +45,7 @@
 - Clear decoupled frontend/backend structure
 - Multiple models trained and compared
+- An LSTM+MLP deep-learning model for sequential behavior modeling
 - Feature engineering combined with clustering analysis
 - Card-style visualization layout, well suited to presentation
 
docs/09_环境配置与安装说明.md (new file)
@@ -0,0 +1,193 @@
# Environment Setup and Installation Guide

## 1. Recommended Environment

To ensure that both the traditional machine-learning models and the `LSTM+MLP` deep-learning model train correctly, it is recommended to manage this project's dependencies in a **conda virtual environment**.

Recommended environment:

- OS: Windows 10 / Windows 11
- Python: 3.11
- Conda: Anaconda or Miniconda
- Node.js: 16+
- pnpm: 8+
- CUDA: should match the PyTorch GPU wheel version

## 2. Create the conda Virtual Environment

```powershell
conda create -n forsetenv python=3.11 -y
conda activate forsetenv
```

Note:

- All subsequent Python dependency installs, data generation, model training, and backend startup should happen inside the `forsetenv` environment.

## 3. Recommended Installation Order

Follow this order strictly:

1. Create and activate the `conda` virtual environment
2. Install the GPU build of `PyTorch` on its own
3. Install the remaining backend dependencies
4. Install the frontend dependencies

Why:

- `backend/requirements.txt` contains `torch==2.6.0`
- running `pip install -r backend/requirements.txt` first on Windows may install an unintended build
- so for the deep-learning environment, run the official `cu124` install command first, then fill in the remaining dependencies

## 4. Install the GPU Build of PyTorch

The project's hybrid deep-learning model requires:

- `torch >= 2.6`

Recommended approach:

- use the **official pip cu124 wheels**
- avoid letting conda on Windows resolve to a `cpu_mkl` build

Install command:

```powershell
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
```

## 5. Install the Remaining Backend Dependencies

If you installed the GPU build of `torch` in the previous step, install the rest of the backend dependencies:

```powershell
pip install Flask==2.3.3 Flask-CORS==4.0.0 python-dotenv==1.0.0
pip install pandas==2.0.3 numpy==1.24.3 scikit-learn==1.3.0 joblib==1.3.1
pip install xgboost==1.7.6 lightgbm==4.1.0
```

If you still prefer to use the requirements file directly, run this after the GPU `torch` install:

```powershell
pip install -r backend/requirements.txt
```

This normally does not disturb the installed `cu124` build; if there is any risk it was overwritten, rerun the GPU install command from the previous section.

## 6. Install the Frontend Dependencies

```powershell
cd frontend
pnpm install
```

## 7. End-to-End Example

A recommended `conda` environment setup, start to finish:

```powershell
conda create -n forsetenv python=3.11 -y
conda activate forsetenv
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install Flask==2.3.3 Flask-CORS==4.0.0 python-dotenv==1.0.0
pip install pandas==2.0.3 numpy==1.24.3 scikit-learn==1.3.0 joblib==1.3.1
pip install xgboost==1.7.6 lightgbm==4.1.0
cd frontend
pnpm install
```

## 8. Verify the Installation

### 8.1 Verify the Base Dependencies

```powershell
python -c "import pandas,numpy,sklearn,flask;print('base ok')"
```

### 8.2 Verify the Traditional-Model Dependencies

```powershell
python -c "import xgboost,lightgbm;print('ml ok')"
```

### 8.3 Verify PyTorch GPU

```powershell
python -c "import torch;print(torch.__version__);print(torch.cuda.is_available())"
```

If the output is `True`, the GPU build of PyTorch is working.

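If it prints `True`, you can optionally confirm which GPU was detected (the same `torch.cuda.get_device_name` call the training log uses):

```powershell
python -c "import torch;print(torch.cuda.get_device_name(0))"
```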
## 9. Project Startup Order

### 9.1 Generate the Dataset

```powershell
cd backend
python core/generate_dataset.py
```

### 9.2 Train the Models

```powershell
python core/train_model.py
```

### 9.3 Start the Backend

```powershell
python app.py
```

### 9.4 Start the Frontend

```powershell
cd ..\frontend
pnpm dev
```

## 10. FAQ

### 10.1 PyTorch was installed as the CPU build

Causes:

- a plain `pip install torch` was used
- or conda on Windows resolved to a CPU build automatically

Recommendation:

- use the official `cu124` install command given in this document

### 10.2 The deep-learning model cannot be loaded during training

Things to check:

- whether the `forsetenv` conda environment is active
- whether `torch` installed successfully
- whether `torch.cuda.is_available()` returns `True`

### 10.3 xgboost / lightgbm is missing

Run:

```powershell
pip install xgboost==1.7.6 lightgbm==4.1.0
```

### 10.4 How to confirm the conda environment is active

Run:

```powershell
conda info --envs
where python
```

If the current environment is `forsetenv` and `python` resolves into that environment's directory, the switch succeeded.

## 11. Recommendations

- For the thesis-defense demo and experiments, always run `conda activate forsetenv` first
- Prefer a GPU environment when training the deep-learning model
- If only the UI needs showing, train the traditional models first and add the deep-learning results later

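docs/README.md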
@@ -17,8 +17,14 @@
 - [07_毕业论文写作提纲.md](D:/VScodeProject/forsetsystem/docs/07_毕业论文写作提纲.md)
 - [08_答辩汇报提纲.md](D:/VScodeProject/forsetsystem/docs/08_答辩汇报提纲.md)
 
+## Environment Setup Documentation
+
+- [09_环境配置与安装说明.md](D:/VScodeProject/forsetsystem/docs/09_环境配置与安装说明.md)
+
 ## Notes
 
 - The system documentation follows the current implementation, centered on absence analysis, risk prediction, and group profiling for Chinese enterprise employees.
 - The thesis documents use a standard undergraduate graduation-design structure, making them easy to expand into the final thesis.
 - If system features or fields change later, the documents in this directory should be updated in sync.
+- For the deep-learning part, use a `conda` virtual environment and install the GPU build of PyTorch via `pip`.
+- Recommended install order: create the `conda` environment, install the official `cu124` PyTorch, then add the remaining backend dependencies.