PaddleSeg: Machine-Learning-Based Image Segmentation

Installing PaddleSeg in PyCharm

  1. On the PyCharm welcome screen, use Get from VCS to clone the PaddlePaddle/PaddleSeg project from GitHub.

  2. After the clone succeeds, PyCharm prompts you to create a venv virtual environment; click OK. Open a terminal in PyCharm: if the prompt is prefixed with (venv), the environment was created successfully. If it is not, close the terminal, click Add New Interpreter to add one, then reopen the terminal in PyCharm.

    For example:
    (venv) ➜ PaddleSeg git:(release/2.9)
  3. Install paddle: in the PyCharm Terminal, run pip install paddlepaddle.

  4. Install setuptools: in the PyCharm Terminal, run pip install -U pip setuptools.

  5. [Optional] Verify that paddle is installed: in the PyCharm Python Console, run import paddle followed by paddle.utils.run_check(); to check the version, run print(paddle.__version__).

  6. Install paddleseg: in the PyCharm Terminal, run pip install paddleseg. (The released package is installed directly here; building from source locally was not tested successfully.)

  7. Verify the installation: in the PyCharm Terminal, run sh tests/install/check_predict.sh (an additional Python-level check is sketched below).
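
As an extra check beyond check_predict.sh, the following minimal sketch (run in the venv's Python Console) confirms that both packages import and prints their installed versions. It assumes a standard pip installation, so importlib.metadata can resolve the distribution names paddlepaddle and paddleseg.

# Minimal sanity check for the paddle / paddleseg installation (run inside the venv).
from importlib.metadata import version

import paddle
import paddleseg  # noqa: F401  (imported only to confirm the package loads)

paddle.utils.run_check()                       # Paddle's built-in self test
print("paddlepaddle:", version("paddlepaddle"))
print("paddleseg:   ", version("paddleseg"))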

Using PaddleSeg

Preparing a Custom Dataset

Annotation Tools (Labeling the Data)

PaddleSeg supports two annotation tools: LabelMe and the 精灵 (Jingling) data annotation tool.

  1. Download and install labelme.

  2. Label the data following the labelme documentation. The basic workflow is: open the image directory -> Create Polygons -> trace the region you want to label -> save, which writes a .json file into the image directory. Label every image in the directory.

    # Directory structure before labeling
    paddle
    |--image1.jpg
    |--image2.jpg
    |--...
    # Directory structure after labeling
    paddle
    |--image1.jpg
    |--image2.jpg
    |--...
    |--image1.json
    |--image2.json
    |--...
  3. Convert the labeled data into the format required for model training (a small consistency-check sketch follows the listing below).

    # python tools/data/labelme2seg.py [-h] <image_dir> <output_dir>
    python tools/data/labelme2seg.py /Users/x/Downloads/paddle /Users/xuanleung/Downloads/paddle_pr
    # Directory structure after conversion
    paddle
    paddle_pr
    |--annotations # label images (red background, green annotated regions)
    | |--image1.png
    | |--image2.png
    | |--...
    |--images
    | |--image1.jpg
    | |--image2.jpg
    | |--...
    |--class_names.txt # names of all annotation classes in the dataset
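
To catch conversion problems early, the hedged sketch below walks the converted paddle_pr directory, checks that every image in images/ has a matching mask in annotations/, and counts the entries in class_names.txt as a starting point for num_classes in the training config. The paths and file names are the ones produced above; whether the background is already listed in class_names.txt depends on the converter version, so double-check the count before using it.

# Quick consistency check for the converted dataset (layout as shown above).
from pathlib import Path

root = Path("/Users/xuanleung/Downloads/paddle_pr")  # adjust to your output directory
images = {p.stem for p in (root / "images").glob("*.jpg")}
masks = {p.stem for p in (root / "annotations").glob("*.png")}

print("images without a mask:", sorted(images - masks))
print("masks without an image:", sorted(masks - images))

classes = [line.strip() for line in (root / "class_names.txt").read_text().splitlines() if line.strip()]
print("classes found:", classes)
# num_classes in the PaddleSeg config counts the background as a class;
# verify whether class_names.txt already includes it before copying this number.
print("candidate num_classes:", len(classes))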

Splitting the Data

  1. Split the data: all original images and their annotation images need to be divided into training, validation and test sets according to the chosen ratios (a verification sketch follows the FLAGS table below).

    # python tools/data/split_dataset_list.py <dataset_root: dataset root directory> <images_dir_name: original images directory> <labels_dir_name: annotation images directory> ${FLAGS}
    python tools/data/split_dataset_list.py /Users/x/Downloads/paddle_pr images annotations --split 0.6 0.2 0.2 --format jpg png
    # Directory structure after running the script
    paddle_pr
    |--test.txt # test set
    |--val.txt # validation set
    |--train.txt # training set
    |--annotations # label images (red background, green annotated regions)
    | |--image1.png
    | |--image2.png
    | |--...
    |--images
    | |--image1.jpg
    | |--image2.jpg
    | |--...
    |--class_names.txt # names of all annotation classes in the dataset
    # Format of each line in the txt files
    images/image1.jpg annotations/image1.png
    images/image2.jpg annotations/image2.png
    ....

    FLAGS:

    FLAG         Meaning                                                           Default                      Number of values
    --split      Ratio for splitting into training, validation and test sets      0.7 0.3 0                    3
    --separator  Separator used in the txt list files                              " "                          1
    --format     File extensions of the original images and annotation images      "jpg" "png"                  2
    --postfix    Keep only images/labels whose base file name (without             "" "" (two empty strings)    2
                 extension) ends with the given suffix
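
The sketch below is an optional sanity check on the split: it counts the entries in train.txt, val.txt and test.txt and confirms that every listed image/label pair actually exists. It only assumes the txt format shown above (image path, space, label path, relative to the dataset root) and reuses the dataset path from the command above.

# Verify the dataset split produced by split_dataset_list.py.
from pathlib import Path

root = Path("/Users/x/Downloads/paddle_pr")  # dataset root used above

for name in ("train.txt", "val.txt", "test.txt"):
    lines = [l.split() for l in (root / name).read_text().splitlines() if l.strip()]
    missing = [pair for pair in lines
               if not (root / pair[0]).exists() or not (root / pair[1]).exists()]
    print(f"{name}: {len(lines)} samples, {len(missing)} missing file pairs")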

Preparing the Configuration File

Copy the paddle_pr dataset directory into the project root, then open configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml and change these two path settings:

train_dataset:
  dataset_root: paddle_pr # dataset root path
  train_path: paddle_pr/train.txt # list file of training samples

Notes on the configuration file:

batch_size: 4  # Number of images fed to the network per iteration. The larger the GPU memory, the larger batch_size can be. With multi-GPU training, the total batch size equals this value multiplied by the number of GPUs.
iters: 1000 # Number of training iterations

train_dataset: # Training dataset settings
  type: Dataset # Dataset class to load. The dataset classes live under `PaddleSeg/paddleseg/datasets`.
  dataset_root: data/optic_disc_seg # Dataset root path
  train_path: data/optic_disc_seg/train_list.txt # List file of training samples
  num_classes: 2 # Number of classes (the background counts as one class)
  mode: train # Used for training
  transforms: # Data preprocessing for training
    - type: ResizeStepScaling # Randomly scale the original and annotation images by a factor of 0.5-2.0
      min_scale_factor: 0.5
      max_scale_factor: 2.0
      scale_step_size: 0.25
    - type: RandomPaddingCrop # Randomly crop a 512x512 patch from the original and annotation images
      crop_size: [512, 512]
    - type: RandomHorizontalFlip # Randomly flip the original and annotation images horizontally
    - type: RandomDistort # Randomly perturb brightness, contrast and saturation of the original image; the annotation image is unchanged
      brightness_range: 0.5
      contrast_range: 0.5
      saturation_range: 0.5
    - type: Normalize # Normalize the original image; the annotation image is unchanged

val_dataset: # Validation dataset settings
  type: Dataset # Dataset class to load. The dataset classes live under `PaddleSeg/paddleseg/datasets`.
  dataset_root: data/optic_disc_seg # Dataset root path
  val_path: data/optic_disc_seg/val_list.txt # List file of validation samples
  num_classes: 2 # Number of classes (the background counts as one class)
  mode: val # Used for validation
  transforms: # Data preprocessing for validation
    - type: Normalize # Normalize the original image; the annotation image is unchanged

optimizer: # Optimizer settings
  type: SGD # Use SGD (Stochastic Gradient Descent) as the optimizer
  momentum: 0.9 # Momentum for SGD
  weight_decay: 4.0e-5 # Weight decay, used to prevent overfitting

lr_scheduler: # Learning-rate schedule settings
  type: PolynomialDecay # One of the learning-rate schedules; 12 strategies are supported in total
  learning_rate: 0.01 # Initial learning rate
  power: 0.9
  end_lr: 0

loss: # Loss function settings
  types:
    - type: CrossEntropyLoss # CE loss
  coef: [1, 1, 1] # PP-LiteSeg has one main loss and two auxiliary losses; coef gives their weights, so total_loss = coef_1 * loss_1 + ... + coef_n * loss_n

model: # Model settings
  type: PPLiteSeg # Model type
  backbone: # Backbone of the model, including its name and pretrained weights
    type: STDC2
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
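
Two numbers in this config are easy to misread: iters counts optimizer steps, not epochs, and the learning rate follows a polynomial decay rather than staying fixed. The sketch below is a rough illustration, not Paddle's exact implementation: it converts iterations to passes over a small training set (the training log further down shows epoch equal to iter, which suggests the quick-start set holds no more than one batch of images, so the size of 4 is an inferred example value) and evaluates the usual polynomial-decay formula.

# Rough arithmetic for batch_size / iters / lr_scheduler in the config above.
batch_size, iters = 4, 1000
num_train_images = 4  # inferred example value; check the length of paddle_pr/train.txt for your dataset
epochs = iters * batch_size / num_train_images
print(f"{iters} iters x {batch_size} images/batch ≈ {epochs:.0f} passes over {num_train_images} images")

# PolynomialDecay without cycling: lr = (lr0 - end_lr) * (1 - iter/iters)^power + end_lr
lr0, end_lr, power = 0.01, 0.0, 0.9
for it in (0, 500, 999):
    lr = (lr0 - end_lr) * (1 - it / iters) ** power + end_lr
    print(f"iter {it:4d}: lr ≈ {lr:.6f}")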

Training the Model

Add a Python run configuration in PyCharm: set Script to tools/train.py and Script parameters to --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --save_interval 500 --do_eval --use_vdl --save_dir output, then click Run in the top-right corner.
The training log looks like this:

/Users/x/workspace/PaddleSeg/venv/bin/python -X pycache_prefix=/Users/x/Library/Caches/JetBrains/PyCharm2023.3/cpython-cache /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 64923 --file /Users/x/workspace/PaddleSeg/tools/train.py --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --save_interval 500 --do_eval --use_vdl --save_dir output 
Connected to pydev debugger (build 233.15026.15)
2024-04-02 18:25:03 [WARNING] Add the `num_classes` in train_dataset and val_dataset config to model config. We suggest you manually set `num_classes` in model config.
2024-04-02 18:25:03 [INFO]
------------Environment Information-------------
platform: macOS-13.6.6-x86_64-i386-64bit
Python: 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Paddle compiled with cuda: False
GCC: Apple clang version 14.0.3 (clang-1403.0.22.14.1)
PaddleSeg: 2.8.1
PaddlePaddle: 2.6.1
OpenCV: 4.5.5
------------------------------------------------
2024-04-02 18:25:03 [INFO]
---------------Config Information---------------
batch_size: 4
iters: 1000
train_dataset:
dataset_root: paddle_pr
mode: train
num_classes: 2
train_path: paddle_pr/train.txt
transforms:
- max_scale_factor: 2.0
min_scale_factor: 0.5
scale_step_size: 0.25
type: ResizeStepScaling
- crop_size:
- 512
- 512
type: RandomPaddingCrop
- type: RandomHorizontalFlip
- brightness_range: 0.5
contrast_range: 0.5
saturation_range: 0.5
type: RandomDistort
- type: Normalize
type: Dataset
val_dataset:
dataset_root: data/optic_disc_seg
mode: val
num_classes: 2
transforms:
- type: Normalize
type: Dataset
val_path: data/optic_disc_seg/val_list.txt
optimizer:
momentum: 0.9
type: SGD
weight_decay: 4.0e-05
lr_scheduler:
end_lr: 0
learning_rate: 0.01
power: 0.9
type: PolynomialDecay
loss:
coef:
- 1
- 1
- 1
types:
- type: CrossEntropyLoss
- type: CrossEntropyLoss
- type: CrossEntropyLoss
model:
backbone:
pretrained: https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
type: STDC2
num_classes: 2
type: PPLiteSeg
------------------------------------------------

2024-04-02 18:25:03 [INFO] Set device: cpu
2024-04-02 18:25:03 [INFO] Use the following config to build model
model:
backbone:
pretrained: https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
type: STDC2
num_classes: 2
type: PPLiteSeg
2024-04-02 18:25:03 [INFO] Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
2024-04-02 18:25:04 [INFO] There are 265/265 variables loaded into STDCNet.
2024-04-02 18:25:04 [INFO] Use the following config to build train_dataset
train_dataset:
dataset_root: paddle_pr
mode: train
num_classes: 2
train_path: paddle_pr/train.txt
transforms:
- max_scale_factor: 2.0
min_scale_factor: 0.5
scale_step_size: 0.25
type: ResizeStepScaling
- crop_size:
- 512
- 512
type: RandomPaddingCrop
- type: RandomHorizontalFlip
- brightness_range: 0.5
contrast_range: 0.5
saturation_range: 0.5
type: RandomDistort
- type: Normalize
type: Dataset
2024-04-02 18:25:04 [INFO] Use the following config to build val_dataset
val_dataset:
dataset_root: data/optic_disc_seg
mode: val
num_classes: 2
transforms:
- type: Normalize
type: Dataset
val_path: data/optic_disc_seg/val_list.txt
2024-04-02 18:25:04 [INFO] If the type is SGD and momentum in optimizer config, the type is changed to Momentum.
2024-04-02 18:25:04 [INFO] Use the following config to build optimizer
optimizer:
momentum: 0.9
type: Momentum
weight_decay: 4.0e-05
2024-04-02 18:25:04 [INFO] Use the following config to build loss
loss:
coef:
- 1
- 1
- 1
types:
- type: CrossEntropyLoss
- type: CrossEntropyLoss
- type: CrossEntropyLoss
/Users/xuanleung/workspace/PaddleSeg/venv/lib/python3.12/site-packages/paddle/nn/layer/norm.py:824: UserWarning: When training, we now always track global mean and variance.
warnings.warn(
2024-04-02 18:26:25 [INFO] [TRAIN] epoch: 10, iter: 10/1000, loss: 0.8485, lr: 0.009919, batch_cost: 8.0518, reader_cost: 1.45770, ips: 0.4968 samples/sec | ETA 02:12:51
.......
2024-04-02 19:26:58 [INFO] [TRAIN] epoch: 500, iter: 500/1000, loss: 0.1028, lr: 0.005369, batch_cost: 7.3678, reader_cost: 1.30898, ips: 0.5429 samples/sec | ETA 01:01:23
2024-04-02 19:26:58 [INFO] Start evaluating (total_samples: 76, total_iters: 76)...
76/76 [==============================] - 39s 510ms/step - batch_cost: 0.5100 - reader cost: 8.5753e-04
2024-04-02 19:27:37 [INFO] [EVAL] #Images: 76 mIoU: 0.4908 Acc: 0.9816 Kappa: 0.0000 Dice: 0.4954
2024-04-02 19:27:37 [INFO] [EVAL] Class IoU:
[0.9816 0. ]
2024-04-02 19:27:37 [INFO] [EVAL] Class Precision:
[0.9816 0. ]
2024-04-02 19:27:37 [INFO] [EVAL] Class Recall:
[1. 0.]
2024-04-02 19:27:38 [INFO] [EVAL] The model with the best validation mIoU (0.4908) was saved at iter 500.
2024-04-02 19:28:51 [INFO] [TRAIN] epoch: 510, iter: 510/1000, loss: 0.2417, lr: 0.005272, batch_cost: 7.3453, reader_cost: 1.31461, ips: 0.5446 samples/sec | ETA 00:59:59
.....
2024-04-02 20:29:09 [INFO] [TRAIN] epoch: 1000, iter: 1000/1000, loss: 0.1945, lr: 0.000020, batch_cost: 7.4149, reader_cost: 1.36141, ips: 0.5395 samples/sec | ETA 00:00:00
2024-04-02 20:29:09 [INFO] Start evaluating (total_samples: 76, total_iters: 76)...
76/76 [==============================] - 38s 503ms/step - batch_cost: 0.5028 - reader cost: 7.8113e-04
2024-04-02 20:29:47 [INFO] [EVAL] #Images: 76 mIoU: 0.4908 Acc: 0.9816 Kappa: 0.0000 Dice: 0.4954
2024-04-02 20:29:47 [INFO] [EVAL] Class IoU:
[0.9816 0. ]
2024-04-02 20:29:47 [INFO] [EVAL] Class Precision:
[0.9816 0. ]
2024-04-02 20:29:47 [INFO] [EVAL] Class Recall:
[1. 0.]
2024-04-02 20:29:48 [INFO] [EVAL] The model with the best validation mIoU (0.4908) was saved at iter 500.
<class 'paddle.nn.layer.conv.Conv2D'>'s flops has been counted
<class 'paddle.nn.layer.norm.BatchNorm2D'>'s flops has been counted
<class 'paddle.nn.layer.activation.ReLU'>'s flops has been counted
<class 'paddle.nn.layer.pooling.AvgPool2D'>'s flops has been counted
<class 'paddle.nn.layer.pooling.AdaptiveAvgPool2D'>'s flops has been counted
Total Flops: 9643807616 Total Params: 12251410

Process finished with exit code 0

Generated files:

output
├── iter_500 # checkpoint saved at iteration 500
│   ├── model.pdparams # model weights
│   └── model.pdopt # optimizer state from training
├── iter_1000 # checkpoint saved at iteration 1000
│   ├── model.pdparams # model weights
│   └── model.pdopt # optimizer state from training
└── best_model # weights of the model with the best accuracy
    └── model.pdparams
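
If you want to confirm what a checkpoint contains, the hedged sketch below loads best_model/model.pdparams with paddle.load and reports the number of tensors and the total parameter count. It assumes the file is a plain state dict of named tensors, which is how the training script stores model.pdparams.

# Inspect a saved checkpoint (assumes model.pdparams is a state dict of named tensors).
import numpy as np
import paddle

state_dict = paddle.load("output/best_model/model.pdparams")
total = sum(int(np.prod(t.shape)) for t in state_dict.values())
print(f"{len(state_dict)} tensors, {total:,} parameters")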

Model Evaluation

With the paddle_pr dataset directory in the project root, open configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml and change these two path settings:

val_dataset:
  dataset_root: paddle_pr # dataset root path
  val_path: paddle_pr/val.txt # list file of validation samples

Add a Python run configuration in PyCharm: set Script to tools/val.py and Script parameters to --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/iter_1000/model.pdparams, then click Run in the top-right corner.

Output:

/Users/x/workspace/PaddleSeg/venv/bin/python tools/val.py --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/iter_1000/model.pdparams 
2024-04-03 11:18:24 [WARNING] Add the `num_classes` in train_dataset and val_dataset config to model config. We suggest you manually set `num_classes` in model config.
....
2024-04-03 11:18:25 [INFO] Start evaluating (total_samples: 2, total_iters: 2)...
2/2 [==============================] - 40s 20s/step - batch_cost: 20.0147 - reader cost: 0.3020
# Metric definitions
# Acc (accuracy): the fraction of pixels whose class is predicted correctly; higher is better.
# mIoU (mean intersection over union): for each class, the intersection of the predicted and ground-truth regions divided by their union, averaged over all classes.
# Kappa coefficient: a consistency measure computed from the confusion matrix, ranging from -1 to 1 and usually positive. Kappa = (P0 - Pe) / (1 - Pe), where P0 is the classifier's accuracy and Pe is the accuracy of a random classifier. Higher is better.
2024-04-03 11:19:05 [INFO] [EVAL] #Images: 2 mIoU: 0.9224 Acc: 0.9958 Kappa: 0.9162 Dice: 0.9581
2024-04-03 11:19:05 [INFO] [EVAL] Class IoU:
[0.9957 0.8491]
2024-04-03 11:19:05 [INFO] [EVAL] Class Precision:
[0.9972 0.94 ]
2024-04-03 11:19:05 [INFO] [EVAL] Class Recall:
[0.9984 0.8977]
Process finished with exit code 0
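
To make the definitions above concrete, the sketch below computes the same metrics from a confusion matrix with NumPy; the 2x2 matrix is an invented example, not data from the run above.

# Compute Acc, IoU/mIoU, Dice and Kappa from a confusion matrix (rows = ground truth, cols = prediction).
import numpy as np

cm = np.array([[950, 10],    # hypothetical 2-class confusion matrix
               [ 15, 25]], dtype=float)

total = cm.sum()
tp = np.diag(cm)
acc = tp.sum() / total
iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
dice = 2 * tp / (cm.sum(axis=0) + cm.sum(axis=1))
p0 = acc
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
kappa = (p0 - pe) / (1 - pe)

print(f"Acc {acc:.4f}  mIoU {iou.mean():.4f}  Kappa {kappa:.4f}  Dice {dice.mean():.4f}")
print("Class IoU:", np.round(iou, 4))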

Model Prediction

Add a Python run configuration in PyCharm: set Script to tools/predict.py and Script parameters to --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/iter_1000/model.pdparams --image_path paddle_pr/images/image1.jpg --save_dir output/result, then click Run in the top-right corner.

Output:

/Users/x/workspace/PaddleSeg/venv/bin/python tools/predict.py --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/iter_1000/model.pdparams --image_path paddle_pr/images/image1.jpg --save_dir output/result 
2024-04-03 11:45:21 [WARNING] Add the `num_classes` in train_dataset and val_dataset config to model config. We suggest you manually set `num_classes` in model config.
2024-04-03 11:45:23 [INFO] Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
2024-04-03 11:45:23 [INFO] There are 265/265 variables loaded into STDCNet.
2024-04-03 11:45:23 [INFO] The number of images: 1
2024-04-03 11:45:23 [INFO] Loading pretrained model from output/iter_1000/model.pdparams
2024-04-03 11:45:23 [INFO] There are 370/370 variables loaded into PPLiteSeg.
2024-04-03 11:45:23 [INFO] Start to predict...
1/1 [==============================] - 21s 21s/step
2024-04-03 11:45:44 [INFO] Predicted images are saved in output/result/added_prediction and output/result/pseudo_color_prediction .

Process finished with exit code 0

Generated files:

output/result
|
|--added_prediction # prediction overlaid on the original image
| |--image1.jpg
| |--image2.jpg
| |--...
|
|--pseudo_color_prediction # predicted mask
| |--image1.jpg
| |--image2.jpg
| |--...
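
If you want to build your own overlay instead of relying on added_prediction, the hedged sketch below blends the pseudo-color mask onto the original image with Pillow. It is a custom visualization, not PaddleSeg's own, and assumes the mask and the image cover the same area; the file names come from the listing above.

# Blend a pseudo-color prediction onto its source image.
from PIL import Image

image = Image.open("paddle_pr/images/image1.jpg").convert("RGB")
mask = Image.open("output/result/pseudo_color_prediction/image1.jpg").convert("RGB")
mask = mask.resize(image.size)                  # defensive resize in case sizes differ

overlay = Image.blend(image, mask, alpha=0.5)   # 50% image, 50% mask
overlay.save("output/result/custom_overlay_image1.jpg")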

Exporting the Model

Add a Python run configuration in PyCharm: set Script to tools/export.py and Script parameters to --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/best_model/model.pdparams --save_dir output/inference_model, then click Run in the top-right corner.

Output:

/Users/x/workspace/PaddleSeg/venv/bin/python tools/export.py --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --model_path output/best_model/model.pdparams --save_dir output/inference_model 
2024-04-03 12:07:23 [WARNING] Add the `num_classes` in train_dataset and val_dataset config to model config. We suggest you manually set `num_classes` in model config.
2024-04-03 12:07:23 [INFO]
2024-04-03 12:07:24 [INFO] Loaded trained params successfully.
/Users/xuanleung/workspace/PaddleSeg/venv/lib/python3.12/site-packages/paddle/jit/api.py:310: UserWarning: full_graph=False is not supported in Python 3.12+. Set full_graph=True automatically
warnings.warn(
I0403 12:07:27.211655 1341450176 program_interpreter.cc:212] New Executor is Running.
2024-04-03 12:07:27 [INFO]
---------------Deploy Information---------------
Deploy:
input_shape:
- -1
- 3
- -1
- -1
model: model.pdmodel
output_dtype: int32
output_op: argmax
params: model.pdiparams
transforms:
- type: Normalize

2024-04-03 12:07:27 [INFO] The inference model is saved in output/inference_model
Process finished with exit code 0

Generated files:

output/inference_model
├── deploy.yaml # deployment config, mainly describing preprocessing and other deployment information
├── model.pdmodel # topology (graph) file of the inference model
├── model.pdiparams # weight file of the inference model
└── model.pdiparams.info # extra parameter metadata, usually safe to ignore
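
The sketch below reads deploy.yaml with PyYAML and prints the fields shown in the export log (model/params file names, output_op and the preprocessing transforms); it assumes the Deploy top-level key seen in the log above.

# Peek at the exported deployment config (structure as printed in the export log above).
import yaml

with open("output/inference_model/deploy.yaml") as f:
    deploy = yaml.safe_load(f)["Deploy"]

print("model file: ", deploy["model"])
print("params file:", deploy["params"])
print("output op:  ", deploy.get("output_op"))
print("transforms: ", [t["type"] for t in deploy["transforms"]])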

Deploying the Model (Paddle Inference, Python)

Add a Python run configuration in PyCharm: set Script to deploy/python/infer.py and Script parameters to --config output/inference_model/deploy.yaml --image_path /Users/xuanleung/Downloads/test.jpg --device cpu, then click Run in the top-right corner.

Generated file: output/test.png
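
For reference, here is a stripped-down sketch in the spirit of what deploy/python/infer.py does, using the Paddle Inference Python API: it builds a predictor from model.pdmodel/model.pdiparams, feeds one normalized NCHW image, and reads back the per-pixel label map (the export above sets output_op: argmax). The mean/std of 0.5 are an assumption mirroring PaddleSeg's default Normalize transform; check deploy.yaml for the values your export actually uses.

# Minimal Paddle Inference run for the exported PP-LiteSeg model (CPU).
import cv2
import numpy as np
from paddle.inference import Config, create_predictor

config = Config("output/inference_model/model.pdmodel",
                "output/inference_model/model.pdiparams")
predictor = create_predictor(config)

# Preprocess: BGR -> RGB, scale to [0, 1], normalize (assumed mean/std of 0.5), HWC -> NCHW.
img = cv2.cvtColor(cv2.imread("/Users/xuanleung/Downloads/test.jpg"), cv2.COLOR_BGR2RGB)
img = (img.astype("float32") / 255.0 - 0.5) / 0.5
img = np.ascontiguousarray(img.transpose(2, 0, 1)[None, ...])

input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
input_handle.copy_from_cpu(img)
predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
label_map = output_handle.copy_to_cpu()[0]  # per-pixel class ids (the export used output_op: argmax)
print("label map shape:", label_map.shape, "classes present:", np.unique(label_map))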

Installing on macOS

# Check the environment
# pip 20.2.2 or newer is required
➜ ~ python3 -m pip --version
pip 24.0 from /usr/local/lib/python3.12/site-packages/pip (python 3.12)
# Confirm the Python version is one of 3.8/3.9/3.10/3.11/3.12
➜ ~ python3 --version
Python 3.12.2
# Confirm that Python and pip are 64-bit and the processor architecture is x86_64
➜ ~ python3 -c "import platform;print(platform.architecture()[0]);print(platform.machine())"
64bit
x86_64
# Check AVX support; if the output contains avx, it is supported
➜ ~ sysctl machdep.cpu.features | grep -i avx
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
➜ ~ sysctl machdep.cpu.leaf7_features | grep -i avx
machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 HLE AVX2 SMEP BMI2 ERMS INVPCID RTM FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS STIBP L1DF ACAPMSR SSBD

# Install unrar. The official command is brew install unrar, but that formula is no longer available; install rar instead
➜ ~ brew install rar
==> Downloading https://www.rarlab.com/rar/rarmacos-x64-700.tar.gz
######################################################################### 100.0%
==> Installing Cask rar
==> Moving Generic Artifact 'default.sfx' to '/usr/local/lib/default.sfx'
==> Moving Generic Artifact 'rarfiles.lst' to '/usr/local/etc/rarfiles.lst'
==> Linking Binary 'rar' to '/usr/local/bin/rar'
==> Linking Binary 'unrar' to '/usr/local/bin/unrar'
🍺 rar was successfully installed!


# Create a directory for the virtual environment
mkdir paddleseg_python3
# Create the virtual environment with Python 3
➜ paddleseg_python3 python3 -m venv .
# Listing the current directory now shows a few folders and a pyvenv.cfg file:
➜ paddleseg_python3 ls
bin include lib pyvenv.cfg
# Enter the bin directory
➜ paddleseg_python3 cd bin
# Activate the venv
➜ bin source activate
# The directory contains python3, pip3 and other executables, which are symlinks into the system Python installation.
(paddleseg_python3) ➜ bin ls
Activate.ps1 activate.csh pip pip3.12 python3
activate activate.fish pip3 python python3.12
# Install paddlepaddle as usual
(paddleseg_python3) ➜ bin python3 -m pip install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple
# Install dependencies
(paddleseg_python3) ➜ bin pip install -U pip setuptools
# Verify the installation
(paddleseg_python3) ➜ bin python3
Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import paddle
>>> paddle.utils.run_check()
Running verify PaddlePaddle program ...
I0326 15:14:46.340384 1206859520 program_interpreter.cc:212] New Executor is Running.
I0326 15:14:46.356616 1206859520 interpreter_util.cc:624] Standalone Executor is Used.
PaddlePaddle works well on 1 CPU.
PaddlePaddle is installed successfully! Let's start deep learning with PaddlePaddle now.
>>> print(paddle.__version__)
2.6.0

# Installing PaddleSeg per https://github.com/PaddlePaddle/PaddleSeg/blob/release/2.9/docs/install_cn.md#22-%E5%AE%89%E8%A3%85paddleseg failed

## Roll back everything installed in this venv
(paddleseg_python3) ➜ PaddleSeg git:(release/2.6) deactivate
➜ PaddleSeg git:(release/2.6)

Installing with Docker on macOS

mkdir paddle
cd paddle
# -v $PWD:/home/paddle mounts the current directory ($PWD expands to its absolute path) at /home/paddle inside the container
docker run -d -p 80:80 --env USER_PASSWD="123456" --name paddle -it -v $PWD:/home/paddle paddlepaddle/paddle:2.6.0-jupyter
# Enter the container
docker exec -it paddle /bin/bash
# Open 127.0.0.1 in a browser and log in with username/password jovyan/123456 to reach the Jupyter environment

Common Issues

  1. Running python3 -m pip install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple fails with the following error:

    ➜  ~ python3 -m pip install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple
    error: externally-managed-environment

    × This environment is externally managed
    ╰─> To install Python packages system-wide, try brew install
    xyz, where xyz is the package you are trying to
    install.

    If you wish to install a non-brew-packaged Python package,
    create a virtual environment using python3 -m venv path/to/venv.
    Then use path/to/venv/bin/python and path/to/venv/bin/pip.

    If you wish to install a non-brew packaged Python application,
    it may be easiest to use pipx install xyz, which will manage a
    virtual environment for you. Make sure you have pipx installed.

    note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
    hint: See PEP 668 for the detailed specification.

    Fix: see the post "pip(3) install,完美解决 externally-managed-environment"; the options tried are listed below.

    Option 1: add the --break-system-packages flag. This installs directly into the system Python and may affect the system environment.

    python3 -m pip install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple --break-system-packages

    Option 2: pipx. After installing this way, paddle could not be imported; the initial conclusion is that the pipx-managed virtual environment is not entered when running Python.

    # pipx creates a separate virtual environment for each installed application, avoiding dependency conflicts between applications.
    ➜ ~ brew install pipx
    # After installing pipx for the first time, run pipx ensurepath; it checks and, if necessary, updates your PATH so that programs installed by pipx can be run. This normally only needs to be done once.
    ➜ ~ pipx ensurepath
    # Replace the official install command with pipx
    # 2.9.0 cannot be found
    ➜ ~ pipx install paddlepaddle==2.9.0 -i https://mirror.baidu.com/pypi/simple
    Fatal error from pip prevented installation. Full pip output in file:
    /Users/xuanleung/Library/Logs/pipx/cmd_2024-03-25_15.41.49_pip_errors.log

    Some possibly relevant errors from pip install:
    ERROR: Could not find a version that satisfies the requirement paddlepaddle==2.9.0 (from versions: 2.6.0)
    ERROR: No matching distribution found for paddlepaddle==2.9.0

    Error installing paddlepaddle from spec 'paddlepaddle==2.9.0'.
    # Install 2.6.0 instead
    ➜ ~ pipx install paddlepaddle==2.6.0 -i https://mirror.baidu.com/pypi/simple
    installed package paddlepaddle 2.6.0, installed using Python 3.12.2
    These apps are now globally available
    - fleetrun
    - paddle
    ⚠️ Note: '/Users/xuanleung/.local/bin' is not on your PATH environment
    variable. These apps will not be globally accessible until your PATH is
    updated. Run `pipx ensurepath` to automatically add it, or manually modify
    your PATH in your shell's config file (i.e. ~/.bashrc).
    done! ✨ 🌟 ✨
    # Run ensurepath again to update the environment variables
    ➜ ~ pipx ensurepath

    [Adopted] Option 3: use venv.

  2. Running import paddle raises ImportError: "No module named 'setuptools'" (Python 3)

    >>> import paddle
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/__init__.py", line 28, in <module>
    from .base import core # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/base/__init__.py", line 77, in <module>
    from . import dataset
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/base/dataset.py", line 20, in <module>
    from ..utils import deprecated
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/utils/__init__.py", line 16, in <module>
    from . import ( # noqa: F401
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/utils/cpp_extension/__init__.py", line 15, in <module>
    from .cpp_extension import (
    File "/Users/xx/workspace/paddleseg_python3/lib/python3.12/site-packages/paddle/utils/cpp_extension/cpp_extension.py", line 21, in <module>
    import setuptools
    ModuleNotFoundError: No module named 'setuptools'
    >>> paddle.utils.run_check()
    Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    NameError: name 'paddle' is not defined

    Fix: run pip install -U pip setuptools.

  3. Running:

    python tools/train.py \
    --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml \
    --do_eval \
    --use_vdl \
    --save_interval 500 \
    --save_dir output

    fails with the following error:

    (venv) ➜  PaddleSeg git:(release/2.9) ✗ python tools/train.py \
    --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml \
    --save_interval 500 \
    --do_eval \
    --use_vdl \
    --save_dir output
    Traceback (most recent call last):
    File "/Users/x/workspace/PaddleSeg/tools/train.py", line 213, in <module>
    main(args)
    File "/Users/x/workspace/PaddleSeg/tools/train.py", line 145, in main
    cfg = Config(
    ^^^^^^^
    TypeError: Config.__init__() got an unexpected keyword argument 'to_static_training'

    Cause: a version mismatch — the Config class used by tools/train.py on release/2.9 does not accept the to_static_training argument it is passed.
    Fix: switch to the release/2.8.1 branch and run the command again.

  4. Running:

    python tools/train.py \
    --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml \
    --do_eval \
    --use_vdl \
    --save_interval 500 \
    --save_dir output

    fails with the following error:

      - type: CrossEntropyLoss
    - type: CrossEntropyLoss
    - type: CrossEntropyLoss
    /Users/xuanleung/workspace/PaddleSeg/venv/lib/python3.12/site-packages/paddle/nn/layer/norm.py:824: UserWarning: When training, we now always track global mean and variance.
    warnings.warn(
    Traceback (most recent call last):
    File "/Users/x/workspace/PaddleSeg/tools/train.py", line 195, in <module>
    main(args)
    File "/Users/x/workspace/PaddleSeg/tools/train.py", line 170, in main
    train(
    File "/Users/x/workspace/PaddleSeg/venv/lib/python3.12/site-packages/paddleseg/core/train.py", line 273, in train
    avg_loss_list = [l[0] / log_iters for l in avg_loss_list]
    ~^^^
    IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed

    Fix: add a Python run configuration in PyCharm with Script set to tools/train.py and Script parameters set to --config configs/quick_start/pp_liteseg_optic_disc_512x512_1k.yml --save_interval 500 --do_eval --use_vdl --save_dir output, then click Run in the top-right corner.