Baseline
Author: Xinyu OU (欧新宇)
Framework: Paddle 2.1
Runtime: Intel Core i7-7700K CPU @ 4.2 GHz, NVIDIA GeForce GTX 1080 Ti
The datasets used in this handout are for teaching and exchange only; do not use them commercially.
Last updated: August 23, 2021
All assignments are submitted on AIStudio; a submission must include both the source code and its run results.
In recent years, artificial intelligence has achieved great success in speech recognition, natural language processing, and image and video analysis. With governments calling for better environmental protection, garbage sorting has become a pressing problem. This competition therefore focuses on classifying garbage images: using AI techniques to detect which categories of household waste appear in a photo. Participants must build a Paddle-based algorithm or model that, given an image, predicts its garbage category; the model is trained on the provided data and must output the most likely class for every test image.
All training and test images come from everyday scenes. There are forty classes in total; the mapping between classes and labels is given in the dict file shipped with the training set. Each class name has the form "coarse category/fine category", where the fine category is the concrete object class annotated in the training data, e.g. disposable lunch box, fruit peel and pulp, old clothes. There are four coarse categories: recyclables, kitchen waste, hazardous waste, and other waste.
The data consist of a labeled training set and an unlabeled test set. Training images are stored in the folders 0-39 under train, where the folder name is the class label; the test set contains 400 images to classify, stored under test.
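Because every class name has the form "coarse/fine", both levels can be recovered with a single split. A minimal sketch (the garbage_dict.json file name matches the dict file read by the preprocessing code below, but the relative path and the label id '23' are just illustrative assumptions):

import json

# garbage_dict.json maps label ids to "coarse/fine" class names (path is an assumption)
with open('garbage_dict.json', 'r', encoding='utf-8') as f:
    garbage_dict = json.load(f)

label_name = garbage_dict['23']       # e.g. '可回收物/易拉罐'
coarse, fine = label_name.split('/')  # coarse category, fine category
print(coarse, fine)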
Results are submitted as a plain-text .txt file named model_result.txt, whose fields must follow the prescribed format: one predicted class label per line, in test-image order, e.g.:
35
3
2
37
10
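A minimal sketch of producing such a file, assuming predictions is a hypothetical list of predicted class ids ordered test1.jpg, test2.jpg, ..., test400.jpg:

predictions = [35, 3, 2, 37, 10]  # placeholder ids taken from the example above

with open('model_result.txt', 'w') as f:
    for label in predictions:
        f.write('{}\n'.format(label))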
##################################################################################
# Dataset preprocessing
# Author: Xinyu Ou (http://ouxinyu.cn)
# Dataset: Garbage, a garbage-classification dataset
# Description: 40 classes; the 14402 training samples are split 1:9 into a validation
#   set and a training set, and there are 400 additional test samples.
# What this script does:
# 1. Holds out 10% of the training samples as the validation set
# 2. Generates four annotated list files: train, val, trainval, test
# 3. Writes the dataset's basic information (name, sample counts, number of classes,
#    and class labels) to a JSON file
###################################################################################
import os
import json
import codecs
# Initialize the counters and the dataset-info dict
num_train = 0
num_val = 0
num_trainval = 0
num_test = 0
class_dim = 0
dataset_info = {
    'dataset_name': '',
    'num_trainval': -1,
    'num_train': -1,
    'num_val': -1,
    'num_test': -1,
    'class_dim': -1,
    'label_dict': {}
}
# When running locally, set the dataset name and its absolute path; the name must match the folder name
dataset_name = 'Garbage'
dataset_path = 'D:\\Workspace\\ExpDatasets\\'
dataset_root_path = os.path.join(dataset_path, dataset_name)
excluded_folder = ['.DS_Store', '.ipynb_checkpoints'] # folders to skip
# Paths of the generated list files
trainval_list = os.path.join(dataset_root_path, 'trainval.txt')
train_list = os.path.join(dataset_root_path, 'train.txt')
val_list = os.path.join(dataset_root_path, 'val.txt')
test_list = os.path.join(dataset_root_path, 'test.txt')
dataset_info_list = os.path.join(dataset_root_path, 'dataset_info.json')
garbage_dict_path = os.path.join(dataset_root_path, 'garbage_dict.json')
# Image path prefixes, i.e. the subfolders under the dataset root
train_prefix = val_prefix = trainval_prefix = 'train'
test_prefix = 'test'
# Delete any existing list files. The test list is written in a single pass ('w' mode overwrites), so it needs no manual cleanup.
if os.path.exists(train_list):
    os.remove(train_list)
if os.path.exists(val_list):
    os.remove(val_list)
if os.path.exists(trainval_list):
    os.remove(trainval_list)
if os.path.exists(dataset_info_list):
    os.remove(dataset_info_list)
# Generate the test list
# Note: the test predictions are uploaded to AIStudio for scoring, so the test list must be sorted strictly by the number in the file name, in ascending order.
testImg_list = os.listdir(os.path.join(dataset_root_path, test_prefix))
test_numbers = []
for testImg in testImg_list:
    if testImg not in excluded_folder:
        test_name = testImg.split('.') # split the file name and extension
        test_numbers.append(int(test_name[0][4:])) # numeric part of names like 'test12.jpg'
test_numbers.sort()
num_test = len(test_numbers)
with codecs.open(test_list, 'w', 'utf-8') as f_test:
    for i in range(num_test):
        f_test.write('{}\n'.format(os.path.join(test_prefix, 'test' + str(test_numbers[i]) + '.jpg')))
# Generate the train, val, and trainval lists
trainImg_classID_list = os.listdir(os.path.join(dataset_root_path, train_prefix))
for trainImg_classID in trainImg_classID_list:
    with codecs.open(train_list, 'a', 'utf-8') as f_train:
        with codecs.open(val_list, 'a', 'utf-8') as f_val:
            with codecs.open(trainval_list, 'a', 'utf-8') as f_trainval:
                if trainImg_classID not in excluded_folder:
                    class_dim += 1
                    trainImgs = os.listdir(os.path.join(dataset_root_path, train_prefix, trainImg_classID))
                    count = 0
                    for trainImg in trainImgs:
                        if trainImg not in excluded_folder:
                            if count % 10 == 0: # hold out roughly 10% of the samples for validation
                                f_val.write("{}\t{}\n".format(os.path.join(val_prefix, trainImg_classID, trainImg), trainImg_classID))
                                f_trainval.write("{}\t{}\n".format(os.path.join(trainval_prefix, trainImg_classID, trainImg), trainImg_classID))
                                num_val += 1
                                num_trainval += 1
                            else:
                                f_train.write("{}\t{}\n".format(os.path.join(train_prefix, trainImg_classID, trainImg), trainImg_classID))
                                f_trainval.write("{}\t{}\n".format(os.path.join(trainval_prefix, trainImg_classID, trainImg), trainImg_classID))
                                num_train += 1
                                num_trainval += 1
                            count += 1
# Load the label dictionary into dataset_info
with open(garbage_dict_path, 'r', encoding='utf-8') as f_dict:
    garbage_dict = json.load(f_dict)
# Save the dataset information to a JSON file for use during training
dataset_info['dataset_name'] = dataset_name
dataset_info['num_trainval'] = num_trainval
dataset_info['num_train'] = num_train
dataset_info['num_val'] = num_val
dataset_info['num_test'] = num_test
dataset_info['class_dim'] = class_dim
dataset_info['label_dict'] = garbage_dict
# Write the dataset-info JSON and print the statistics
with codecs.open(dataset_info_list, 'w', encoding='utf-8') as f_dataset_info:
    json.dump(dataset_info, f_dataset_info, ensure_ascii=False, indent=4, separators=(',', ':')) # pretty-print the parameter dict
print("Image lists generated: {} trainval samples, {} train samples, {} val samples, {} test samples, {} in total.".format(num_trainval, num_train, num_val, num_test, num_train + num_val + num_test))
###### Display the dataset info ##############################
if __name__ == '__main__':
    from pprint import pprint
    with open(dataset_info_list, 'r', encoding='utf-8') as f_info:
        dataset_info = json.load(f_info)
    pprint(dataset_info)
Image lists generated: 14402 trainval samples, 12944 train samples, 1458 val samples, 400 test samples, 14802 in total.
{'class_dim': 40,
 'dataset_name': 'Garbage',
 'label_dict': {'0': '其他垃圾/一次性快餐盒', '1': '其他垃圾/污损塑料', '10': '厨余垃圾/茶叶渣',
                '11': '厨余垃圾/菜叶菜根', '12': '厨余垃圾/蛋壳', '13': '厨余垃圾/鱼骨',
                '14': '可回收物/充电宝', '15': '可回收物/包', '16': '可回收物/化妆品瓶',
                '17': '可回收物/塑料玩具', '18': '可回收物/塑料碗盆', '19': '可回收物/塑料衣架',
                '2': '其他垃圾/烟蒂', '20': '可回收物/快递纸袋', '21': '可回收物/插头电线',
                '22': '可回收物/旧衣服', '23': '可回收物/易拉罐', '24': '可回收物/枕头',
                '25': '可回收物/毛绒玩具', '26': '可回收物/洗发水瓶', '27': '可回收物/玻璃杯',
                '28': '可回收物/皮鞋', '29': '可回收物/砧板', '3': '其他垃圾/牙签',
                '30': '可回收物/纸板箱', '31': '可回收物/调料瓶', '32': '可回收物/酒瓶',
                '33': '可回收物/金属食品罐', '34': '可回收物/锅', '35': '可回收物/食用油桶',
                '36': '可回收物/饮料瓶', '37': '有害垃圾/干电池', '38': '有害垃圾/软膏',
                '39': '有害垃圾/过期药物', '4': '其他垃圾/破碎花盆及碟碗', '5': '其他垃圾/竹筷',
                '6': '厨余垃圾/剩饭剩菜', '7': '厨余垃圾/大骨头', '8': '厨余垃圾/水果果皮',
                '9': '厨余垃圾/水果果肉'},
 'num_test': 400,
 'num_train': 12944,
 'num_trainval': 14402,
 'num_val': 1458}
################# Imports ####################################################
import os
import json
import codecs
import numpy as np
import time # used to measure training time
import paddle
import matplotlib.pyplot as plt # plotting library
from pprint import pprint
################ Global configuration ########################################
#### 1. Training hyperparameters
train_parameters = {
    'course_name': 'DeepLearning',
    'project_name': 'Comp01GarbageClassification',
    'dataset_name': 'Garbage',
    'architecture': 'Resnet50',
    'training_data': 'trainval',
    'postfix': '',
    'starting_time': time.strftime("%Y%m%d%H%M", time.localtime()), # global start time
    'input_size': [3, 227, 227], # input sample shape
    'mean_value': [0.485, 0.456, 0.406], # ImageNet channel means
    'std_value': [0.229, 0.224, 0.225], # ImageNet channel standard deviations
    'num_trainval': -1,
    'num_train': -1,
    'num_val': -1,
    'num_test': -1,
    'class_dim': -1,
    'label_dict': {},
    'total_epoch': 10, # total number of epochs; increase once the code is debugged
    'batch_size': 64, # batch size, used for both the training and the test loaders
    'log_interval': 10, # log every this many batches during training
    'eval_interval': 1, # evaluate every this many epochs
    'checkpointed': False, # whether to save a checkpoint model at every evaluation
    'checkpoint_train': False, # resume from the last saved checkpoint; takes priority over the pretrained model
    'checkpoint_model':'Garbage_Mobilenetv2', # model parameters to load when resuming
    'checkpoint_time': '202102182058', # timestamp of the checkpoint to resume from
    'pretrained': True, # whether to start from a pretrained model
    'pretrained_model':'API', # pretrained model source, API | Butterflies_AlexNet_final
    'dataset_root_path': 'D:\\Workspace\\ExpDatasets\\',
    'result_root_path': 'D:\\Workspace\\ExpResults\\',
    'deployment_root_path': 'D:\\Workspace\\ExpDeployments\\',
    'useGPU': True, # True | False
    'learning_strategy': { # learning-rate and optimizer settings
        'optimizer_strategy': 'Momentum', # optimizer: Momentum, RMS, SGD, Adam
        'learning_rate_strategy': 'CosineAnnealingDecay', # LR schedule: fixed, PiecewiseDecay, CosineAnnealingDecay, ExponentialDecay, PolynomialDecay
        'learning_rate': 0.001, # base (fixed) learning rate
        'momentum': 0.9, # momentum
        'Piecewise_boundaries': [60, 80, 90], # PiecewiseDecay: epoch boundaries at which the LR changes
        'Piecewise_values': [0.01, 0.001, 0.0001, 0.00001], # PiecewiseDecay: LR value for each segment
        'Exponential_gamma': 0.9, # ExponentialDecay: decay factor
        'Polynomial_decay_steps': 10, # PolynomialDecay: decay period, in epochs
        'verbose': False
    },
    'augmentation_strategy': {
        'withAugmentation': True, # whether to apply data augmentation
        'augmentation_prob': 0.5, # probability of applying augmentation
        'rotate_angle': 15, # maximum random rotation angle
        'Hflip_prob': 0.5, # probability of a random horizontal flip
        'brightness': 0.4,
        'contrast': 0.4,
        'saturation': 0.4,
        'hue': 0.4,
    },
}
#### 2. Shorthand aliases for the parameters
args = train_parameters
argsAS = args['augmentation_strategy']
argsLS = train_parameters['learning_strategy']
if not args['pretrained']:
    model_name = args['dataset_name'] + '_' + args['architecture'] + '_withoutPretrained'
else:
    model_name = args['dataset_name'] + '_' + args['architecture']
if args['training_data'] == 'trainval':
    model_name = model_name + '_trainval'
if args['postfix'] != '':
    model_name = model_name + '_' + args['postfix']
#### 3. Select the compute device [GPU|CPU]
# useGPU=False runs on the CPU, useGPU=True on the GPU
def init_device(useGPU=args['useGPU']):
    paddle.set_device('gpu:0' if useGPU else 'cpu')
init_device()
#### 4. Define the paths: models, training artifacts, logs, and figures
# 4.1 Dataset paths
dataset_root_path = os.path.join(args['dataset_root_path'], args['dataset_name'])
json_dataset_info = os.path.join(dataset_root_path, 'dataset_info.json')
# 4.2 Paths used during training
result_root_path = os.path.join(args['result_root_path'], args['project_name'], model_name)
checkpoint_models_path = os.path.join(result_root_path, 'checkpoint_models') # checkpoint models saved during training
final_figures_path = os.path.join(result_root_path, 'final_figures') # training-curve figures
final_models_path = os.path.join(result_root_path, 'final_models') # final models for deployment and inference
logs_path = os.path.join(result_root_path, 'logs') # training logs
# 4.3 Checkpoint model used when resuming training
checkpoint_model = os.path.join(checkpoint_models_path, args['checkpoint_model'])
# 4.4 Paths (files) used for validation and testing
deployment_root_path = os.path.join(args['deployment_root_path'], args['course_name'], args['project_name'], model_name)
deployment_checkpoint_model = os.path.join(deployment_root_path, 'checkpoint_models', model_name + '_final')
deployment_final_model = os.path.join(deployment_root_path, 'final_models', model_name + '_final')
deployment_final_figures_path = os.path.join(deployment_root_path, 'final_figures')
deployment_logs_path = os.path.join(deployment_root_path, 'logs')
deployment_pretrained_model = os.path.join(deployment_root_path, 'pretrained_dir', args['pretrained_model'])
# 4.5 Create the result directories
def init_result_path():
    if not os.path.exists(final_models_path):
        os.makedirs(final_models_path)
    if not os.path.exists(final_figures_path):
        os.makedirs(final_figures_path)
    if not os.path.exists(logs_path):
        os.makedirs(logs_path)
    if not os.path.exists(checkpoint_models_path):
        os.makedirs(checkpoint_models_path)
init_result_path()
#### 5. Fill the parameters from the dataset-info file
def init_train_parameters():
    with open(json_dataset_info, 'r', encoding='utf-8') as f_info:
        dataset_info = json.load(f_info)
    train_parameters['num_trainval'] = dataset_info['num_trainval']
    train_parameters['num_train'] = dataset_info['num_train']
    train_parameters['num_val'] = dataset_info['num_val']
    train_parameters['num_test'] = dataset_info['num_test']
    train_parameters['class_dim'] = dataset_info['class_dim']
    train_parameters['label_dict'] = dataset_info['label_dict']
init_train_parameters()
# Print train_parameters
if __name__ == '__main__':
    pprint(args)
{'architecture': 'Resnet50',
 'augmentation_strategy': {'Hflip_prob': 0.5, 'augmentation_prob': 0.5, 'brightness': 0.4, 'contrast': 0.4, 'hue': 0.4, 'rotate_angle': 15, 'saturation': 0.4, 'withAugmentation': True},
 'batch_size': 64,
 'checkpoint_model': 'Garbage_Mobilenetv2',
 'checkpoint_time': '202102182058',
 'checkpoint_train': False,
 'checkpointed': False,
 'class_dim': 40,
 'course_name': 'DeepLearning',
 'dataset_name': 'Garbage',
 'dataset_root_path': 'D:\\Workspace\\ExpDatasets\\',
 'deployment_root_path': 'D:\\Workspace\\ExpDeployments\\',
 'eval_interval': 1,
 'input_size': [3, 227, 227],
 'label_dict': {... the same 40-class mapping shown above ...},
 'learning_strategy': {'Exponential_gamma': 0.9, 'Piecewise_boundaries': [60, 80, 90], 'Piecewise_values': [0.01, 0.001, 0.0001, 1e-05], 'Polynomial_decay_steps': 10, 'learning_rate': 0.001, 'learning_rate_strategy': 'CosineAnnealingDecay', 'momentum': 0.9, 'optimizer_strategy': 'Momentum', 'verbose': False},
 'log_interval': 10,
 'mean_value': [0.485, 0.456, 0.406],
 'num_test': 400,
 'num_train': 12944,
 'num_trainval': 14402,
 'num_val': 1458,
 'postfix': '',
 'pretrained': True,
 'pretrained_model': 'API',
 'project_name': 'Comp01GarbageClassification',
 'result_root_path': 'D:\\Workspace\\ExpResults\\',
 'starting_time': '202108221727',
 'std_value': [0.229, 0.224, 0.225],
 'total_epoch': 10,
 'training_data': 'trainval',
 'useGPU': True}
In Paddle 2.0+ we can build a standard dataset class with paddle.io that reads samples from a list file and preprocesses them. The new paddle.vision.transforms module implements many common preprocessing steps out of the box, so we no longer have to write them by hand, which greatly simplifies the code.
The cell ends with a simple test that outputs one sample in each of the two modes: with preprocessing and without.
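As a warm-up, here is a standalone sketch (using a synthetic image rather than a dataset sample) of what such a Compose pipeline does to an HWC uint8 array; the same Resize/ToTensor/Normalize chain appears in the val/test branch of the dataset class below:

import numpy as np
import paddle.vision.transforms as T

fake_img = np.random.randint(0, 256, size=(375, 500, 3), dtype='uint8') # HWC, like cv2.imread output
pipeline = T.Compose([
    T.Resize((227, 227)),                                               # resize in HWC layout
    T.ToTensor(),                                                       # transpose to CHW float32 in [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # per-channel standardization
])
print(pipeline(fake_img).shape)                                         # [3, 227, 227]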
import os
import sys
import cv2
import numpy as np
import paddle
import paddle.vision.transforms as T
from paddle.io import DataLoader
paddle.vision.set_image_backend('cv2')
input_size = (args['input_size'][1], args['input_size'][2])
# 1. Dataset definition
class Dataset(paddle.io.Dataset):
    def __init__(self, dataset_root_path, mode='test', withAugmentation=argsAS['withAugmentation']):
        assert mode in ['train', 'val', 'test', 'trainval']
        self.data = []
        self.withAugmentation = withAugmentation
        with open(os.path.join(dataset_root_path, mode + '.txt')) as f:
            for line in f.readlines():
                info = line.strip().split('\t')
                image_path = os.path.join(dataset_root_path, info[0].strip())
                if len(info) == 2: # labeled sample
                    self.data.append([image_path, info[1].strip()])
                elif len(info) == 1: # unlabeled (test) sample; use -1 as a dummy label
                    self.data.append([image_path, -1])
        # Note: prob is drawn once per Dataset instance, so a given instance either
        # augments for the whole run or never; the draw is not repeated per sample.
        prob = np.random.random()
        if mode in ['train', 'trainval'] and prob >= argsAS['augmentation_prob']:
            self.transforms = T.Compose([
                T.RandomResizedCrop(input_size),
                T.RandomHorizontalFlip(argsAS['Hflip_prob']),
                T.RandomRotation(argsAS['rotate_angle']),
                T.ColorJitter(brightness=argsAS['brightness'], contrast=argsAS['contrast'], saturation=argsAS['saturation'], hue=argsAS['hue']),
                T.ToTensor(),
                T.Normalize(mean=args['mean_value'], std=args['std_value'])
            ])
        elif mode in ['val', 'test'] or prob < argsAS['augmentation_prob']:
            self.transforms = T.Compose([
                T.Resize(input_size),
                T.ToTensor(),
                T.Normalize(mean=args['mean_value'], std=args['std_value'])
            ])
    # Fetch a single sample by index
    def __getitem__(self, index):
        image_path, label = self.data[index]
        image = cv2.imread(image_path, 1) # flag 1 forces color mode (0 would be grayscale); note that cv2 returns BGR channel order
        if self.withAugmentation:
            image = self.transforms(image)
        return image, np.array(label, dtype='int64')
    # Total number of samples
    def __len__(self):
        return len(self.data)
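Because the augmentation decision above is made once in __init__, an alternative worth knowing is to choose the pipeline per sample inside __getitem__. A hypothetical, self-contained sketch (the class name GarbagePerSampleAug and its samples argument are illustrative, not part of the baseline):

import cv2
import numpy as np
import paddle
import paddle.vision.transforms as T

class GarbagePerSampleAug(paddle.io.Dataset):
    def __init__(self, samples, aug_prob=0.5, size=(227, 227)):
        # samples: list of (image_path, label) pairs, e.g. parsed from train.txt
        self.samples = samples
        self.aug_prob = aug_prob
        stats = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        self.aug = T.Compose([T.RandomResizedCrop(size), T.RandomHorizontalFlip(0.5),
                              T.ToTensor(), T.Normalize(**stats)])
        self.base = T.Compose([T.Resize(size), T.ToTensor(), T.Normalize(**stats)])
    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = cv2.imread(path, 1)
        # the augmentation decision is made independently for every sample
        tf = self.aug if np.random.random() < self.aug_prob else self.base
        return tf(image), np.array(label, dtype='int64')
    def __len__(self):
        return len(self.samples)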
###############################################################
# Test the dataset class: print the shapes of one preprocessed and one raw sample
if __name__ == "__main__":
    import random
    # 1. Load the data
    dataset_val = Dataset(dataset_root_path, mode='val')
    i = random.randrange(0, len(dataset_val))
    img1 = dataset_val[i][0]
    print('Shape of a validation sample (with preprocessing): {}'.format(img1.shape))
    dataset_test = Dataset(dataset_root_path, mode='test', withAugmentation=False)
    j = random.randrange(0, len(dataset_test))
    img2 = dataset_test[j][0]
    print('Shape of a test sample (without preprocessing): {}'.format(img2.shape))
Shape of a validation sample (with preprocessing): [3, 227, 227]
Shape of a test sample (without preprocessing): (375, 500, 3)
With paddle.io.DataLoader, the loaded data can be split into batches, optionally shuffled, and the last incomplete batch optionally dropped.
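Before wiring up the real datasets, a tiny synthetic check (the _RangeDataset class is made up for this demo and is not used elsewhere) illustrates the batch_size/drop_last semantics:

import numpy as np
import paddle

class _RangeDataset(paddle.io.Dataset):
    # ten scalar samples: 0..9
    def __len__(self):
        return 10
    def __getitem__(self, idx):
        return np.array([idx], dtype='float32'), idx

demo_loader = paddle.io.DataLoader(_RangeDataset(), batch_size=4, shuffle=False, drop_last=True)
for i, (x, y) in enumerate(demo_loader):
    print(i, x.shape, y.numpy().tolist())

With 10 samples and batch_size=4, drop_last=True yields two full batches; drop_last=False would append a final batch of 2.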
import os
import sys
from paddle.io import DataLoader
# 1. Build the datasets
dataset_trainval = Dataset(dataset_root_path, mode='trainval')
dataset_train = Dataset(dataset_root_path, mode='train')
dataset_val = Dataset(dataset_root_path, mode='val')
dataset_test = Dataset(dataset_root_path, mode='test')
# 2. Create the data loaders
trainval_reader = DataLoader(dataset_trainval, batch_size=args['batch_size'], shuffle=True, drop_last=True)
train_reader = DataLoader(dataset_train, batch_size=args['batch_size'], shuffle=True, drop_last=True)
val_reader = DataLoader(dataset_val, batch_size=args['batch_size'], shuffle=False, drop_last=False)
test_reader = DataLoader(dataset_test, batch_size=args['batch_size'], shuffle=False, drop_last=False)
# Test one of the loaders
if __name__ == "__main__":
    for i, (image, label) in enumerate(test_reader()):
        print('Test set batch_{} image shape: {}, label shape: {}'.format(i, image.shape, label.shape))
        break
Test set batch_0 image shape: [64, 3, 227, 227], label shape: [64]
Next we define the visualization helpers used around training: training loss, training batch accuracy, validation loss, and validation accuracy. Depending on what is needed, these values can be plotted against the iteration count after training; a data point can be recorded per epoch, per batch, or every n batches or epochs.
Besides plotting automatically after training, the function below also saves the figure and its underlying data to the folder given by final_figures_path.
# Plot the training batch accuracy and the average loss
def draw_process(title, loss_label, accuracy_label, iters, losses, accuracies, figurename=None, isShow=False):
    # First y-axis: loss
    _, ax1 = plt.subplots() # plt.subplots(figsize=(14,6))
    ax1.plot(iters, losses, color='red', label=loss_label)
    ax1.set_xlabel('Iters', fontsize=20)
    ax1.set_ylabel(loss_label, fontsize=20)
    max_loss = max(losses)
    ax1.set_ylim(0, max_loss * 1.2)
    # Second y-axis: accuracy
    ax2 = ax1.twinx()
    ax2.plot(iters, accuracies, color='blue', label=accuracy_label)
    ax2.set_ylabel(accuracy_label, fontsize=20)
    max_acc = max(accuracies)
    ax2.set_ylim(0, max_acc * 1.2)
    plt.title(title, fontsize=24)
    # Legend
    handles1, labels1 = ax1.get_legend_handles_labels()
    handles2, labels2 = ax2.get_legend_handles_labels()
    plt.legend(handles1 + handles2, labels1 + labels2, loc='best')
    plt.grid()
    # Save the figure to the final_figures directory
    if figurename is not None:
        if not os.path.exists(final_figures_path):
            os.makedirs(final_figures_path)
        plt.savefig(os.path.join(final_figures_path, figurename + '.png'))
        # Save the plotted data alongside the figure
        figure_data = np.array([iters, losses, accuracies])
        np.save(os.path.join(final_figures_path, figurename + '.npy'), figure_data)
    # Show the figure
    if isShow:
        plt.show()
### Plotting test ###################################################
if __name__ == '__main__':
    root_path = deployment_final_figures_path
    try:
        train = np.load(os.path.join(root_path, 'train.npy'))
        draw_process('train', 'loss', 'accuracy', train[0], train[1], train[2], figurename=None, isShow=True)
    except Exception:
        print('Data not found; nothing to plot.')
Data not found; nothing to plot.
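Since no saved curve exists at this point, a quick smoke test with made-up numbers shows what draw_process renders:

iters = list(range(0, 100, 10))
demo_losses = [3.5 * 0.8 ** k for k in range(10)]      # a decaying dummy loss curve
demo_accs = [min(0.1 * (k + 1), 0.9) for k in range(10)] # a rising dummy accuracy curve
draw_process('Demo', 'Loss', 'Accuracy', iters, demo_losses, demo_accs, figurename=None, isShow=True)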
logging is a mature log-output library: it can print run results to the screen (just like print()) and at the same time persist them to a file in a designated folder for later study.
#################################################
# Adapted by: Xinyu Ou (http://ouxinyu.cn)
# Purpose: print log messages and save them to a log file
# Format: 2021-02-03 23:03:07,354 - INFO: [Messages]
# Usage:
#   from utils.logger import logger
#   logger.info('Good morning')
#################################################
import os
import sys
import logging
def init_log_config():
    """
    Initialize the logging configuration.
    :return: the configured logger
    """
    global logger
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)
    if not os.path.exists(logs_path):
        os.makedirs(logs_path)
    log_name = os.path.join(logs_path, model_name + '.logs')
    sh = logging.StreamHandler() # print to the console
    fh = logging.FileHandler(log_name, mode='w', encoding='utf8') # write to the log file
    fh.setLevel(logging.DEBUG)
    formatter = logging.Formatter("%(asctime)s - %(levelname)s: %(message)s")
    fh.setFormatter(formatter)
    sh.setFormatter(formatter)
    logger.addHandler(fh) # file handler
    logger.addHandler(sh) # console handler
    return logger
logger = init_log_config()
# Test
if __name__ == "__main__":
    logger.info('Logger test, model name: {}'.format(model_name))
2021-08-22 17:27:54,919 - INFO: Logger test, model name: Garbage_Resnet50_trainval
import sys
import os
import paddle
import paddle.optimizer as optimizer
def learning_rate_setting(verbose=argsLS['verbose']):
    if argsLS['learning_rate_strategy'] == 'PiecewiseDecay':
        lr = optimizer.lr.PiecewiseDecay(boundaries=argsLS['Piecewise_boundaries'], values=argsLS['Piecewise_values'], verbose=verbose)
    elif argsLS['learning_rate_strategy'] == 'CosineAnnealingDecay':
        step_each_epoch = args['num_train'] // (args['batch_size'] * 2)
        T_max = step_each_epoch * args['total_epoch']
        lr = optimizer.lr.CosineAnnealingDecay(learning_rate=argsLS['learning_rate'], T_max=T_max, verbose=verbose)
    elif argsLS['learning_rate_strategy'] == 'ExponentialDecay':
        lr = optimizer.lr.ExponentialDecay(learning_rate=argsLS['learning_rate'], gamma=argsLS['Exponential_gamma'], verbose=verbose)
    elif argsLS['learning_rate_strategy'] == 'PolynomialDecay':
        lr = optimizer.lr.PolynomialDecay(learning_rate=argsLS['learning_rate'], decay_steps=argsLS['Polynomial_decay_steps'], verbose=verbose)
    else: # 'fixed'
        lr = argsLS['learning_rate']
    return lr
def optimizer_setting(model, lr):
    if argsLS['optimizer_strategy'] == 'Momentum':
        # a stepped learning rate suits fairly large training sets
        opt = optimizer.Momentum(learning_rate=lr, momentum=argsLS['momentum'], parameters=model.parameters())
    elif argsLS['optimizer_strategy'] == 'RMS':
        # a stepped learning rate suits fairly large training sets
        opt = optimizer.RMSProp(learning_rate=lr, parameters=model.parameters())
    elif argsLS['optimizer_strategy'] == 'SGD':
        # the loss decreases relatively slowly, but the final result is usually good
        opt = optimizer.SGD(learning_rate=lr, parameters=model.parameters())
    elif argsLS['optimizer_strategy'] == 'Adam':
        # drives the loss down quickly, but tends to run out of steam late in training
        opt = optimizer.Adam(learning_rate=lr, parameters=model.parameters())
    else:
        raise ValueError('Unknown optimizer strategy: {}'.format(argsLS['optimizer_strategy']))
    return opt
# Learning-rate test
# if __name__ == '__main__':
#     print('Current learning-rate strategy: {}'.format(argsLS['learning_rate_strategy']))
#     linear = paddle.nn.Linear(10, 10)
#     lr = learning_rate_setting(verbose=True)
#     opt = optimizer_setting(linear, lr)
#     if argsLS['learning_rate_strategy'] == 'fixed':
#         print('learning rate = {}'.format(argsLS['learning_rate']))
#     else:
#         for epoch in range(20):
#             for batch_id in range(10):
#                 x = paddle.uniform([10, 10])
#                 out = linear(x)
#                 loss = paddle.mean(out)
#                 loss.backward()
#                 opt.step()
#                 opt.clear_grad()
#                 # lr.step() # if you update the learning rate each step
#             lr.step() # if you update the learning rate each epoch
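The cosine schedule can also be inspected without any training. A short standalone check (T_max = 12944 // (64 * 2) * 10 = 1010, matching the computation in learning_rate_setting above; lr_demo is just a throwaway name):

lr_demo = optimizer.lr.CosineAnnealingDecay(learning_rate=0.001, T_max=1010)
for step in range(1011):
    if step % 101 == 0: # print once per "epoch" worth of steps
        print('step {:4d}: lr = {:.6f}'.format(step, lr_demo.get_lr()))
    lr_demo.step()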
# Load dependencies
import sys
import numpy as np
import paddle
import paddle.nn.functional as F
from paddle.static import InputSpec
__all__ = ['eval']
def eval(model, data_reader, verbose=0):
    accuracies_top1 = []
    accuracies_top5 = []
    losses = []
    n_total = 0
    for batch_id, (image, label) in enumerate(data_reader):
        n_batch = len(label)
        n_total = n_total + n_batch
        label = paddle.unsqueeze(label, axis=1)
        loss, acc = model.eval_batch([image], [label])
        losses.append(loss[0])
        accuracies_top1.append(acc[0][0] * n_batch) # weight the batch accuracy by the batch size
        accuracies_top5.append(acc[0][1] * n_batch)
        if verbose == 1:
            print('\r Batch:{}/{}, acc_top1:[{:.5f}], acc_top5:[{:.5f}]'.format(batch_id + 1, len(data_reader), acc[0][0], acc[0][1]), end='')
    avg_loss = np.sum(losses) / n_total # loss records the accumulated value of the current batch, so dividing by the sample count gives values much smaller than a per-sample loss
    avg_acc_top1 = np.sum(accuracies_top1) / n_total # sample-weighted mean of the per-batch accuracies
    avg_acc_top5 = np.sum(accuracies_top5) / n_total
    return avg_loss, avg_acc_top1, avg_acc_top5
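As a quick numeric check of this sample-weighted averaging (made-up numbers): two batches of sizes 64 and 36 with top-1 accuracies 0.75 and 0.50 give

print((0.75 * 64 + 0.50 * 36) / (64 + 36)) # 0.66, i.e. weighted toward the larger batch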
##############################################################
if __name__ == '__main__':
    try:
        # Shape of the input samples
        input_spec = InputSpec(shape=[None] + args['input_size'], dtype='float32', name='image')
        label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')
        # Load the model
        network = paddle.vision.models.resnet50(num_classes=args['class_dim'])
        model = paddle.Model(network, input_spec, label_spec) # instantiate the high-level Model
        model.load(deployment_checkpoint_model) # load the fine-tuned parameters
        model.prepare(loss=paddle.nn.CrossEntropyLoss(), # loss function
                      metrics=paddle.metric.Accuracy(topk=(1, 5))) # evaluation metric
        # Run the evaluation and print the loss and accuracy on the validation set
        print('Starting evaluation...')
        avg_loss, avg_acc_top1, avg_acc_top5 = eval(model, val_reader(), verbose=1)
        print('\r [Validation] loss: {:.5f}, top-1 acc: {:.5f}, top-5 acc: {:.5f} \n'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
        # avg_loss, avg_acc_top1, avg_acc_top5 = eval(model, test_reader(), verbose=1)
        # print('\r [Test] loss: {:.5f}, top-1 acc: {:.5f}, top-5 acc: {:.5f}'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
    except Exception:
        print('Data not found; skipping evaluation.')
Data not found; skipping evaluation.
import os
import time
import json
import paddle
from paddle.static import InputSpec
# Initial configuration
total_epoch = train_parameters['total_epoch']
# Lists recording the curves for plotting
all_train_iters = []
all_train_losses = []
all_train_accs_top1 = []
all_train_accs_top5 = []
all_test_losses = []
all_test_iters = []
all_test_accs_top1 = []
all_test_accs_top5 = []
def train(model):
    # Temporary variables
    num_batch = 0
    best_result = 0
    best_result_id = 0
    elapsed = 0
    # Choose the training split according to the configuration
    if train_parameters['training_data'] == 'trainval':
        data_reader = trainval_reader
    elif train_parameters['training_data'] == 'train':
        data_reader = train_reader
    for epoch in range(1, total_epoch + 1):
        for batch_id, (image, label) in enumerate(data_reader):
            num_batch += 1
            label = paddle.unsqueeze(label, axis=1)
            loss, acc = model.train_batch([image], [label])
            if num_batch % train_parameters['log_interval'] == 0: # log every log_interval batches, which suits large datasets
                avg_loss = loss[0][0]
                acc_top1 = acc[0][0]
                acc_top5 = acc[0][1]
                elapsed_step = time.perf_counter() - elapsed - start
                elapsed = time.perf_counter() - start
                logger.info('Epoch:{}/{}, batch:{}, train_loss:[{:.5f}], acc_top1:[{:.5f}], acc_top5:[{:.5f}]({:.2f}s)'
                            .format(epoch, total_epoch, num_batch, loss[0][0], acc[0][0], acc[0][1], elapsed_step))
                # Record the training curve (loss and accuracy vs. iterations)
                all_train_iters.append(num_batch)
                all_train_losses.append(avg_loss)
                all_train_accs_top1.append(acc_top1)
                all_train_accs_top5.append(acc_top5)
        # Evaluate every eval_interval epochs
        if epoch % train_parameters['eval_interval'] == 0 or epoch == total_epoch:
            # Validation
            avg_loss, avg_acc_top1, avg_acc_top5 = eval(model, val_reader())
            logger.info('[validation] Epoch:{}/{}, val_loss:[{:.5f}], val_top1:[{:.5f}], val_top5:[{:.5f}]'.format(epoch, total_epoch, avg_loss, avg_acc_top1, avg_acc_top5))
            # Record the validation curve
            all_test_iters.append(epoch)
            all_test_losses.append(avg_loss)
            all_test_accs_top1.append(avg_acc_top1)
            all_test_accs_top5.append(avg_acc_top5)
            # Save the best-performing model so far as the final model
            if avg_acc_top1 > best_result:
                best_result = avg_acc_top1
                best_result_id = epoch
                # fine-tune model, used for further tuning and for resuming training
                model.save(os.path.join(checkpoint_models_path, model_name + '_final'))
                # inference model, used for deployment and prediction
                model.save(os.path.join(final_models_path, model_name + '_final'), training=False)
                logger.info('Saved the model at epoch={} as the best model: {}_final'.format(best_result_id, model_name))
            logger.info('Best top-1 validation accuracy: {:.5f} (epoch={})'.format(best_result, best_result_id))
            # Optionally save a checkpoint after every evaluation. Saving costs compute time and a lot of storage:
            # enable it for large models so an interrupted run can be resumed promptly,
            # and disable it for small (fast-training) models to speed training up.
            if train_parameters['checkpointed']:
                model.save(os.path.join(checkpoint_models_path, model_name + '_' + str(epoch)))
    logger.info('Training finished. Best accuracy: {:.5f} (epoch={}), total time {:.2f}s; saved as {}_final'.format(best_result, best_result_id, time.perf_counter() - start, model_name))
#### Training main function #############################################
if __name__ == '__main__':
    # model = MyNet(num_classes=10)
    # train(model)
    # Save this run's hyperparameters
    data = json.dumps(train_parameters, indent=4, ensure_ascii=False, sort_keys=False, separators=(',', ':')) # pretty-print the parameter dict
    logger.info(data)
    # Start the training procedure
    logger.info('Hyperparameters saved. Architecture: {}, dataset: {}, training split: {}. Starting training...'.format(train_parameters['architecture'], train_parameters['dataset_name'], train_parameters['training_data']))
    logger.info('Current model name: {}'.format(model_name))
    # Shape of the input samples
    input_spec = InputSpec(shape=[None] + train_parameters['input_size'], dtype='float32', name='image')
    label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')
    # Load the official model (downloaded automatically if absent); pretrained=True|False selects the ImageNet weights
    network = paddle.vision.models.resnet50(num_classes=args['class_dim'], pretrained=args['pretrained'])
    model = paddle.Model(network, input_spec, label_spec)
    logger.info('Model summary:')
    logger.info(model.summary()) # print the network details
    if train_parameters['checkpoint_train']:
        model.load(checkpoint_model)
        logger.info('Loaded the interrupted {} model and parameters; resuming from the checkpoint'.format(train_parameters['architecture']))
        logger.info('Checkpoint model: {}'.format(checkpoint_model))
    else:
        if not train_parameters['pretrained']:
            logger.info('Initialized {}; training from scratch'.format(train_parameters['architecture']))
        elif train_parameters['pretrained_model'] == 'API':
            logger.info('Loaded the ImageNet-pretrained {}; starting fine-tuning'.format(train_parameters['architecture']))
        else:
            model.load(deployment_pretrained_model)
            logger.info('Loaded the custom pretrained {}; starting fine-tuning'.format(train_parameters['architecture']))
            logger.info('Pretrained model: {}'.format(deployment_pretrained_model))
    # Learning rate, optimizer, loss, and metric
    lr = learning_rate_setting()
    optimizer = optimizer_setting(model, lr)
    model.prepare(optimizer,
                  paddle.nn.CrossEntropyLoss(),
                  paddle.metric.Accuracy(topk=(1, 5)))
    # Start training
    start = time.perf_counter()
    train(model)
    logger.info('Training complete; results are under {}.'.format(result_root_path))
    # Plot the training curves
    logger.info('Done.')
    draw_process("Training Process", 'Train Loss', 'Train Accuracy(top1)', all_train_iters, all_train_losses, all_train_accs_top1, 'train')
    draw_process("Validation Results", 'Validation Loss', 'Validation Accuracy(top1)', all_test_iters, all_test_losses, all_test_accs_top1, 'val')
2021-08-22 17:27:55,083 - INFO: {
    "course_name":"DeepLearning",
    "project_name":"Comp01GarbageClassification",
    "dataset_name":"Garbage",
    "architecture":"Resnet50",
    "training_data":"trainval",
    "postfix":"",
    "starting_time":"202108221727",
    "input_size":[3, 227, 227],
    "mean_value":[0.485, 0.456, 0.406],
    "std_value":[0.229, 0.224, 0.225],
    "num_trainval":14402,
    "num_train":12944,
    "num_val":1458,
    "num_test":400,
    "class_dim":40,
    "label_dict":{... the same 40-class mapping shown above ...},
    "total_epoch":10,
    "batch_size":64,
    "log_interval":10,
    "eval_interval":1,
    "checkpointed":false,
    "checkpoint_train":false,
    "checkpoint_model":"Garbage_Mobilenetv2",
    "checkpoint_time":"202102182058",
    "pretrained":true,
    "pretrained_model":"API",
    "dataset_root_path":"D:\\Workspace\\ExpDatasets\\",
    "result_root_path":"D:\\Workspace\\ExpResults\\",
    "deployment_root_path":"D:\\Workspace\\ExpDeployments\\",
    "useGPU":true,
    "learning_strategy":{"optimizer_strategy":"Momentum", "learning_rate_strategy":"CosineAnnealingDecay", "learning_rate":0.001, "momentum":0.9, "Piecewise_boundaries":[60, 80, 90], "Piecewise_values":[0.01, 0.001, 0.0001, 1e-05], "Exponential_gamma":0.9, "Polynomial_decay_steps":10, "verbose":false},
    "augmentation_strategy":{"withAugmentation":true, "augmentation_prob":0.5, "rotate_angle":15, "Hflip_prob":0.5, "brightness":0.4, "contrast":0.4, "saturation":0.4, "hue":0.4}
}
2021-08-22 17:27:55,085 - INFO: Hyperparameters saved. Architecture: Resnet50, dataset: Garbage, training split: trainval. Starting training...
2021-08-22 17:27:55,086 - INFO: Current model name: Garbage_Resnet50_trainval
2021-08-22 17:27:55,186 - INFO: unique_endpoints {''}
2021-08-22 17:27:55,187 - INFO: File C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams md5 checking...
2021-08-22 17:27:55,506 - INFO: Found C:\Users\Administrator/.cache/paddle/hapi/weights\resnet50.pdparams
UserWarning: Skip loading for fc.weight. fc.weight receives a shape [2048, 1000], but the expected shape is [2048, 40].
UserWarning: Skip loading for fc.bias. fc.bias receives a shape [1000], but the expected shape is [40].
2021-08-22 17:27:56,300 - INFO: Model summary:
2021-08-22 17:27:56,345 - INFO: {'total_params': 23643112, 'trainable_params': 23536872}
2021-08-22 17:27:56,346 - INFO: Loaded the ImageNet-pretrained Resnet50; starting fine-tuning
-------------------------------------------------------------------------------
 Layer (type)         Input Shape           Output Shape          Param #
===============================================================================
 Conv2D-54            [[1, 3, 227, 227]]    [1, 64, 114, 114]     9,408
 BatchNorm2D-54       [[1, 64, 114, 114]]   [1, 64, 114, 114]     256
 ReLU-18              [[1, 64, 114, 114]]   [1, 64, 114, 114]     0
 MaxPool2D-2          [[1, 64, 114, 114]]   [1, 64, 57, 57]       0
 ... (the full per-layer table of the ResNet-50 BottleneckBlock stages is omitted) ...
 AdaptiveAvgPool2D-2  [[1, 2048, 8, 8]]     [1, 2048, 1, 1]       0
 Linear-2             [[1, 2048]]           [1, 40]               81,960
===============================================================================
Total params: 23,643,112
Trainable params: 23,536,872
Non-trainable params: 106,240
-------------------------------------------------------------------------------
Input size (MB): 0.59
Forward/backward pass size (MB): 282.41
Params size (MB): 90.19
Estimated Total Size (MB): 373.19
-------------------------------------------------------------------------------
2021-08-22 17:28:05,690 - INFO: Epoch:1/10, batch:10, train_loss:[3.67485], acc_top1:[0.06250], acc_top5:[0.15625](9.30s)
2021-08-22 17:28:14,592 - INFO: Epoch:1/10, batch:20, train_loss:[3.58831], acc_top1:[0.07812], acc_top5:[0.25000](8.90s)
2021-08-22 17:28:23,508 - INFO: Epoch:1/10, batch:30, train_loss:[3.55868], acc_top1:[0.04688], acc_top5:[0.25000](8.92s)
... (per-batch training lines omitted) ...
2021-08-22 17:31:06,553 - INFO: Epoch:1/10, batch:220, train_loss:[1.56021], acc_top1:[0.51562], acc_top5:[0.90625](8.33s)
2021-08-22 17:31:18,225 - INFO: [validation] Epoch:1/10, val_loss:[0.01975], val_top1:[0.65364], val_top5:[0.92867]
2021-08-22 17:31:21,031 - INFO: Saved the model at epoch=1 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:31:21,032 - INFO: Best top-1 validation accuracy: 0.65364 (epoch=1)
... (per-batch training lines omitted) ...
2021-08-22 17:34:35,295 - INFO: [validation] Epoch:2/10, val_loss:[0.01354], val_top1:[0.74348], val_top5:[0.96502]
2021-08-22 17:34:38,030 - INFO: Saved the model at epoch=2 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:34:38,031 - INFO: Best top-1 validation accuracy: 0.74348 (epoch=2)
... (per-batch training lines omitted) ...
2021-08-22 17:37:52,605 - INFO: [validation] Epoch:3/10, val_loss:[0.01146], val_top1:[0.77984], val_top5:[0.97257]
2021-08-22 17:37:55,430 - INFO: Saved the model at epoch=3 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:37:55,431 - INFO: Best top-1 validation accuracy: 0.77984 (epoch=3)
... (per-batch training lines omitted) ...
2021-08-22 17:41:10,958 - INFO: [validation] Epoch:4/10, val_loss:[0.00985], val_top1:[0.81550], val_top5:[0.97874]
2021-08-22 17:41:13,614 - INFO: Saved the model at epoch=4 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:41:13,616 - INFO: Best top-1 validation accuracy: 0.81550 (epoch=4)
... (per-batch training lines omitted) ...
2021-08-22 17:44:29,580 - INFO: [validation] Epoch:5/10, val_loss:[0.00869], val_top1:[0.82785], val_top5:[0.98560]
2021-08-22 17:44:32,265 - INFO: Saved the model at epoch=5 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:44:32,266 - INFO: Best top-1 validation accuracy: 0.82785 (epoch=5)
... (per-batch training lines omitted) ...
2021-08-22 17:47:47,370 - INFO: [validation] Epoch:6/10, val_loss:[0.00767], val_top1:[0.84842], val_top5:[0.98834]
2021-08-22 17:47:50,013 - INFO: Saved the model at epoch=6 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:47:50,014 - INFO: Best top-1 validation accuracy: 0.84842 (epoch=6)
... (per-batch training lines omitted) ...
2021-08-22 17:51:05,155 - INFO: [validation] Epoch:7/10, val_loss:[0.00696], val_top1:[0.85734], val_top5:[0.98971]
2021-08-22 17:51:07,760 - INFO: Saved the model at epoch=7 as the best model: Garbage_Resnet50_trainval_final
2021-08-22 17:51:07,761 - INFO: Best top-1 validation accuracy: 0.85734 (epoch=7)
... (per-batch training lines omitted) ...
2021-08-22 17:53:59,790 - INFO: Epoch:8/10, batch:1780, train_loss:[0.74180], acc_top1:[0.75000], acc_top5:[1.00000](8.57s)
2021-08-22 17:54:08,932 - INFO:
Epoch:8/10, batch:1790, train_loss:[0.76196], acc_top1:[0.76562], acc_top5:[0.98438](9.14s) 2021-08-22 17:54:17,850 - INFO: Epoch:8/10, batch:1800, train_loss:[0.67637], acc_top1:[0.82812], acc_top5:[0.95312](8.92s) 2021-08-22 17:54:25,708 - INFO: [validation] Epoch:8/10, val_loss:[0.00675], val_top1:[0.86968], val_top5:[0.99040] 2021-08-22 17:54:28,590 - INFO: 已保存当前测试模型(epoch=8)为最优模型:Garbage_Resnet50_trainval_final 2021-08-22 17:54:28,591 - INFO: 最优top1测试精度:0.86968 (epoch=8) 2021-08-22 17:54:37,691 - INFO: Epoch:9/10, batch:1810, train_loss:[0.77545], acc_top1:[0.78125], acc_top5:[0.98438](19.84s) 2021-08-22 17:54:46,684 - INFO: Epoch:9/10, batch:1820, train_loss:[0.63141], acc_top1:[0.79688], acc_top5:[0.98438](8.99s) 2021-08-22 17:54:55,813 - INFO: Epoch:9/10, batch:1830, train_loss:[0.54144], acc_top1:[0.82812], acc_top5:[0.98438](9.13s) 2021-08-22 17:55:04,650 - INFO: Epoch:9/10, batch:1840, train_loss:[0.90921], acc_top1:[0.73438], acc_top5:[0.90625](8.84s) 2021-08-22 17:55:13,490 - INFO: Epoch:9/10, batch:1850, train_loss:[0.74689], acc_top1:[0.73438], acc_top5:[0.96875](8.84s) 2021-08-22 17:55:22,430 - INFO: Epoch:9/10, batch:1860, train_loss:[0.79644], acc_top1:[0.79688], acc_top5:[0.92188](8.94s) 2021-08-22 17:55:31,414 - INFO: Epoch:9/10, batch:1870, train_loss:[0.70958], acc_top1:[0.79688], acc_top5:[0.92188](8.98s) 2021-08-22 17:55:40,365 - INFO: Epoch:9/10, batch:1880, train_loss:[0.77298], acc_top1:[0.81250], acc_top5:[0.95312](8.95s) 2021-08-22 17:55:49,349 - INFO: Epoch:9/10, batch:1890, train_loss:[0.78440], acc_top1:[0.79688], acc_top5:[0.92188](8.98s) 2021-08-22 17:55:58,278 - INFO: Epoch:9/10, batch:1900, train_loss:[0.71061], acc_top1:[0.78125], acc_top5:[0.95312](8.93s) 2021-08-22 17:56:07,476 - INFO: Epoch:9/10, batch:1910, train_loss:[0.54883], acc_top1:[0.79688], acc_top5:[0.96875](9.20s) 2021-08-22 17:56:16,555 - INFO: Epoch:9/10, batch:1920, train_loss:[0.70008], acc_top1:[0.75000], acc_top5:[0.96875](9.08s) 2021-08-22 17:56:25,429 - INFO: Epoch:9/10, batch:1930, train_loss:[0.66951], acc_top1:[0.78125], acc_top5:[0.95312](8.87s) 2021-08-22 17:56:34,442 - INFO: Epoch:9/10, batch:1940, train_loss:[0.55841], acc_top1:[0.81250], acc_top5:[0.98438](9.01s) 2021-08-22 17:56:43,255 - INFO: Epoch:9/10, batch:1950, train_loss:[0.87944], acc_top1:[0.76562], acc_top5:[0.93750](8.81s) 2021-08-22 17:56:52,382 - INFO: Epoch:9/10, batch:1960, train_loss:[0.89601], acc_top1:[0.68750], acc_top5:[0.92188](9.13s) 2021-08-22 17:57:01,320 - INFO: Epoch:9/10, batch:1970, train_loss:[0.61195], acc_top1:[0.81250], acc_top5:[0.96875](8.94s) 2021-08-22 17:57:10,331 - INFO: Epoch:9/10, batch:1980, train_loss:[0.59344], acc_top1:[0.81250], acc_top5:[0.96875](9.01s) 2021-08-22 17:57:19,313 - INFO: Epoch:9/10, batch:1990, train_loss:[0.49988], acc_top1:[0.89062], acc_top5:[1.00000](8.98s) 2021-08-22 17:57:28,083 - INFO: Epoch:9/10, batch:2000, train_loss:[0.64842], acc_top1:[0.79688], acc_top5:[0.95312](8.77s) 2021-08-22 17:57:37,056 - INFO: Epoch:9/10, batch:2010, train_loss:[0.96179], acc_top1:[0.71875], acc_top5:[0.92188](8.97s) 2021-08-22 17:57:46,006 - INFO: Epoch:9/10, batch:2020, train_loss:[0.84787], acc_top1:[0.73438], acc_top5:[0.93750](8.95s) 2021-08-22 17:57:58,497 - INFO: [validation] Epoch:9/10, val_loss:[0.00613], val_top1:[0.88340], val_top5:[0.99040] 2021-08-22 17:58:01,712 - INFO: 已保存当前测试模型(epoch=9)为最优模型:Garbage_Resnet50_trainval_final 2021-08-22 17:58:01,712 - INFO: 最优top1测试精度:0.88340 (epoch=9) 2021-08-22 17:58:06,317 - INFO: Epoch:10/10, batch:2030, train_loss:[0.73180], 
acc_top1:[0.82812], acc_top5:[0.95312](20.31s) 2021-08-22 17:58:15,251 - INFO: Epoch:10/10, batch:2040, train_loss:[0.87980], acc_top1:[0.75000], acc_top5:[0.93750](8.93s) 2021-08-22 17:58:24,079 - INFO: Epoch:10/10, batch:2050, train_loss:[0.91473], acc_top1:[0.76562], acc_top5:[0.90625](8.83s) 2021-08-22 17:58:32,979 - INFO: Epoch:10/10, batch:2060, train_loss:[0.58253], acc_top1:[0.84375], acc_top5:[0.98438](8.90s) 2021-08-22 17:58:41,954 - INFO: Epoch:10/10, batch:2070, train_loss:[0.87739], acc_top1:[0.78125], acc_top5:[0.95312](8.98s) 2021-08-22 17:58:51,066 - INFO: Epoch:10/10, batch:2080, train_loss:[0.75413], acc_top1:[0.81250], acc_top5:[0.98438](9.11s) 2021-08-22 17:59:00,028 - INFO: Epoch:10/10, batch:2090, train_loss:[0.58096], acc_top1:[0.84375], acc_top5:[0.96875](8.96s) 2021-08-22 17:59:09,074 - INFO: Epoch:10/10, batch:2100, train_loss:[0.71027], acc_top1:[0.79688], acc_top5:[0.93750](9.05s) 2021-08-22 17:59:18,065 - INFO: Epoch:10/10, batch:2110, train_loss:[0.70039], acc_top1:[0.70312], acc_top5:[1.00000](8.99s) 2021-08-22 17:59:27,370 - INFO: Epoch:10/10, batch:2120, train_loss:[0.50366], acc_top1:[0.84375], acc_top5:[1.00000](9.30s) 2021-08-22 17:59:36,317 - INFO: Epoch:10/10, batch:2130, train_loss:[0.81826], acc_top1:[0.78125], acc_top5:[0.92188](8.95s) 2021-08-22 17:59:45,273 - INFO: Epoch:10/10, batch:2140, train_loss:[0.65945], acc_top1:[0.79688], acc_top5:[0.98438](8.96s) 2021-08-22 17:59:54,316 - INFO: Epoch:10/10, batch:2150, train_loss:[0.81727], acc_top1:[0.76562], acc_top5:[0.95312](9.04s) 2021-08-22 18:00:03,367 - INFO: Epoch:10/10, batch:2160, train_loss:[0.46495], acc_top1:[0.85938], acc_top5:[0.96875](9.05s) 2021-08-22 18:00:12,358 - INFO: Epoch:10/10, batch:2170, train_loss:[0.50102], acc_top1:[0.84375], acc_top5:[0.96875](8.99s) 2021-08-22 18:00:21,301 - INFO: Epoch:10/10, batch:2180, train_loss:[0.69640], acc_top1:[0.76562], acc_top5:[0.95312](8.94s) 2021-08-22 18:00:30,190 - INFO: Epoch:10/10, batch:2190, train_loss:[0.56214], acc_top1:[0.79688], acc_top5:[0.96875](8.89s) 2021-08-22 18:00:39,047 - INFO: Epoch:10/10, batch:2200, train_loss:[0.73349], acc_top1:[0.78125], acc_top5:[0.96875](8.86s) 2021-08-22 18:00:48,064 - INFO: Epoch:10/10, batch:2210, train_loss:[0.79292], acc_top1:[0.78125], acc_top5:[0.93750](9.02s) 2021-08-22 18:00:57,105 - INFO: Epoch:10/10, batch:2220, train_loss:[0.70676], acc_top1:[0.79688], acc_top5:[0.95312](9.04s) 2021-08-22 18:01:06,072 - INFO: Epoch:10/10, batch:2230, train_loss:[0.72489], acc_top1:[0.79688], acc_top5:[0.93750](8.97s) 2021-08-22 18:01:14,885 - INFO: Epoch:10/10, batch:2240, train_loss:[0.68028], acc_top1:[0.82812], acc_top5:[0.95312](8.81s) 2021-08-22 18:01:23,927 - INFO: Epoch:10/10, batch:2250, train_loss:[0.85313], acc_top1:[0.71875], acc_top5:[0.95312](9.04s) 2021-08-22 18:01:31,787 - INFO: [validation] Epoch:10/10, val_loss:[0.00617], val_top1:[0.88134], val_top5:[0.99177] 2021-08-22 18:01:31,788 - INFO: 最优top1测试精度:0.88340 (epoch=9) 2021-08-22 18:01:31,788 - INFO: 训练完成,最终性能accuracy=0.88340(epoch=9), 总耗时2015.40s, 已将其保存为:Garbage_Resnet50_trainval_final 2021-08-22 18:01:31,788 - INFO: 训练完毕,结果路径D:\Workspace\ExpResults\Comp01GarbageClassification\Garbage_Resnet50_trainval. 2021-08-22 18:01:31,789 - INFO: Done.
After training completes, it is recommended to **copy** the final files in the *ExpResults* folder to *ExpDeployments* for deployment and application.
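A minimal sketch of this copy step, assuming the directory layout shown in the training log above (the paths are illustrative):

import os
import shutil

# Paths taken from the log and result messages above
src = 'D:\\Workspace\\ExpResults\\Comp01GarbageClassification\\Garbage_Resnet50_trainval'
dst = 'D:\\Workspace\\ExpDeployments\\DeepLearning\\Comp01GarbageClassification\\Garbage_Resnet50_trainval'
shutil.copytree(src, dst, dirs_exist_ok=True)  # dirs_exist_ok requires Python 3.8+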
Note that the test samples of the Garbage dataset come without label information, so accuracy cannot be evaluated on the test set.
from paddle.static import InputSpec

if __name__ == '__main__':
    # Define the dimensions of the input samples
    input_spec = InputSpec(shape=[None] + args['input_size'], dtype='float32', name='image')
    label_spec = InputSpec(shape=[None, 1], dtype='int64', name='label')

    # Load the model
    network = paddle.vision.models.resnet50(num_classes=args['class_dim'])
    model = paddle.Model(network, input_spec, label_spec)  # instantiate the model
    model.load(deployment_checkpoint_model)                # load the fine-tuned parameters
    model.prepare(loss=paddle.nn.CrossEntropyLoss(),            # set the loss
                  metrics=paddle.metric.Accuracy(topk=(1, 5)))  # set the evaluation metric

    # Run the evaluation helper and print the loss and accuracy on the validation set
    print('Starting evaluation...')
    avg_loss, avg_acc_top1, avg_acc_top5 = eval(model, val_reader(), verbose=1)
    print('\r[Validation set] loss: {:.5f}, top-1 accuracy: {:.5f}, top-5 accuracy: {:.5f} \n'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
    # The test set is unlabeled, so the equivalent call below cannot be scored:
    # avg_loss, avg_acc_top1, avg_acc_top5 = eval(model, test_reader(), verbose=1)
    # print('\r[Test set] loss: {:.5f}, top-1 accuracy: {:.5f}, top-5 accuracy: {:.5f}'.format(avg_loss, avg_acc_top1, avg_acc_top5), end='')
Starting evaluation... [Validation set] loss: 0.00613, top-1 accuracy: 0.88340, top-5 accuracy: 0.99040
[Result analysis]
Note that this offline evaluation is performed on the validation set (the commented-out calls for the test set cannot be scored, since the test samples carry no labels). Its loss and accuracy therefore reproduce the best validation result recorded during training (epoch 9), rather than a genuinely held-out test score.
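The `eval()` helper used above is defined earlier in the notebook. For reference, a minimal sketch of an equivalent evaluation loop, assuming per-sample loss averaging (the exact normalization behind the printed `val_loss` is an assumption):

import paddle
import paddle.nn.functional as F

def eval_sketch(model, data_reader):
    network = model.network                      # the nn.Layer wrapped by paddle.Model
    network.eval()
    accuracy = paddle.metric.Accuracy(topk=(1, 5))
    total_loss, total_samples = 0.0, 0
    with paddle.no_grad():
        for image, label in data_reader:
            logits = network(image)
            total_loss += float(F.cross_entropy(logits, label, reduction='sum'))
            total_samples += image.shape[0]
            accuracy.update(accuracy.compute(logits, label))
    top1, top5 = accuracy.accumulate()
    return total_loss / total_samples, top1, top5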
# Import dependencies
import numpy as np
import random
import os
import cv2
import json
import matplotlib.pyplot as plt
import paddle
import paddle.nn.functional as F

args = {
    'course_name': 'DeepLearning',
    'project_name': 'Comp01GarbageClassification',
    'dataset_name': 'Garbage',
    'architecture': 'Resnet50',
    'training_data': 'trainval',
    'input_size': [3, 227, 227],
    'mean_value': [0.485, 0.456, 0.406],  # ImageNet mean
    'std_value': [0.229, 0.224, 0.225],   # ImageNet standard deviation
    'batch_size': 64,                     # batch size used by both the training and test readers
    'dataset_root_path': 'D:\\Workspace\\ExpDatasets\\',
    'result_root_path': 'D:\\Workspace\\ExpResults\\',
    'deployment_root_path': 'D:\\Workspace\\ExpDeployments\\',
}
model_name = args['dataset_name'] + '_' + args['architecture']
if args['training_data'] == 'trainval':
    model_name = model_name + '_trainval'
# Root directory of the deployed model (defined explicitly so that predict_batch()
# below can write model_result.txt; it matches the result path printed later)
deployment_root_path = os.path.join(args['deployment_root_path'], args['course_name'], args['project_name'], model_name)
deployment_final_models = os.path.join(deployment_root_path, 'final_models', model_name + '_final')
dataset_root_path = os.path.join(args['dataset_root_path'], args['dataset_name'])
json_dataset_info = os.path.join(dataset_root_path, 'dataset_info.json')
import os
import sys
import cv2
import numpy as np
import paddle
import paddle.vision.transforms as T
from paddle.io import DataLoader
paddle.vision.set_image_backend('cv2')
input_size = (args['input_size'][1], args['input_size'][2])
# 1. Define the dataset
class Dataset(paddle.io.Dataset):
    # argsAS holds the augmentation settings defined earlier in the notebook
    def __init__(self, dataset_root_path, mode='test', withAugmentation=argsAS['withAugmentation']):
        assert mode in ['train', 'val', 'test', 'trainval']
        self.data = []
        self.withAugmentation = withAugmentation
        with open(os.path.join(dataset_root_path, mode + '.txt')) as f:
            for line in f.readlines():
                info = line.strip().split('\t')
                image_path = os.path.join(dataset_root_path, info[0].strip())
                if len(info) == 2:
                    self.data.append([image_path, info[1].strip()])
                elif len(info) == 1:           # the unlabeled test list has no label column
                    self.data.append([image_path, -1])
        # One random draw decides (per dataset instance) whether the augmented pipeline is used
        prob = np.random.random()
        if mode in ['train', 'trainval'] and prob >= argsAS['augmentation_prob']:
            self.transforms = T.Compose([
                T.RandomResizedCrop(input_size),
                T.RandomHorizontalFlip(argsAS['Hflip_prob']),
                T.RandomRotation(argsAS['rotate_angle']),
                T.ColorJitter(brightness=argsAS['brightness'], contrast=argsAS['contrast'],
                              saturation=argsAS['saturation'], hue=argsAS['hue']),
                T.ToTensor(),
                T.Normalize(mean=args['mean_value'], std=args['std_value'])
            ])
        elif mode in ['val', 'test'] or prob < argsAS['augmentation_prob']:
            self.transforms = T.Compose([
                T.Resize(input_size),
                T.ToTensor(),
                T.Normalize(mean=args['mean_value'], std=args['std_value'])
            ])

    # Fetch a single sample by index
    def __getitem__(self, index):
        image_path, label = self.data[index]
        image = cv2.imread(image_path, 1)  # flag 1 forces a color read (0 would be grayscale)
        if self.withAugmentation:
            image = self.transforms(image)
        return image, np.array(label, dtype='int64')

    # Total number of samples
    def __len__(self):
        return len(self.data)
###############################################################
# Test the Dataset class: print the shape of a preprocessed sample
# and of a raw (unpreprocessed) sample
if __name__ == "__main__":
    import random

    # 1. Load the data
    dataset_val = Dataset(dataset_root_path, mode='val')
    i = random.randrange(0, len(dataset_val))
    img1 = dataset_val[i][0]
    print('Shape of a validation sample (with preprocessing): {}'.format(img1.shape))

    dataset_test = Dataset(dataset_root_path, mode='test', withAugmentation=False)
    j = random.randrange(0, len(dataset_test))
    img2 = dataset_test[j][0]
    print('Shape of a test sample (without preprocessing): {}'.format(img2.shape))
Shape of a validation sample (with preprocessing): [3, 227, 227]
Shape of a test sample (without preprocessing): (688, 687, 3)
import os
import sys
from paddle.io import DataLoader

# 1. Instantiate the datasets
dataset_trainval = Dataset(dataset_root_path, mode='trainval')
dataset_train = Dataset(dataset_root_path, mode='train')
dataset_val = Dataset(dataset_root_path, mode='val')
dataset_test = Dataset(dataset_root_path, mode='test')

# 2. Create the readers
trainval_reader = DataLoader(dataset_trainval, batch_size=args['batch_size'], shuffle=True, drop_last=True)
train_reader = DataLoader(dataset_train, batch_size=args['batch_size'], shuffle=True, drop_last=True)
val_reader = DataLoader(dataset_val, batch_size=args['batch_size'], shuffle=False, drop_last=False)
test_reader = DataLoader(dataset_test, batch_size=args['batch_size'], shuffle=False, drop_last=False)

# Test the reader
if __name__ == "__main__":
    for i, (image, label) in enumerate(test_reader()):
        print('Image shape of test-set batch_{}: {}, label shape: {}'.format(i, image.shape, label.shape))
        break
Image shape of test-set batch_0: [64, 3, 227, 227], label shape: [64]
import paddle
import paddle.vision.transforms as T

# 2. Ten-crop augmentation used at test time
def TenCrop(img, crop_size=args['input_size'][1]):
    # input: Height x Width x Channel
    img_size = 256
    img = T.functional.resize(img, img_size)
    data = np.zeros([10, crop_size, crop_size, 3], dtype=np.uint8)

    # Take the top-left, top-right, bottom-left, bottom-right and center crops,
    # plus their horizontal flips: 10 samples in total
    data[0] = T.functional.crop(img, 0, 0, crop_size, crop_size)
    data[1] = T.functional.crop(img, 0, img_size - crop_size, crop_size, crop_size)
    data[2] = T.functional.crop(img, img_size - crop_size, 0, crop_size, crop_size)
    data[3] = T.functional.crop(img, img_size - crop_size, img_size - crop_size, crop_size, crop_size)
    data[4] = T.functional.center_crop(img, crop_size)
    data[5] = T.functional.hflip(data[0, :, :, :])
    data[6] = T.functional.hflip(data[1, :, :, :])
    data[7] = T.functional.hflip(data[2, :, :, :])
    data[8] = T.functional.hflip(data[3, :, :, :])
    data[9] = T.functional.hflip(data[4, :, :, :])
    return data
# 3. Preprocessing for a single image (optionally with ten-crop),
#    including rescaling and mean/std normalization
def SimplePreprocessing(image, input_size=args['input_size'][1:3], isTenCrop=True):
    image = cv2.resize(image, input_size)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    transform = T.Compose([
        T.ToTensor(),
        T.Normalize(mean=args['mean_value'], std=args['std_value'])
    ])
    if isTenCrop:
        fake_data = np.zeros([10, 3, input_size[0], input_size[1]], dtype=np.float32)
        fake_blob = TenCrop(image)
        for i in range(10):
            fake_data[i] = transform(fake_blob[i]).numpy()
    else:
        fake_data = transform(image)
    return fake_data
##############################################################
# Test the preprocessing: print the shapes of the raw image and of the
# preprocessed outputs with and without ten-crop, then show example images
if __name__ == "__main__":
    img_path = 'D:\\Workspace\\ExpDatasets\\Garbage\\test\\test10.jpg'
    img0 = cv2.imread(img_path, 1)
    img1 = SimplePreprocessing(img0, isTenCrop=False)
    img2 = SimplePreprocessing(img0, isTenCrop=True)
    print('Shape of the original image: {}'.format(img0.shape))
    print('After simple preprocessing (without ten-crop): {}'.format(img1.shape))
    print('After simple preprocessing (with ten-crop): {}'.format(img2.shape))

    img1_show = img1.transpose((1, 2, 0))
    img2_show = img2[0].transpose((1, 2, 0))
    plt.figure(figsize=(18, 6))
    ax0 = plt.subplot(1, 3, 1)
    ax0.set_title('img0')
    plt.imshow(img0)
    ax1 = plt.subplot(1, 3, 2)
    ax1.set_title('img1_show')
    plt.imshow(img1_show)
    ax2 = plt.subplot(1, 3, 3)
    ax2.set_title('img2_show')
    plt.imshow(img2_show)
    plt.show()
2021-08-22 19:53:24,731 - WARNING: Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
2021-08-22 19:53:24,745 - WARNING: Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Shape of the original image: (342, 544, 3)
After simple preprocessing (without ten-crop): [3, 227, 227]
After simple preprocessing (with ten-crop): (10, 3, 227, 227)
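The clipping warnings come from passing normalized tensors straight to imshow; undoing the ImageNet normalization before plotting avoids them. A minimal sketch, using the `img1` produced above:

import numpy as np

mean = np.array(args['mean_value']).reshape(1, 1, 3)
std = np.array(args['std_value']).reshape(1, 1, 3)
img = img1.numpy().transpose(1, 2, 0)      # CHW -> HWC
img = np.clip(img * std + mean, 0.0, 1.0)  # undo Normalize for display
plt.imshow(img)
plt.show()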
#################################################
# Modified by: Xinyu Ou (http://ouxinyu.cn)
# Purpose: evaluate the test set with the deployed model
# Features:
# 1. Batch-predict the test set with the deployed model and save the predictions
# 2. Predict a single sample with the deployed model
#################################################
import numpy as np
import random
import codecs
import os
import cv2
import json
import matplotlib.pyplot as plt
import paddle
import paddle.nn.functional as F
# 1. Batch-predict the test set with the deployed model and save the predictions
def predict_batch(model, data_reader):
    prediction = []
    for batch_id, (image, label) in enumerate(data_reader):
        logits = model(image)
        pred = F.softmax(logits)
        pred_id = np.argmax(pred.numpy(), axis=1)
        prediction = np.append(prediction, pred_id).astype('int64')

    prediction_path = os.path.join(deployment_root_path, 'model_result.txt')
    with codecs.open(prediction_path, 'w', 'utf-8') as f_pred:
        for i in range(len(prediction)):
            f_pred.write('{}\n'.format(prediction[i]))
    print(prediction)
    print('Result file saved to {} successfully.'.format(prediction_path))

# 2. Predict a single sample with the deployed model
def predict(model, image):
    isTenCrop = True
    image = SimplePreprocessing(image, isTenCrop=isTenCrop)
    if isTenCrop:
        logits = model(paddle.to_tensor(image))  # the ten-crop blob is a numpy array, so convert it
        pred = F.softmax(logits)
        pred = np.mean(pred.numpy(), axis=0)     # average the predictions of the 10 crops
    else:
        image = paddle.unsqueeze(image, axis=0)
        logits = model(image)
        pred = F.softmax(logits).numpy()
    pred_id = np.argmax(pred)
    return pred_id

##############################################################
if __name__ == '__main__':
    # Load the deployed model
    model = paddle.jit.load(deployment_final_models)

    # 1. Batch-predict the test set and write model_result.txt
    predict_batch(model, test_reader())

    # 2. Predict a single sample
    # 2.1 Load the label information of the dataset
    dataset_info_list = os.path.join(dataset_root_path, 'dataset_info.json')
    with open(dataset_info_list, 'r', encoding='utf-8') as f_info:
        dataset_info = json.load(f_info)

    # 2.2 Pick an image from the test list
    test_list = os.path.join(dataset_root_path, 'test.txt')
    with open(test_list, 'r') as f_test:
        lines = f_test.readlines()
    line = lines[0]
    # line = random.choice(lines)
    img_path = line.split()[0]
    img_path = os.path.join(dataset_root_path, img_path)
    # img_path = 'D:\\Workspace\\ExpDatasets\\Garbage\\test\\test1.jpg'
    image = cv2.imread(img_path, 1)

    # 2.3 Predict the class of the sample
    pred_id = predict(model, image)

    # 2.4 Convert the predicted label id to its label name
    label_name_pred = dataset_info['label_dict'][str(pred_id)]
    print('Predicted class of sample {}: {} (id={})'.format(line.split()[0], label_name_pred, pred_id))

    # 2.5 Show the sample
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plt.imshow(image_rgb)
    plt.show()
[29 1 4 23 5 0 1 31 39 34 26 17 37 22 22 13 20 35 28 30 5 15 21 26 33 14 10 4 7 30 38 15 19 14 39 31 38 28 23 37 19 28 37 35 6 24 0 18 9 3 25 19 13 35 31 31 18 35 39 32 36 1 24 36 22 3 35 17 3 1 15 10 28 7 12 22 18 37 8 20 36 16 21 9 23 32 21 16 12 14 1 6 20 11 1 18 25 26 5 27 12 38 38 22 11 35 21 5 8 24 6 7 17 14 24 16 6 7 9 3 14 0 19 17 19 32 24 32 21 9 21 14 6 10 3 11 29 13 39 18 6 7 35 13 30 1 34 22 18 10 19 39 14 2 25 16 34 22 26 16 3 26 18 23 38 23 21 12 24 23 8 9 9 36 14 23 0 3 15 20 34 29 39 25 36 5 2 4 6 34 27 28 27 9 21 23 27 23 5 23 15 2 27 5 9 11 24 0 33 27 27 10 0 23 11 14 22 18 14 31 34 4 29 17 17 39 24 21 13 38 7 23 4 26 21 8 17 21 10 25 28 33 34 2 31 8 33 21 29 10 2 5 20 34 17 33 25 0 32 3 24 13 2 29 6 36 15 26 20 7 37 21 0 4 12 18 16 10 5 26 36 17 27 39 30 35 1 27 23 12 33 34 2 30 3 28 29 17 28 38 20 27 15 36 7 31 17 23 4 33 13 35 19 10 18 39 39 2 24 23 18 39 12 22 32 19 6 30 28 22 8 11 36 5 26 6 10 10 16 28 6 27 11 30 13 35 37 33 12 26 13 7 1 23 37 39 21 7 2 34 14 16 39 14 17 39 39 10 14 33 39 6 2 1 12 5 23 3 30 13 29 12 19 18 25 21 31 28 9 21 19 18 11 25 30 21 9 26 8 37]
Result file saved to D:\Workspace\ExpDeployments\DeepLearning\Comp01GarbageClassification\Garbage_Resnet50_trainval\model_result.txt successfully.
Predicted class of sample test\test1.jpg: 可回收物/砧板 (id=29)
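The predicted label name has the form "coarse/fine" (e.g. 可回收物/砧板), so the coarse category can be recovered from a prediction by splitting on '/'. A minimal sketch using the `label_name_pred` computed above:

coarse, fine = label_name_pred.split('/')   # e.g. '可回收物', '砧板'
print('Coarse category: {}, fine-grained class: {}'.format(coarse, fine))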
Below we evaluate the garbage-classification task on four models: ResNet50, ResNet18, MobileNetV2 and VGG16. All models use batch_size=64 and a learning rate of 0.001 (with one additional ResNet50 run at 0.01 for comparison), are initialized from ImageNet-pretrained weights, and are trained on the train split.
| Model | Baseline | ImageNet pretrained | learning_rate | best_epoch | top-1 acc | top-5 acc | loss | per-batch / total training time (s) | trainable / total params |
|---|---|---|---|---|---|---|---|---|---|
| Garbage_Resnet18 | ResNet18 | Yes | 0.001 | 10/10 | 0.75240 | 0.95679 | 0.01308 | 5.48/1172.13 | 11,172,042/11,191,242 |
| Garbage_Resnet50 | ResNet50 | Yes | 0.001 | 8/10 | 0.87723 | 0.98834 | 0.00697 | 5.94/1536.76 | 23,479,500/23,585,740 |
| Garbage_Resnet50 | ResNet50 | Yes | 0.01 | 10/10 | 0.74211 | 0.95062 | 0.01485 | 8.35/1835.17 | 23,479,500/23,585,740 |
| Garbage_VGG16 | VGG16 | Yes | 0.001 | 8/10 | 0.70165 | 0.94444 | 0.01514 | 10.35/2346.10 | 134,424,424/134,424,424 |
| Garbage_Mobilenetv2 | Mobilenetv2 | Yes | 0.001 | 9/10 | 0.77572 | 0.96228 | 0.01208 | 6.02/1540.33 | 2,205,132/2,273,356 |
After this preliminary training, ResNet50 achieves the best performance within 10 epochs. We therefore fix all hyperparameters, switch the training set to trainval, and again stop after 10 epochs. The final Baseline predictions are produced by running the trainval-trained model on the test split (see the sketch below for instantiating each baseline network by name).
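A minimal sketch of how the four baseline networks could be instantiated by name; `build_network` is a hypothetical helper, not part of the original notebook:

import paddle.vision.models as models

def build_network(architecture, num_classes=40):  # the Garbage dataset has 40 classes
    factories = {
        'Resnet18': models.resnet18,
        'Resnet50': models.resnet50,
        'VGG16': models.vgg16,
        'Mobilenetv2': models.mobilenet_v2,
    }
    return factories[architecture](num_classes=num_classes)

network = build_network(args['architecture'])     # e.g. 'Resnet50'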