How to Speed Up Network Model Training

科技綠洲 · Source: Python數據科學 · Author: Python數據科學 · 2023-11-03 10:00

If the dataset is large and the network is deep, training becomes slow. In that case we can speed training up with PyTorch's AMP (autocast and GradScaler). This post is built around exactly that: it compares PyTorch's AMP components (autocast and GradScaler) and shows how automatic mixed precision accelerates model training.

Note that PyTorch 1.6+ already ships with torch.cuda.amp, so NVIDIA's apex library (half-precision acceleration) is no longer required. For convenience we do not use NVIDIA's apex (installation is troublesome) and use torch.cuda.amp instead.
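
As a quick sanity check (a minimal sketch, assuming a CUDA build of PyTorch is installed; the version string shown is only an example):

import torch

# torch.cuda.amp ships with PyTorch 1.6+, so no separate apex install is needed
print(torch.__version__)              # e.g. '1.13.1' (example value)
print(hasattr(torch.cuda, "amp"))     # True on PyTorch 1.6+
print(torch.cuda.is_available())      # autocast/GradScaler target CUDA devices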

AMP (Automatic Mixed Precision): automatic mixed precision. So what exactly is automatic mixed precision?

A bit of history first: NVIDIA's apex came first, and NVIDIA's developers later contributed it to PyTorch, which became torch.cuda.amp in PyTorch 1.6+ [this is the author's own reconstruction and may be inaccurate; corrections are welcome in the comments].

In more detail: by default most deep learning frameworks train with 32-bit floating point. In 2017 NVIDIA developed a mixed-precision training method (apex) that combines single precision (FP32) with half precision (FP16) during training; with the same hyperparameters it reaches almost the same accuracy as pure FP32 while being noticeably faster.

Then came the AMP era (meaning torch.cuda.amp specifically). Two keywords matter here: automatic mixed precision (torch.cuda.amp in PyTorch 1.6+). "Automatic" means tensor dtypes change automatically: the framework adjusts each tensor's dtype as needed, although some places may still require manual intervention. "Mixed precision" means more than one tensor precision is used, namely torch.FloatTensor and torch.HalfTensor. And as the name torch.cuda.amp suggests, this feature only works on CUDA.
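
To make the "automatic" part concrete, here is a minimal sketch (the layer and tensor are illustrative and not part of the training scripts below; a CUDA device is assumed):

import torch
from torch import nn
from torch.cuda.amp import autocast

linear = nn.Linear(8, 4).cuda()       # parameters are created as torch.float32
x = torch.randn(2, 8, device="cuda")

with autocast():
    y = linear(x)                     # the matmul runs in half precision
    print(y.dtype)                    # torch.float16
print(linear.weight.dtype)            # torch.float32, the weights themselves are untouched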

Why should we use AMP (automatic mixed precision)?

1. Lower GPU memory usage (an FP16 advantage)

2. Faster training and inference (an FP16 advantage)

3. Tensor Cores are now widespread (NVIDIA Tensor Core) and favour low precision (an FP16 advantage)

4. Mixed-precision training mitigates the rounding-error problem (a weakness of FP16 that FP32 avoids)

5. Loss scaling: with mixed precision the model can even fail to converge, because small activation gradient values underflow; torch.cuda.amp.GradScaler scales the loss up to prevent this gradient underflow (a small illustration follows right after this list)
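
Below is a minimal sketch of the underflow that loss scaling guards against (the numbers are illustrative only, not taken from the experiments in this post):

import torch

grad = torch.tensor(1e-8)             # a tiny gradient value, fine in FP32
print(grad.half())                    # tensor(0., dtype=torch.float16): it underflows to zero

scale = 2.0 ** 16                     # GradScaler's default initial scale factor
print((grad * scale).half())          # about 6.55e-4 in float16, the value survives
# GradScaler later unscales the gradients by the same factor before optimizer.step()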

To be clear, the point of this post is how to speed up model training, not to explain the underlying theory. The examples use AlexNet as the architecture (it expects 227x227x3 input images), CIFAR10 as the dataset, AdamW as the optimizer, and ReduceLROnPlateau as the learning-rate scheduler. The machine is a Legion laptop with an RTX 2060: modest, but good enough for these tests.

This post covers three setups: 1. training and evaluation without DDP or DP (then with AMP added), 2. distributed DP training and evaluation (then with AMP added), 3. single-process multi-GPU DDP training and evaluation (then with AMP added).

File layout when running these programs:

D:/PycharmProject/Simple-CV-Pytorch-master
|
|----AMP (train_without.py, train_DP.py, train_autocast.py, train_GradScaler.py,
|        eval_XXX.py, etc.; the alexnet.py added later also lives here)
|
|----tensorboard (folder for the tensorboard logs)
|
|----checkpoint (folder for the saved models)
|
|----data (folder containing the dataset)

1. Training and evaluation code without DDP or DP

The experiment without DDP or DP serves as the control group for our comparisons.

(1) Training and evaluation source code of the original model:

Training source code:

Note: this code is extremely bare-bones, just a skeleton; as long as it gets the idea across, that is enough!

train_without.py

import time
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision.models import alexnet
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

The code is especially rough, particularly the device handling and the accuracy computation; it is for reference only, please do not imitate it!

eval_without.py

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()
# 1.Create model
model = alexnet()


# 2.Ready Dataset
if args.dataset == 'CIFAR10':
    test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(args.img_size),
                                                     transforms.ToTensor()]),
                                                download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
# 3.Length
test_dataset_size = len(test_dataset)
print("the test dataset size is {}".format(test_dataset_size))

# 4.DataLoader
test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)

# 5. Set some parameters for testing the network
total_accuracy = 0

# test
model.eval()
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        device = torch.device('cpu')
        imgs, targets = imgs.to(device), targets.to(device)
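        # NOTE: the checkpoint is loaded again on every batch here; loading it once before the loop would be enough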
        model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
        model.load_state_dict(model_load)
        outputs = model(imgs)
        outputs = outputs.to(device)
        accuracy = (outputs.argmax(1) == targets).sum()
        total_accuracy = total_accuracy + accuracy
        accuracy = total_accuracy / test_dataset_size
    print("the total accuracy is {}".format(accuracy))

Run results:

(screenshot omitted)

Analysis:

Training the original model for 20 epochs took 22 minutes 22 seconds and reached an accuracy of 0.8191.

(2) Training and evaluation source code of the original model with autocast added:

Training source code:

Rough outline of the training flow:

from torch.cuda.amp import autocast as autocast

...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD,Adm, Admw,...
optim = optim.XXX(model.parameters(),..)

...

for imgs,targets in dataloader:
    imgs,targets = imgs.cuda(),targets.cuda()

    ....
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs,targets)
   ...
    optim.zero_grad()
    loss.backward()
    optim.step()

...

train_autocast_without.py

import time
import torch
import torchvision
from torch import nn
from torch.cuda.amp import autocast
from torchvision import transforms
from torchvision.models import alexnet
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
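            # only the forward pass and the loss computation run under autocast;
            # backward() below stays outside the autocast context, as recommended for AMP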
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_without.py, the same as in 1.(1)

Run results:

(screenshot omitted)

Analysis:

The original model took 22 minutes 22 seconds for 20 epochs; with autocast added it takes 21 minutes 21 seconds, so training is faster, and the accuracy improves from 0.8191 to 0.8403.

(3) Training and evaluation source code of the original model with autocast and GradScaler:

torch.cuda.amp.GradScaler scales the loss value up to prevent gradient underflow.

Training source code:

Rough outline of the training flow:

from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
...

# Create model, default torch.FloatTensor
model = Net().cuda()

# SGD,Adm, Admw,...
optim = optim.XXX(model.parameters(),..)
scaler = GradScaler()

...

for imgs,targets in dataloader:
    imgs,targets = imgs.cuda(),targets.cuda()
    ...
    optim.zero_grad()
    ....
    with autocast():
        outputs = model(imgs)
        loss = loss_fn(outputs,targets)

    scaler.scale(loss).backward()
    scaler.step(optim)
    scaler.update()
...

train_GradScaler_without.py

import time
import torch
import torchvision
from torch import nn
from torch.cuda.amp import autocast, GradScaler
from torchvision import transforms
from torchvision.models import alexnet
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
scaler = GradScaler()
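# GradScaler: scale() multiplies the loss by the current scale factor before backward,
# step() unscales the gradients and skips the update if they contain inf/NaN,
# and update() adjusts the scale factor for the next iteration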
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        optim.zero_grad()
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        scaler.scale(loss_train).backward()
        scaler.step(optim)
        scaler.update()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_without.py, the same as in 1.(1)

Run results:

(screenshot omitted)

Analysis:

Why did training 20 epochs now take 27 minutes 27 seconds, more than the original model without any AMP (22 minutes 22 seconds)?

This is because the loss scaling done by GradScaler adds overhead that slows training down; another possible reason is that the author's GPU is simply too small for the acceleration to show.

2. Distributed DP training and evaluation code

(1) Training and evaluation source code of the original model with DP:

Training source code:

train_DP.py

import time
import torch
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision.models import alexnet
from torchvision import transforms
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_DP.py

import torch
import torchvision
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()
# 1.Create model
model = alexnet()
model = torch.nn.DataParallel(model)

# 2.Ready Dataset
if args.dataset == 'CIFAR10':
    test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                transform=transforms.Compose(
                                                    [transforms.Resize(args.img_size),
                                                     transforms.ToTensor()]),
                                                download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
# 3.Length
test_dataset_size = len(test_dataset)
print("the test dataset size is {}".format(test_dataset_size))

# 4.DataLoader
test_dataloader = DataLoader(dataset=test_dataset, batch_size=args.batch_size)

# 5. Set some parameters for testing the network
total_accuracy = 0

# test
model.eval()
with torch.no_grad():
    for data in test_dataloader:
        imgs, targets = data
        device = torch.device('cpu')
        imgs, targets = imgs.to(device), targets.to(device)
        model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
        model.load_state_dict(model_load)
        outputs = model(imgs)
        outputs = outputs.to(device)
        accuracy = (outputs.argmax(1) == targets).sum()
        total_accuracy = total_accuracy + accuracy
        accuracy = total_accuracy / test_dataset_size
    print("the total accuracy is {}".format(accuracy))

Run results:

(screenshot omitted)

(2) Training and evaluation source code of DP with autocast:

Training source code:

If you write the code like this, autocast will have no effect!!! DataParallel runs each replica's forward pass in a side thread, and autocast state is thread-local, so an autocast context opened in the main thread does not reach those replicas.

...
    model = Model()
    model = torch.nn.DataParallel(model)
    ...
    with autocast():
        output = model(imgs)
        loss = loss_fn(output)

The correct way; rough outline of the training flow:

# Option 1
class Model(nn.Module):
    @autocast()
    def forward(self, input):
        ...

# Option 2
class Model(nn.Module):
    def forward(self, input):
        with autocast():
            ...

Either option 1 or option 2 works; after that:

...
model = Model()
model = torch.nn.DataParallel(model)
with autocast():
    output = model(imgs)
    loss = loss_fn(output)
...

Model:

You must add @autocast() on the forward function, or put with autocast(): at the very top of forward:

alexnet.py

import torch
import torch.nn as nn
from torchvision.models.utils import load_state_dict_from_url
from torch.cuda.amp import autocast
from typing import Any

__all__ = ['AlexNet', 'alexnet']

model_urls = {
    'alexnet': 'https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth',
}


class AlexNet(nn.Module):

    def __init__(self, num_classes: int = 1000) -> None:
        super(AlexNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    @autocast()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x


def alexnet(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> AlexNet:
    r"""AlexNet model architecture from the
    `"One weird trick..." <https://arxiv.org/abs/1404.5997>`_ paper.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
        progress (bool): If True, displays a progress bar of the download to stderr
    """
    model = AlexNet(**kwargs)
    if pretrained:
        state_dict = load_state_dict_from_url(model_urls["alexnet"],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model

train_DP_autocast.py (imports our own alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.cuda.amp import autocast as autocast
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
        loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        optim.zero_grad()
        loss_train.backward()
        optim.step()
        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_DP.py, the same as in 2.(1) except that it imports our own alexnet.py

Run results:

(screenshot omitted)

Analysis:

DP with autocast takes 21 minutes 21 seconds for 20 epochs, 1 minute 1 second faster than DP without it (22 minutes 22 seconds).

DP without AMP reached an accuracy of 0.8216, which now drops to 0.8188, so automatic mixed precision does cost the model some accuracy; later you can increase batch_size so that the run time matches the old one while the accuracy climbs back, reducing this effect.

(3) Training and evaluation source code of DP with autocast and GradScaler:

Training source code:

train_DP_GradScaler.py (imports our own alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()

# 1.Create SummaryWriter
if args.tensorboard:
    writer = SummaryWriter(args.tensorboard_log)

# 2.Ready dataset
if args.dataset == 'CIFAR10':
    train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
        [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
else:
    raise ValueError("Dataset is not CIFAR10")
cuda = torch.cuda.is_available()
print('CUDA available: {}'.format(cuda))

# 3.Length
train_dataset_size = len(train_dataset)
print("the train dataset size is {}".format(train_dataset_size))

# 4.DataLoader
train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size)

# 5.Create model
model = alexnet()

if args.cuda == cuda:
    model = model.cuda()
    model = torch.nn.DataParallel(model).cuda()
else:
    model = torch.nn.DataParallel(model)

# 6.Create loss
cross_entropy_loss = nn.CrossEntropyLoss()

# 7.Optimizer
optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
scaler = GradScaler()
# 8. Set some parameters to control loop
# epoch
iter = 0
t0 = time.time()
for epoch in range(args.epochs):
    t1 = time.time()
    print(" -----------------the {} number of training epoch --------------".format(epoch))
    model.train()
    for data in train_dataloader:
        loss = 0
        imgs, targets = data
        optim.zero_grad()
        if args.cuda == cuda:
            cross_entropy_loss = cross_entropy_loss.cuda()
            imgs, targets = imgs.cuda(), targets.cuda()
        with autocast():
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
        if args.tensorboard:
            writer.add_scalar("train_loss", loss_train.item(), iter)

        scaler.scale(loss_train).backward()
        scaler.step(optim)
        scaler.update()

        iter = iter + 1
        if iter % 100 == 0:
            print(
                "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                    .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                            np.mean(loss)))
    if args.tensorboard:
        writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
    scheduler.step(np.mean(loss))
    t2 = time.time()
    h = (t2 - t1) // 3600
    m = ((t2 - t1) % 3600) // 60
    s = ((t2 - t1) % 3600) % 60
    print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

    if epoch % 1 == 0:
        print("Save state, iter: {} ".format(epoch))
        torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
t3 = time.time()
h_t = (t3 - t0) // 3600
m_t = ((t3 - t0) % 3600) // 60
s_t = ((t3 - t0) % 3600) % 60
print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
if args.tensorboard:
    writer.close()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_DP.py, the same as in 2.(1) except that it imports our own alexnet.py

Run results:

(screenshot omitted)

Analysis:

As before, the loss scaling done by GradScaler slowed DP training down.

With autocast and GradScaler, DP now reaches an accuracy of 0.8409, up from 0.8188 with autocast alone, and also clearly above DP without AMP (0.8216).

3. Single-process multi-GPU DDP training and evaluation code

(1) Training and evaluation source code of the original model with DDP:

Training source code:

train_DDP.py

import time
import torch
from torchvision.models.alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)

    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            outputs = model(imgs)
            loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            optim.zero_grad()
            loss_train.backward()
            optim.step()
            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    train()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_DDP.py

import torch
import torchvision
import torch.distributed as dist
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
# from alexnet import alexnet
from torchvision.models.alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def eval():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create model
    model = alexnet()
    model = torch.nn.parallel.DistributedDataParallel(model)

    # 2.Ready Dataset
    if args.dataset == 'CIFAR10':
        test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                    transform=transforms.Compose(
                                                        [transforms.Resize(args.img_size),
                                                         transforms.ToTensor()]),
                                                    download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    # 3.Length
    test_dataset_size = len(test_dataset)
    print("the test dataset size is {}".format(test_dataset_size))
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)

    # 4.DataLoader
    test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,
                                 num_workers=2,
                                 pin_memory=True)

    # 5. Set some parameters for testing the network
    total_accuracy = 0

    # test
    model.eval()
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            device = torch.device('cpu')
            imgs, targets = imgs.to(device), targets.to(device)
            model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
            model.load_state_dict(model_load)
            outputs = model(imgs)
            outputs = outputs.to(device)
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
            accuracy = total_accuracy / test_dataset_size
        print("the total accuracy is {}".format(accuracy))


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    eval()

Run results:

(screenshot omitted)

(2) Training and evaluation source code of DDP with autocast:

Training source code:

train_DDP_autocast.py (imports our own alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.cuda.amp import autocast as autocast
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)

    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            with autocast():
                outputs = model(imgs)
                loss_train = cross_entropy_loss(outputs, targets)
            loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            optim.zero_grad()
            loss_train.backward()
            optim.step()
            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    train()

Run results:

(screenshots omitted)

TensorBoard view:

(screenshots omitted)

Evaluation source code:

eval_DDP.py (imports our own alexnet.py)

import torch
import torchvision
import torch.distributed as dist
from torch.utils.data import DataLoader
from torchvision.transforms import transforms
from alexnet import alexnet
# from torchvision.models.alexnet import alexnet
import argparse


# eval
def parse_args():
    parser = argparse.ArgumentParser(description='CV Evaluation')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def eval():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create model
    model = alexnet()
    model = torch.nn.parallel.DistributedDataParallel(model)

    # 2.Ready Dataset
    if args.dataset == 'CIFAR10':
        test_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=False,
                                                    transform=transforms.Compose(
                                                        [transforms.Resize(args.img_size),
                                                         transforms.ToTensor()]),
                                                    download=True)

    else:
        raise ValueError("Dataset is not CIFAR10")

    # 3.Length
    test_dataset_size = len(test_dataset)
    print("the test dataset size is {}".format(test_dataset_size))
    test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset)

    # 4.DataLoader
    test_dataloader = DataLoader(dataset=test_dataset, sampler=test_sampler, batch_size=args.batch_size,
                                 num_workers=2,
                                 pin_memory=True)

    # 5. Set some parameters for testing the network
    total_accuracy = 0

    # test
    model.eval()
    with torch.no_grad():
        for data in test_dataloader:
            imgs, targets = data
            device = torch.device('cpu')
            imgs, targets = imgs.to(device), targets.to(device)
            model_load = torch.load("{}/AlexNet.pth".format(args.checkpoint), map_location=device)
            model.load_state_dict(model_load)
            outputs = model(imgs)
            outputs = outputs.to(device)
            accuracy = (outputs.argmax(1) == targets).sum()
            total_accuracy = total_accuracy + accuracy
            accuracy = total_accuracy / test_dataset_size
        print("the total accuracy is {}".format(accuracy))


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: {}".format(local_size))
    eval()

Run results:

(screenshot omitted)

Analysis:

DDP without AMP took 21 minutes 21 seconds; DDP with autocast takes 20 minutes 20 seconds, so the speed improved.

DDP without AMP reached an accuracy of 0.8224; with autocast the accuracy drops to 0.8162.

(3) Training and evaluation source code of DDP with autocast and GradScaler

Training source code:

train_DDP_GradScaler.py (imports our own alexnet.py)

import time
import torch
from alexnet import alexnet
import torchvision
from torch import nn
import torch.distributed as dist
from torchvision import transforms
from torch.utils.data import DataLoader
from torch.cuda.amp import autocast as autocast
from torch.cuda.amp import GradScaler as GradScaler
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description='CV Train')
    parser.add_mutually_exclusive_group()
    parser.add_argument("--rank", type=int, default=0)
    parser.add_argument("--world_size", type=int, default=1)
    parser.add_argument("--master_addr", type=str, default="127.0.0.1")
    parser.add_argument("--master_port", type=str, default="12355")
    parser.add_argument('--dataset', type=str, default='CIFAR10', help='CIFAR10')
    parser.add_argument('--dataset_root', type=str, default='../data', help='Dataset root directory path')
    parser.add_argument('--img_size', type=int, default=227, help='image size')
    parser.add_argument('--tensorboard', type=str, default=True, help='Use tensorboard for loss visualization')
    parser.add_argument('--tensorboard_log', type=str, default='../tensorboard', help='tensorboard folder')
    parser.add_argument('--cuda', type=str, default=True, help='if is cuda available')
    parser.add_argument('--batch_size', type=int, default=64, help='batch size')
    parser.add_argument('--lr', type=float, default=1e-4, help='learning rate')
    parser.add_argument('--epochs', type=int, default=20, help='Number of epochs to train.')
    parser.add_argument('--checkpoint', type=str, default='../checkpoint', help='Save .pth fold')
    return parser.parse_args()


args = parse_args()


def train():
    dist.init_process_group("gloo", init_method="tcp://{}:{}".format(args.master_addr, args.master_port),
                            rank=args.rank,
                            world_size=args.world_size)
    # 1.Create SummaryWriter
    if args.tensorboard:
        writer = SummaryWriter(args.tensorboard_log)

    # 2.Ready dataset
    if args.dataset == 'CIFAR10':
        train_dataset = torchvision.datasets.CIFAR10(root=args.dataset_root, train=True, transform=transforms.Compose(
            [transforms.Resize(args.img_size), transforms.ToTensor()]), download=True)
    else:
        raise ValueError("Dataset is not CIFAR10")

    cuda = torch.cuda.is_available()
    print('CUDA available: {}'.format(cuda))

    # 3.Length
    train_dataset_size = len(train_dataset)
    print("the train dataset size is {}".format(train_dataset_size))

    train_sampler = torch.utils.data.distributed.DistributedSampler(train_dataset)
    # 4.DataLoader
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=args.batch_size, sampler=train_sampler,
                                  num_workers=2,
                                  pin_memory=True)

    # 5.Create model
    model = alexnet()

    if args.cuda == cuda:
        model = model.cuda()
        model = torch.nn.parallel.DistributedDataParallel(model).cuda()
    else:
        model = torch.nn.parallel.DistributedDataParallel(model)

    # 6.Create loss
    cross_entropy_loss = nn.CrossEntropyLoss()

    # 7.Optimizer
    optim = torch.optim.AdamW(model.parameters(), lr=args.lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, patience=3, verbose=True)
    scaler = GradScaler()
    # 8. Set some parameters to control loop
    # epoch
    iter = 0
    t0 = time.time()
    for epoch in range(args.epochs):
        t1 = time.time()
        print(" -----------------the {} number of training epoch --------------".format(epoch))
        model.train()
        for data in train_dataloader:
            loss = 0
            imgs, targets = data
            optim.zero_grad()
            if args.cuda == cuda:
                cross_entropy_loss = cross_entropy_loss.cuda()
                imgs, targets = imgs.cuda(), targets.cuda()
            with autocast():
                outputs = model(imgs)
                loss_train = cross_entropy_loss(outputs, targets)
                loss = loss_train.item() + loss
            if args.tensorboard:
                writer.add_scalar("train_loss", loss_train.item(), iter)

            scaler.scale(loss_train).backward()
            scaler.step(optim)
            scaler.update()

            iter = iter + 1
            if iter % 100 == 0:
                print(
                    "Epoch: {} | Iteration: {} | lr: {} | loss: {} | np.mean(loss): {} "
                        .format(epoch, iter, optim.param_groups[0]['lr'], loss_train.item(),
                                np.mean(loss)))
        if args.tensorboard:
            writer.add_scalar("lr", optim.param_groups[0]['lr'], epoch)
        scheduler.step(np.mean(loss))
        t2 = time.time()
        h = (t2 - t1) // 3600
        m = ((t2 - t1) % 3600) // 60
        s = ((t2 - t1) % 3600) % 60
        print("epoch {} is finished, and time is {}h{}m{}s".format(epoch, int(h), int(m), int(s)))

        if epoch % 1 == 0:
            print("Save state, iter: {} ".format(epoch))
            torch.save(model.state_dict(), "{}/AlexNet_{}.pth".format(args.checkpoint, epoch))

    torch.save(model.state_dict(), "{}/AlexNet.pth".format(args.checkpoint))
    t3 = time.time()
    h_t = (t3 - t0) // 3600
    m_t = ((t3 - t0) % 3600) // 60
    s_t = ((t3 - t0) % 3600) % 60
    print("The finished time is {}h{}m{}s".format(int(h_t), int(m_t), int(s_t)))
    if args.tensorboard:
        writer.close()


if __name__ == "__main__":
    local_size = torch.cuda.device_count()
    print("local_size: ".format(local_size))
    train()

Run results:

(screenshots of the training console output)

Tensorboard observation:

(screenshots of the train_loss and lr curves)

Evaluation source:

eval_DDP.py is the same as in 3.(2) above (it imports our own alexnet.py) and loads the checkpoint saved by the training script.
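One practical detail worth noting about that checkpoint: torch.save(model.state_dict()) in the training script saves the DDP-wrapped model, so every key carries a "module." prefix. eval_DDP.py wraps the model in DistributedDataParallel again before load_state_dict, which is why it loads cleanly. If you instead want to evaluate with a plain, un-wrapped model, a minimal sketch like the following should work (the checkpoint path matches the training script above; everything else is an assumption):

import torch
from alexnet import alexnet

model = alexnet()
state_dict = torch.load("../checkpoint/AlexNet.pth", map_location="cpu")
# strip the "module." prefix added by DistributedDataParallel
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
model.load_state_dict(state_dict)
model.eval()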

Run results:

(screenshot of the evaluation output)

Analysis:

Everything runs, and it is noticeably faster than DDP without amp (20m20s versus 21m21s). DDP without amp reached an accuracy of 0.8224; with autocast plus GradScaler the accuracy rises to 0.8252, so both speed and accuracy improved.
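To close, here is the recipe distilled into a minimal sketch (assuming model, optim, cross_entropy_loss and train_dataloader are set up as in the listing above; only the autocast- and scaler-related lines differ from an ordinary FP32 training step):

scaler = GradScaler()                      # create once, before the training loop
for imgs, targets in train_dataloader:
    optim.zero_grad()
    with autocast():                       # forward pass and loss in mixed precision
        outputs = model(imgs)
        loss_train = cross_entropy_loss(outputs, targets)
    scaler.scale(loss_train).backward()    # backward on the scaled loss
    scaler.step(optim)                     # unscales the gradients, skips the step on inf/nan
    scaler.update()                        # adjusts the scale factor for the next iteration

# Optional check (not part of the scripts above): print the peak GPU memory after a few
# iterations to confirm the FP16 savings.
# print("peak memory: {:.1f} MB".format(torch.cuda.max_memory_allocated() / 1024 ** 2))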
