
PyTorch Tutorial 13.6: Concise Implementation for Multiple GPUs


為每個(gè)新模型從頭開始實(shí)施并行性并不好玩。此外,優(yōu)化同步工具以獲得高性能有很大的好處。在下文中,我們將展示如何使用深度學(xué)習(xí)框架的高級(jí) API 來執(zhí)行此操作。數(shù)學(xué)和算法與第 13.5 節(jié)中的相同毫不奇怪,您至少需要兩個(gè) GPU 才能運(yùn)行本節(jié)的代碼。

# PyTorch implementation
import torch
from torch import nn
from d2l import torch as d2l

# MXNet (Gluon) alternative implementation
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()
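As a quick sanity check, the line below simply confirms that at least two GPUs are visible to PyTorch before running the rest of this section; it is a minimal sketch assuming the PyTorch imports above.

# Sanity check: the multi-GPU examples in this section need at least two devices
assert torch.cuda.device_count() >= 2, 'this section requires at least two GPUs'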

13.6.1. A Toy Network

讓我們使用一個(gè)比13.5 節(jié)中的 LeNet 更有意義的網(wǎng)絡(luò) ,它仍然足夠容易和快速訓(xùn)練。我們選擇了一個(gè) ResNet-18 變體He et al. , 2016。由于輸入圖像很小,我們對(duì)其進(jìn)行了輕微修改。特別地,與第 8.6 節(jié)的不同之處在于,我們?cè)陂_始時(shí)使用了更小的卷積核、步長和填充。此外,我們刪除了最大池化層。

# PyTorch implementation
#@save
def resnet18(num_classes, in_channels=1):
  """A slightly modified ResNet-18 model."""
  def resnet_block(in_channels, out_channels, num_residuals,
           first_block=False):
    blk = []
    for i in range(num_residuals):
      if i == 0 and not first_block:
        blk.append(d2l.Residual(out_channels, use_1x1conv=True,
                    strides=2))
      else:
        blk.append(d2l.Residual(out_channels))
    return nn.Sequential(*blk)

  # This model uses a smaller convolution kernel, stride, and padding and
  # removes the max-pooling layer
  net = nn.Sequential(
    nn.Conv2d(in_channels, 64, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU())
  net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
  net.add_module("resnet_block2", resnet_block(64, 128, 2))
  net.add_module("resnet_block3", resnet_block(128, 256, 2))
  net.add_module("resnet_block4", resnet_block(256, 512, 2))
  net.add_module("global_avg_pool", nn.AdaptiveAvgPool2d((1,1)))
  net.add_module("fc", nn.Sequential(nn.Flatten(),
                    nn.Linear(512, num_classes)))
  return net

# MXNet (Gluon) alternative implementation
#@save
def resnet18(num_classes):
  """A slightly modified ResNet-18 model."""
  def resnet_block(num_channels, num_residuals, first_block=False):
    blk = nn.Sequential()
    for i in range(num_residuals):
      if i == 0 and not first_block:
        blk.add(d2l.Residual(
          num_channels, use_1x1conv=True, strides=2))
      else:
        blk.add(d2l.Residual(num_channels))
    return blk

  net = nn.Sequential()
  # This model uses a smaller convolution kernel, stride, and padding and
  # removes the max-pooling layer
  net.add(nn.Conv2D(64, kernel_size=3, strides=1, padding=1),
      nn.BatchNorm(), nn.Activation('relu'))
  net.add(resnet_block(64, 2, first_block=True),
      resnet_block(128, 2),
      resnet_block(256, 2),
      resnet_block(512, 2))
  net.add(nn.GlobalAvgPool2D(), nn.Dense(num_classes))
  return net
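
As a small illustrative check, we can instantiate the PyTorch variant defined above and verify that a minibatch of Fashion-MNIST-sized inputs yields one logit per class; this sketch assumes the d2l Residual block from Section 8.6 is importable.

# Illustrative check of the PyTorch resnet18 defined above
net = resnet18(10)
X = torch.rand(4, 1, 28, 28)   # a minibatch of four 28x28 grayscale images
print(net(X).shape)            # expected: torch.Size([4, 10])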

13.6.2. Network Initialization

我們將在訓(xùn)練循環(huán)內(nèi)初始化網(wǎng)絡(luò)。有關(guān)初始化方法的復(fù)習(xí),請(qǐng)參閱第 5.4 節(jié)。

net = resnet18(10)
# Get a list of GPUs
devices = d2l.try_all_gpus()
# We will initialize the network inside the training loop
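
On the PyTorch side, parameters do not have to be initialized separately on every device; a minimal sketch is to place the model on the first GPU and let nn.DataParallel replicate it to the remaining devices at forward time (the name parallel_net below is introduced purely for illustration).

# Sketch: place the model on the first GPU; nn.DataParallel broadcasts the
# parameters to the remaining devices whenever the forward pass runs.
net = net.to(devices[0])
parallel_net = nn.DataParallel(net, device_ids=devices)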

In the MXNet implementation, the initialize function allows us to initialize parameters on a device of our choice. For a refresher on initialization methods see Section 5.4. What is particularly convenient is that it also allows us to initialize the network on multiple devices simultaneously. Let's see how this works in practice.

net = resnet18(10)
# Get a list of GPUs
devices = d2l.try_all_gpus()
# Initialize all the parameters of the network
net.initialize(init=init.Normal(sigma=0.01), ctx=devices)

Using the split_and_load function introduced in Section 13.5 we can divide a minibatch of data and copy portions to the list of devices provided by the devices variable. The network instance automatically uses the appropriate GPU to compute the value of the forward propagation. Here we generate 4 observations and split them over the GPUs.

x = np.random.uniform(size=(4, 1, 28, 28))
x_shards = gluon.utils.split_and_load(x, devices)
net(x_shards[0]), net(x_shards[1])
[08:00:43] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
(array([[ 2.2610207e-06, 2.2045981e-06, -5.4046786e-06, 1.2869955e-06,
     5.1373163e-06, -3.8297967e-06, 1.4339059e-07, 5.4683451e-06,
     -2.8279192e-06, -3.9651104e-06],
    [ 2.0698672e-06, 2.0084667e-06, -5.6382510e-06, 1.0498458e-06,
     5.5506434e-06, -4.1065491e-06, 6.0830087e-07, 5.4521784e-06,
     -3.7365021e-06, -4.1891640e-06]], ctx=gpu(0)),
 array([[ 2.4629783e-06, 2.6015525e-06, -5.4362617e-06, 1.2938218e-06,
     5.6387889e-06, -4.1360108e-06, 3.5758853e-07, 5.5125256e-06,
     -3.1957325e-06, -4.2976326e-06],
    [ 1.9431673e-06, 2.2600434e-06, -5.2698201e-06, 1.4807417e-06,
     5.4830934e-06, -3.9678889e-06, 7.5751018e-08, 5.6764356e-06,
     -3.2530229e-06, -4.0943951e-06]], ctx=gpu(1)))

Once data passes through the network, the corresponding parameters are initialized on the device the data passed through. This means that initialization happens on a per-device basis. Since we picked GPU 0 and GPU 1 for initialization, the network is initialized only there, and not on the CPU. In fact, the parameters do not even exist on the CPU. We can verify this by printing out the parameters and observing any errors that might arise.

weight = net[0].params.get('weight')

try:
  weight.data()
except RuntimeError:
  print('not initialized on cpu')
weight.data(devices[0])[0], weight.data(devices[1])[0]
not initialized on cpu
(array([[[ 0.01382882, -0.01183044, 0.01417865],
     [-0.00319718, 0.00439528, 0.02562625],
     [-0.00835081, 0.01387452, -0.01035946]]], ctx=gpu(0)),
 array([[[ 0.01382882, -0.01183044, 0.01417865],
     [-0.00319718, 0.00439528, 0.02562625],
     [-0.00835081, 0.01387452, -0.01035946]]], ctx=gpu(1)))
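
For comparison, a PyTorch model reports its parameter placement directly; the sketch below (using the PyTorch resnet18 defined earlier, and assuming a GPU named cuda:0 is present) moves the network to the first GPU and prints where its first parameter lives.

# Hypothetical PyTorch analogue of the check above
net_torch = resnet18(10).to(torch.device('cuda:0'))
print(next(net_torch.parameters()).device)   # prints: cuda:0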

Next, let's replace the code for evaluating accuracy with one that works in parallel across multiple devices. This serves as a replacement of the evaluate_accuracy_gpu function from Section 7.6. The main difference is that we split a minibatch before invoking the network. All else is essentially identical.

#@save
def evaluate_accuracy_gpus(net, data_iter, split_f=d2l.split_batch):
  """Compute the accuracy for a model on a dataset using multiple GPUs."""
  # Query the list of devices
  devices = list(net.collect_params().values())[0].list_ctx()
  # No. of correct predictions, no. of predictions
  metric = d2l.Accumulator(2)
  for features, labels in data_iter:
    X_shards, y_shards = split_f(features, labels, devices)
    # Run in parallel
    pred_shards = [net(X_shard) for X_shard in X_shards]
    metric.add(sum(float(d2l.accuracy(pred_shard, y_shard)) for
            pred_shard, y_shard in zip(
              pred_shards, y_shards)), labels.size)
  return metric[0] / metric[1]
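
In the PyTorch version there is no separate multi-GPU accuracy helper, since a model wrapped in nn.DataParallel already scatters each evaluation minibatch over its devices. Below is a minimal sketch of such an evaluation loop, assuming the d2l torch helpers Accumulator and accuracy; the function name evaluate_accuracy_parallel is introduced here purely for illustration.

def evaluate_accuracy_parallel(net, data_iter, device):
  """Sketch: accuracy of a (possibly DataParallel-wrapped) model."""
  net.eval()
  metric = d2l.Accumulator(2)  # no. of correct predictions, no. of predictions
  with torch.no_grad():
    for X, y in data_iter:
      # DataParallel splits the minibatch over its device_ids internally
      X, y = X.to(device), y.to(device)
      metric.add(d2l.accuracy(net(X), y), y.numel())
  return metric[0] / metric[1]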

13.6.3. Training

As before, the training code needs to perform several basic functions for efficient parallelism:

  • Network parameters need to be initialized across all devices.

  • While iterating over the dataset, minibatches are to be divided across all devices.

  • We compute the loss and its gradient in parallel across the devices.

  • Gradients are aggregated and parameters are updated accordingly.

In the end we compute the accuracy (again in parallel) to report the final performance of the network. The training routine is quite similar to implementations in previous chapters, except that we need to split and aggregate data.

def train(net, num_gpus, batch_size, lr):
  train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
  devices = [d2l.try_gpu(i) for i in range(num_gpus)]
  def init_weights(module):
    if type(module) in [nn.Linear, nn.Conv2d]:
      nn.init.normal_(module.weight, std=0.01)
  net.apply(init_weights)
  # Set the model on multiple GPUs
  net = nn.DataParallel(net, device_ids=devices)
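  # The remainder of the routine follows the standard training loop from
  # earlier chapters; this continuation is a reconstruction sketch, assuming
  # the d2l helpers Timer, Animator, and evaluate_accuracy_gpu are available.
  trainer = torch.optim.SGD(net.parameters(), lr)
  loss = nn.CrossEntropyLoss()
  timer, num_epochs = d2l.Timer(), 10
  animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])
  for epoch in range(num_epochs):
    net.train()
    timer.start()
    for X, y in train_iter:
      trainer.zero_grad()
      X, y = X.to(devices[0]), y.to(devices[0])
      l = loss(net(X), y)
      l.backward()
      trainer.step()
    timer.stop()
    animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(net, test_iter),))
  print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '
      f'on {str(devices)}')

With two GPUs in place, training could then be launched as, for example, train(net, num_gpus=2, batch_size=512, lr=0.2).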
