PyTorch: Two-Layer Neural Networks


Table of Contents

PyTorch: optim
PyTorch: Custom nn Modules

PyTorch: optim

This time, instead of updating the model's weights by hand, we use the optim package to update the parameters for us. optim provides a variety of model optimization methods, including SGD+momentum, RMSProp, and Adam.

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10

# Randomly create some data; N is the number of training examples
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H, bias=False),  # first layer: w_1 * x (no bias term, since bias=False)
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out, bias=False),
)

# If the model's results are poor, try changing the initialization:
# torch.nn.init.normal_(model[0].weight)
# torch.nn.init.normal_(model[2].weight)

loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x)

    # Compute loss; this builds the computational graph
    loss = loss_fn(y_pred, y)
    print(it, loss.item())

    # Zero out gradients accumulated from the previous iteration
    optimizer.zero_grad()

    # Backward pass: compute the gradients
    loss.backward()

    # Update the weights of w1 and w2
    optimizer.step()
```
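Since all optimizers share the same constructor pattern, trying the other methods mentioned above is a one-line change. A minimal sketch; the lr and momentum values below are illustrative defaults, not tuned for this model:

```python
# Drop-in replacements for the Adam optimizer above
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
```

The rest of the training loop stays exactly the same, since zero_grad() and step() are part of the common optimizer interface.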

PyTorch: Custom nn Modules

We can define a model as a class that inherits from nn.Module. Whenever we need a model more complex than what Sequential can express, we define our own nn.Module subclass.

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10

# Randomly create some data; N is the number of training examples
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        # Define the model architecture
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H, bias=False)
        self.linear2 = torch.nn.Linear(H, D_out, bias=False)

    def forward(self, x):
        # clamp(min=0) applies ReLU to the hidden layer
        y_pred = self.linear2(self.linear1(x).clamp(min=0))
        return y_pred

model = TwoLayerNet(D_in, H, D_out)
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for it in range(500):
    # Forward pass
    y_pred = model(x)

    # Compute loss; this builds the computational graph
    loss = loss_fn(y_pred, y)
    print(it, loss.item())

    # Zero out gradients accumulated from the previous iteration
    optimizer.zero_grad()

    # Backward pass: compute the gradients
    loss.backward()

    # Update the weights of w1 and w2
    optimizer.step()
```
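To illustrate the kind of flexibility Sequential lacks, here is a hypothetical variant (not from the original post) whose forward pass sums two parallel branches, a dataflow that cannot be written as a plain Sequential:

```python
class TwoBranchNet(torch.nn.Module):
    # Hypothetical example: two parallel hidden branches whose outputs
    # are summed -- a branching dataflow Sequential cannot express.
    def __init__(self, D_in, H, D_out):
        super(TwoBranchNet, self).__init__()
        self.branch_a = torch.nn.Linear(D_in, H, bias=False)
        self.branch_b = torch.nn.Linear(D_in, H, bias=False)
        self.out = torch.nn.Linear(H, D_out, bias=False)

    def forward(self, x):
        # Each branch gets its own ReLU; the results are summed
        h = self.branch_a(x).clamp(min=0) + self.branch_b(x).clamp(min=0)
        return self.out(h)

model = TwoBranchNet(D_in, H, D_out)  # drop-in for the training loop above
```

Because forward() is ordinary Python, any control flow or wiring we can write in code becomes a valid model, while optimizer setup and the training loop are unchanged.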