Preface

This assignment requires completing the following:

  • Implement the SVM loss function, fully vectorized
  • Implement the corresponding gradient computation, also vectorized
  • Verify the analytic gradient with a numerical gradient check
  • Use a validation set to choose a good learning rate and regularization strength
  • Optimize the loss with SGD
  • Visualize the final learned weights

Implementation

Computing the SVM loss and gradient with for loops

Here W is the weight matrix with shape (D, C); X is the input data (a minibatch) with shape (N, D); y holds the labels for X, with shape (N,); reg is the regularization strength.
The function must return the loss as a float and the gradient of the loss with respect to W (an array of the same shape as W).
The SVM (hinge) loss for a single sample x_i is:

L_i = Σ_{j ≠ y_i} max(0, s_j − s_{y_i} + Δ), with the margin Δ = 1

where s_j is the j-th entry of the score vector s = f(x_i; W) (shape (C,)), i.e. the score assigned to class j, and s_{y_i} is the score of the correct class.
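
As a quick numeric example of the formula (made-up scores, not from the assignment data): suppose a sample has scores s = [3.2, 5.1, -1.7] and its correct class is y_i = 0. Then L_i = max(0, 5.1 − 3.2 + 1) + max(0, −1.7 − 3.2 + 1) = 2.9 + 0 = 2.9. The same computation in NumPy:

import numpy as np

s = np.array([3.2, 5.1, -1.7])  # made-up scores for one sample
y_i = 0                         # assume class 0 is the correct class
margins = np.maximum(0, s - s[y_i] + 1.0)
margins[y_i] = 0                # the correct class does not contribute
print(margins.sum())            # ~2.9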

Implementing svm_loss_naive(W, X, y, reg)

import numpy as np


def svm_loss_naive(W, X, y, reg):
  """
  Structured SVM loss function, naive implementation (with loops).

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  dW = np.zeros(W.shape) # initialize the gradient as zero

  # compute the loss and the gradient
  num_classes = W.shape[1]
  num_train = X.shape[0]
  loss = 0.0
  for i in range(num_train):
    scores = X[i].dot(W) # score vector s = x_i · W, shape (C,)
    correct_class_score = scores[y[i]] # s_{y_i}, score of the correct class
    ds_w = np.repeat(X[i], num_classes).reshape(-1, num_classes) # ds/dW: column j of this (D, C) matrix is x_i, since s_j = x_i · W[:, j]
    dm_s = np.zeros(W.shape)
    for j in range(num_classes):
      if j == y[i]:
        continue
      margin = scores[j] - correct_class_score + 1 # note delta = 1
      if margin > 0:
        dm_s[:, j] = 1      # d(margin)/ds: +1 for the violating class j ...
        dm_s[:, y[i]] -= 1  # ... and -1 for the correct class
        loss += margin
    dW_i = ds_w * dm_s # chain rule: gradient contribution of sample i
    dW += dW_i # accumulate into the total gradient

  # Right now the loss is a sum over all training examples, but we want it
  # to be an average instead so we divide by num_train.
  loss /= num_train
  dW /= num_train 

  # Add regularization to the loss.
  loss += reg * np.sum(W * W)  # add the L2 regularization term
  dW += 2 * reg * W  # gradient of the regularization term (note the factor of reg)


  return loss, dW
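
The analytic gradient above should be verified numerically. The assignment provides a gradient-check utility for this; the snippet below is only a minimal stand-alone sketch of the same idea (central differences at a few random coordinates of W), using made-up data shapes:

import numpy as np

np.random.seed(0)
W = np.random.randn(3073, 10) * 0.0001  # made-up CIFAR-10-like shapes: D = 3073, C = 10
X = np.random.randn(50, 3073)
y = np.random.randint(10, size=50)

loss, grad = svm_loss_naive(W, X, y, reg=0.0)

h = 1e-5
for _ in range(10):
  ix = tuple(np.random.randint(d) for d in W.shape)  # pick a random coordinate of W
  W[ix] += h
  loss_plus, _ = svm_loss_naive(W, X, y, reg=0.0)
  W[ix] -= 2 * h
  loss_minus, _ = svm_loss_naive(W, X, y, reg=0.0)
  W[ix] += h                                         # restore W
  numeric = (loss_plus - loss_minus) / (2 * h)       # central difference
  rel_err = abs(numeric - grad[ix]) / (abs(numeric) + abs(grad[ix]))
  print('numerical: %f analytic: %f, relative error: %e' % (numeric, grad[ix], rel_err))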

Implementing svm_loss_vectorized(W, X, y, reg)

def svm_loss_vectorized(W, X, y, reg):
  """
  Structured SVM loss function, vectorized implementation.

  Inputs and outputs are the same as svm_loss_naive.
  """
  loss = 0.0
  dW = np.zeros(W.shape) # initialize the gradient as zero

  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the structured SVM loss, storing the    #
  # result in loss.                                                           #
  #############################################################################
  num_train = X.shape[0]
  num_classes = W.shape[1]
  scores = X.dot(W)  # (N, C) score matrix
  correct_class_scores = scores[np.arange(num_train), y]  # (N,) scores of the correct classes
  margins = np.maximum(0, scores - correct_class_scores[:, np.newaxis] + 1.0)  # hinge margins, delta = 1
  margins[np.arange(num_train), y] = 0  # the correct class does not contribute to the loss
  loss = np.sum(margins)
  loss /= num_train
  loss += reg * np.sum(W * W)
  #############################################################################
  #                             END OF YOUR CODE                              #
  #############################################################################


  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the gradient for the structured SVM     #
  # loss, storing the result in dW.                                           #
  #                                                                           #
  # Hint: Instead of computing the gradient from scratch, it may be easier    #
  # to reuse some of the intermediate values that you used to compute the     #
  # loss.                                                                     #
  #############################################################################
  dm_s = np.zeros_like(margins)  # dL/ds, one row per sample
  dm_s[margins > 0] = 1  # each positive margin contributes +1 to its class column
  num_pos = np.sum(margins > 0, axis=1)  # number of positive margins per sample
  dm_s[np.arange(num_train), y] = -num_pos  # the correct class gets -1 per positive margin
  dW = X.T.dot(dm_s)  # chain rule: dL/dW = X^T · dL/ds
  dW /= num_train
  dW += 2 * reg * W  # gradient of the regularization term (note the factor of reg)
  #############################################################################
  #                             END OF YOUR CODE                              #
  #############################################################################

  return loss, dW
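
The vectorized version can then be sanity-checked against the naive one: on the same inputs the two losses should match and the gradient difference should be essentially zero. A quick check, reusing the made-up W, X, y from the gradient-check sketch above:

loss_naive, grad_naive = svm_loss_naive(W, X, y, reg=5e-6)
loss_vec, grad_vec = svm_loss_vectorized(W, X, y, reg=5e-6)

print('loss difference: %e' % abs(loss_naive - loss_vec))
print('gradient difference: %e' % np.linalg.norm(grad_naive - grad_vec, ord='fro'))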

Results

The output below is from the numerical gradient check: each line compares the numerically estimated gradient with the analytic one, and all relative errors are small (at most ~1e-3), indicating the analytic gradient is correct.

numerical: 1.680214 analytic: 1.679971, relative error: 7.230767e-05
numerical: -11.835214 analytic: -11.835290, relative error: 3.186856e-06
numerical: 31.223996 analytic: 31.224021, relative error: 3.971612e-07
numerical: -11.983471 analytic: -11.983169, relative error: 1.261847e-05
numerical: 14.276020 analytic: 14.275969, relative error: 1.817105e-06
numerical: 60.570112 analytic: 60.570076, relative error: 3.005679e-07
numerical: -21.435424 analytic: -21.435447, relative error: 5.177246e-07
numerical: 10.956106 analytic: 10.956302, relative error: 8.935366e-06
numerical: 15.374184 analytic: 15.374405, relative error: 7.184253e-06
numerical: 18.606596 analytic: 18.606262, relative error: 8.968162e-06
numerical: 6.584964 analytic: 6.576627, relative error: 6.334218e-04
numerical: -53.592687 analytic: -53.587162, relative error: 5.154812e-05
numerical: -37.440261 analytic: -37.452605, relative error: 1.648300e-04
numerical: -4.948189 analytic: -4.938414, relative error: 9.887377e-04
numerical: -28.108544 analytic: -28.111811, relative error: 5.811183e-05
numerical: 19.087159 analytic: 19.079373, relative error: 2.040010e-04
numerical: 39.119884 analytic: 39.115284, relative error: 5.880564e-05
numerical: -11.900470 analytic: -11.914449, relative error: 5.870076e-04
numerical: -17.774522 analytic: -17.779592, relative error: 1.426094e-04
numerical: -10.194233 analytic: -10.194915, relative error: 3.343300e-05

Implementing SGD

With the loss and gradient computations in place, implementing SGD is straightforward, so the full code is not reproduced here; a minimal sketch of the training loop is given below.
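
A minimal sketch of the SGD loop, with illustrative hyperparameter values (the function name sgd_train and its defaults are placeholders for this post, not the assignment's LinearClassifier code):

import numpy as np

def sgd_train(W, X, y, learning_rate=1e-7, reg=2.5e4, num_iters=1500, batch_size=200):
  """Plain SGD on the SVM loss; hyperparameter values are illustrative only."""
  num_train = X.shape[0]
  loss_history = []
  for it in range(num_iters):
    # Sample a minibatch (with replacement, which is cheaper and works fine in practice)
    batch_idx = np.random.choice(num_train, batch_size, replace=True)
    X_batch, y_batch = X[batch_idx], y[batch_idx]

    # Evaluate loss and gradient on the minibatch, then take one gradient step
    loss, grad = svm_loss_vectorized(W, X_batch, y_batch, reg)
    loss_history.append(loss)
    W -= learning_rate * grad

    if it % 100 == 0:
      print('iteration %d / %d: loss %f' % (it, num_iters, loss))
  return W, loss_history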