Sha256: e11c1a58d17da3a5bce818df352565638bd0c8aa734c220a4e36fd02cdee37d9

Contents?: true

Size: 1.56 KB

Versions: 4

Stored size: 1.56 KB

Contents

# frozen_string_literal: true

require 'rumale/validation'
require 'rumale/base/base_estimator'

module Rumale
  module Optimizer
    # AdaGrad is a class that implements the AdaGrad optimizer.
    #
    # @deprecated AdaGrad will be deleted in version 0.20.0.
    #
    # *Reference*
    # - Duchi, J., Hazan, E., and Singer, Y., "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," J. Machine Learning Research, vol. 12, pp. 2121--2159, 2011.
    class AdaGrad
      include Base::BaseEstimator
      include Validation

      # Create a new optimizer with AdaGrad.
      #
      # @param learning_rate [Float] The initial value of learning rate.
      def initialize(learning_rate: 0.01)
        warn 'warning: AdaGrad is deprecated. This class will be deleted in version 0.20.0.'
        check_params_numeric(learning_rate: learning_rate)
        check_params_positive(learning_rate: learning_rate)
        @params = {}
        @params[:learning_rate] = learning_rate
        @moment = nil
      end

      # Calculate the updated weight with AdaGrad adaptive learning rate.
      #
      # @param weight [Numo::DFloat] (shape: [n_features]) The weight to be updated.
      # @param gradient [Numo::DFloat] (shape: [n_features]) The gradient for updating the weight.
      # @return [Numo::DFloat] (shape: [n_features]) The updated weight.
      def call(weight, gradient)
        @moment ||= Numo::DFloat.zeros(weight.shape[0])
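        # Accumulate the element-wise squared gradients into the running moment.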
        @moment += gradient**2
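        # Adapt the learning rate per feature by the square root of the accumulated
        # squared gradients; the 1.0e-8 term guards against division by zero.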
        weight - (@params[:learning_rate] / (@moment**0.5 + 1.0e-8)) * gradient
      end
    end
  end
end
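The call method is intended to be invoked once per gradient-descent step, with the optimizer object carrying the accumulated squared gradients between calls. Below is a minimal usage sketch, not part of the gem itself, assuming the rumale (0.19.x) and numo-narray gems are installed and that require 'rumale' loads this optimizer; the toy objective, variable names, and hyperparameters are illustrative only.

require 'numo/narray'
require 'rumale'

# Constructing the optimizer prints the deprecation warning shown above.
optimizer = Rumale::Optimizer::AdaGrad.new(learning_rate: 0.5)

# Toy objective: minimize ||weight - target||^2 by following its gradient.
target = Numo::DFloat[1.0, -2.0, 0.5]
weight = Numo::DFloat.zeros(3)

200.times do
  gradient = 2.0 * (weight - target)
  weight = optimizer.call(weight, gradient)
end
# weight moves toward target; each feature's effective learning rate shrinks
# in proportion to the square root of its accumulated squared gradients.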

Version data entries

4 entries across 4 versions & 1 rubygem

Version Path
rumale-0.19.3 lib/rumale/optimizer/ada_grad.rb
rumale-0.19.2 lib/rumale/optimizer/ada_grad.rb
rumale-0.19.1 lib/rumale/optimizer/ada_grad.rb
rumale-0.19.0 lib/rumale/optimizer/ada_grad.rb