cntk.learners package

A learner tunes a set of parameters during the training process. One can use different learners for different sets of parameters. Currently, CNTK supports the following learning algorithms:
  • AdaDelta
  • AdaGrad
  • FSAdaGrad
  • Adam
  • MomentumSGD
  • Nesterov
  • RMSProp
  • SGD

class Learner(parameters, learningRateSchedule)[source]

Bases: cntk.cntk_py.Learner

Abstraction for learning a subset of parameters of a learnable function using first order gradient values. For example momentum, AdaGrad, RMSProp, etc. are different types of learners with their own algorithms for learning parameter values using first order gradients. To instantiate a concrete learner, use the factory methods in this module.

learning_rate()[source]

Current learning rate schedule.

parameters

The set of parameters associated with this learner.

reset_learning_rate(learning_rate)[source]

Resets the learning rate. The new schedule is adjusted to be relative to the current number of elapsed samples/sweeps: the 0 offset in the new schedule corresponds to the current value of elapsed samples/sweeps, and it takes effect from the current position in the training process onwards.

Parameters:
  • learning_rate (output of learning_rate_schedule()) – learning rate to reset to
update(gradient_values, training_sample_count)[source]

Update the parameters associated with this learner.

Parameters:
  • gradient_values (dict) – maps Parameter to a NumPy array containing the first order gradient values for the Parameter w.r.t. the training objective.
  • training_sample_count (int) – number of samples in the minibatch
Returns:

False to indicate that learning has stopped for all of the parameters associated with this learner

Return type:

bool
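
Example

A minimal sketch of creating a concrete learner via a factory method and querying/resetting its learning rate; it assumes import cntk as C and a model z as in the universal() example at the end of this page:

>>> lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
>>> learner = C.sgd(z.parameters, lr)
>>> current = learner.learning_rate()
>>> learner.reset_learning_rate(C.learning_rate_schedule(0.01, C.UnitType.minibatch))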

class UnitType[source]

Bases: enum.Enum

Indicates whether the values in the schedule are specified on a per-sample or per-minibatch basis.

minibatch = 'minibatch'

Schedule contains per-minibatch values (which need to be re-scaled by the learner using the actual minibatch size in samples).

sample = 'sample'

Schedule contains per-sample values.
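
Example

A brief sketch of passing the two units to training_parameter_schedule() (documented below); per-minibatch values are re-scaled by the learner, per-sample values are not. The values are illustrative:

>>> s = training_parameter_schedule(0.001, UnitType.sample)     # per-sample values
>>> m = training_parameter_schedule(0.1, UnitType.minibatch)    # per-minibatch values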

class UserLearner(parameters, lr_schedule, as_numpy=True)[source]

Bases: cntk.cntk_py.Learner

Base class of all user-defined learners. To implement your own learning algorithm, derive from this class and override the update() method.

Certain optimizers (such as AdaGrad) require additional storage. This can be allocated and initialized during construction.

update(gradient_values, training_sample_count, sweep_end)[source]

Update the parameters associated with this learner.

Parameters:
  • gradient_values (dict) – maps Parameter to a NumPy array containing the first order gradient values for the Parameter w.r.t. the training objective.
  • training_sample_count (int) – number of samples in the minibatch
  • sweep_end (bool) – if the data is fed by a conforming reader, this indicates whether a full pass over the dataset has just occurred.
Returns:

False to indicate that learning has stopped for all of the parameters associated with this learner

Return type:

bool
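
Example

A minimal sketch of plain SGD written as a UserLearner, assuming import cntk as C, a model z as in the universal() example below, and the default as_numpy=True so that gradient_values maps each Parameter to a NumPy array; the learning rate handling is an illustrative choice:

>>> class MySgd(C.UserLearner):
...     def __init__(self, parameters, lr_schedule):
...         super(MySgd, self).__init__(parameters, lr_schedule)
...     def update(self, gradient_values, training_sample_count, sweep_end):
...         eta = self.learning_rate()
...         for p, g in gradient_values.items():
...             # plain SGD step applied to the NumPy view of the parameter
...             p.value = p.value - eta * g
...         return True
...
>>> my_learner = MySgd(z.parameters, C.learning_rate_schedule(0.1, C.UnitType.sample))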

adadelta(parameters, lr, rho, epsilon, l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates an AdaDelta learner instance to learn the parameters. See [1] for more information.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • rho (float) – exponential smooth factor for each minibatch.
  • epsilon (float) – epsilon for sqrt.
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

See also

[1] Matthew D. Zeiler, ADADELTA: An Adaptive Learning Rate Method.
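
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below; the rho and epsilon values are illustrative:

>>> lr = C.learning_rate_schedule(1.0, C.UnitType.sample)
>>> learner = C.adadelta(z.parameters, lr, rho=0.95, epsilon=1e-8)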

adagrad(parameters, lr, need_ave_multiplier=True, l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates an AdaGrad learner instance to learn the parameters. See [1] for more information.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • need_ave_multiplier (bool, default True) –
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

See also

[1] J. Duchi, E. Hazan, and Y. Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. The Journal of Machine Learning Research, 2011.
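
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below:

>>> lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
>>> learner = C.adagrad(z.parameters, lr)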

adam(parameters, lr, momentum, unit_gain=default_unit_gain_value(), variance_momentum=momentum_as_time_constant_schedule(720000), l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True, epsilon=1e-8, adamax=False)[source]

Creates an Adam learner instance to learn the parameters. See [1] for more information.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – momentum schedule. For additional information, please refer to this CNTK Wiki article.
  • unit_gain – when True, momentum is interpreted as a unit-gain filter. Defaults to the value returned by default_unit_gain_value().
  • variance_momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – variance momentum schedule. Defaults to momentum_as_time_constant_schedule(720000).
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
  • epsilon (float, optional) – numerical stability constant, defaults to 1e-8
  • adamax – when True, use infinity-norm variance momentum update instead of L2. Defaults to False
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

See also

[1] D. Kingma, J. Ba. Adam: A Method for Stochastic Optimization. International Conference for Learning Representations, 2015.
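
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below; 0.9 and 0.999 are commonly used momentum values, chosen here for illustration:

>>> lr = C.learning_rate_schedule(0.001, C.UnitType.minibatch)
>>> mm = C.momentum_schedule(0.9)
>>> vm = C.momentum_schedule(0.999)
>>> learner = C.adam(z.parameters, lr, momentum=mm, variance_momentum=vm)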

default_unit_gain_value()[source]

Returns True if, by default, momentum is applied in the unit-gain fashion.

default_use_mean_gradient_value()[source]

Returns True if, by default, the input gradient to the learner is averaged.

fsadagrad(parameters, lr, momentum, unit_gain=default_unit_gain_value(), variance_momentum=momentum_as_time_constant_schedule(720000), l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates an FSAdaGrad learner instance to learn the parameters.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – momentum schedule. For additional information, please refer to this CNTK Wiki article.
  • unit_gain – when True, momentum is interpreted as a unit-gain filter. Defaults to the value returned by default_unit_gain_value().
  • variance_momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – variance momentum schedule. Defaults to momentum_as_time_constant_schedule(720000).
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner
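
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below; the schedule values are illustrative:

>>> lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
>>> mm = C.momentum_as_time_constant_schedule(700)
>>> learner = C.fsadagrad(z.parameters, lr, mm)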

learning_rate_schedule(lr, unit, epoch_size=None)[source]

Create a learning rate schedule (using the same semantics as training_parameter_schedule()).

Parameters:
  • lr (float or list) – see the schedule parameter of training_parameter_schedule().
  • unit (UnitType) – see the unit parameter of training_parameter_schedule().
  • epoch_size (optional, int) – see the epoch_size parameter of training_parameter_schedule().
Returns:

learning rate schedule
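
Example

A brief sketch of creating learning rate schedules; the values are illustrative:

>>> # 0.01 per minibatch for the first 1000 samples, then 0.001 afterwards
>>> lr = learning_rate_schedule([0.01, 0.001], UnitType.minibatch, 1000)
>>> # a constant per-sample learning rate
>>> lr = learning_rate_schedule(0.005, UnitType.sample)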

momentum_as_time_constant_schedule(momentum, epoch_size=None)[source]

Create a momentum schedule in a minibatch-size agnostic way (using the same semantics as training_parameter_schedule() with unit=UnitType.sample).

Parameters:
  • momentum (float or list) – time constant value(s) in samples, interpreted with the same semantics as the schedule parameter of training_parameter_schedule() with unit=UnitType.sample.
  • epoch_size (optional, int) – see the epoch_size parameter of training_parameter_schedule().

CNTK specifies momentum in a minibatch-size agnostic way as the time constant (in samples) of a unit-gain 1st-order IIR filter. The value specifies the number of samples after which a gradient has an effect of 1/e=37%.

If you want to specify the momentum per sample (or per minibatch), use momentum_schedule().

Examples

>>> # Use a fixed time constant of 1100 for all samples
>>> m = momentum_as_time_constant_schedule(1100)
>>> # Use the time constant 1100 for the first 1000 samples,
>>> # then 1500 for the remaining ones
>>> m = momentum_as_time_constant_schedule([1100, 1500], 1000)
Returns: momentum as time constant schedule
momentum_schedule(momentum, epoch_size=None)[source]

Create a per-minibatch momentum schedule (using the same semantics as training_parameter_schedule() with the unit=UnitType.minibatch).

Parameters:
  • momentum (float or list) – momentum value(s), interpreted with the same semantics as the schedule parameter of training_parameter_schedule() with unit=UnitType.minibatch.
  • epoch_size (optional, int) – see the epoch_size parameter of training_parameter_schedule().

If you want to provide momentum values in a minibatch-size agnostic way, use momentum_as_time_constant_schedule().

Examples

>>> # Use a fixed momentum of 0.99 for all samples
>>> m = momentum_schedule(0.99)
>>> # Use the momentum value 0.99 for the first 1000 samples,
>>> # then 0.9 for the remaining ones
>>> m = momentum_schedule([0.99,0.9], 1000)
>>> m[0], m[999], m[1000], m[1001]
(0.99, 0.99, 0.9, 0.9)
>>> # Use the momentum value 0.99 for the first 999 samples,
>>> # then 0.88 for the next 888 samples, and 0.77 for the
>>> # the remaining ones
>>> m = momentum_schedule([(999,0.99),(888,0.88),(0, 0.77)])
>>> m[0], m[998], m[999], m[999+888-1], m[999+888]
(0.99, 0.99, 0.88, 0.88, 0.77)
Returns: momentum schedule
momentum_sgd(parameters, lr, momentum, unit_gain=default_unit_gain_value(), l1_regularization_weight=0.0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates a Momentum SGD learner instance to learn the parameters.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – momentum schedule. For additional information, please refer to this CNTK Wiki article.
  • unit_gain – when True, momentum is interpreted as a unit-gain filter. Defaults to the value returned by default_unit_gain_value().
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner
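
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below:

>>> lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
>>> mm = C.momentum_schedule(0.9)
>>> learner = C.momentum_sgd(z.parameters, lr, mm)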

nesterov(parameters, lr, momentum, unit_gain=default_unit_gain_value(), l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates a Nesterov SGD learner instance to learn the parameters. This was originally proposed by Nesterov [1] in 1983 and then shown to work well in a deep learning context by Sutskever, et al. [2].

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • momentum (output of momentum_schedule() or momentum_as_time_constant_schedule()) – momentum schedule. For additional information, please refer to this CNTK Wiki article.
  • unit_gain – when True, momentum is interpreted as a unit-gain filter. Defaults to the value returned by default_unit_gain_value().
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

See also

[1] Y. Nesterov. A Method of Solving a Convex Programming Problem with Convergence Rate O(1/ sqrt(k)). Soviet Mathematics Doklady, 1983.

[2] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the Importance of Initialization and Momentum in Deep Learning. Proceedings of the 30th International Conference on Machine Learning, 2013.
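
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below:

>>> lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
>>> mm = C.momentum_schedule(0.9)
>>> learner = C.nesterov(z.parameters, lr, mm)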

rmsprop(parameters, lr, gamma, inc, dec, max, min, need_ave_multiplier=True, l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates an RMSProp learner instance to learn the parameters.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • gamma (float) – Trade-off factor for current and previous gradients. Common value is 0.95. Should be in range (0.0, 1.0)
  • inc (float) – Increasing factor when trying to adjust current learning_rate. Should be greater than 1
  • dec (float) – Decreasing factor when trying to adjust current learning_rate. Should be in range (0.0, 1.0)
  • max (float) – Maximum scale allowed for the initial learning_rate. Should be greater than zero and greater than min
  • min (float) – Minimum scale allowed for the initial learning_rate. Should be greater than zero
  • need_ave_multiplier (bool, default True) –
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner
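
Example

A minimal sketch, assuming import cntk as C and a model z as in the universal() example below; the gamma, inc, dec, max, and min values are illustrative:

>>> lr = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
>>> learner = C.rmsprop(z.parameters, lr, gamma=0.95, inc=1.1, dec=0.9, max=10.0, min=1e-8)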

set_default_unit_gain_value(value)[source]

Sets the global default value of the unit-gain flag.

set_default_use_mean_gradient_value(value)[source]

Sets the global default value of the use_mean_gradient flag.

sgd(parameters, lr, l1_regularization_weight=0, l2_regularization_weight=0, gaussian_noise_injection_std_dev=0, gradient_clipping_threshold_per_sample=np.inf, gradient_clipping_with_truncation=True)[source]

Creates an SGD learner instance to learn the parameters. See [1] for more information on how to set the parameters.

Parameters:
  • parameters (list of parameters) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
  • lr (output of learning_rate_schedule()) – learning rate schedule.
  • l1_regularization_weight (float, optional) – the L1 regularization weight per sample, defaults to 0.0
  • l2_regularization_weight (float, optional) – the L2 regularization weight per sample, defaults to 0.0
  • gaussian_noise_injection_std_dev (float, optional) – the standard deviation of the Gaussian noise added to parameters post update, defaults to 0.0
  • gradient_clipping_threshold_per_sample (float, optional) – clipping threshold per sample, defaults to infinity
  • gradient_clipping_with_truncation (bool, default True) – use gradient clipping with truncation
  • use_mean_gradient (bool, default False) – use averaged gradient as input to learner. Defaults to the value returned by default_use_mean_gradient_value().
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

See also

[1] L. Bottou. Stochastic Gradient Descent Tricks. Neural Networks: Tricks of the Trade: Springer, 2012.
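
Example

A minimal sketch, assuming import cntk as C and a model z with criterion loss as in the universal() example below:

>>> lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
>>> learner = C.sgd(z.parameters, lr)
>>> trainer = C.Trainer(z, loss, learner)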

training_parameter_schedule(schedule, unit, epoch_size=None)[source]

Create a training parameter schedule containing either per-sample (default) or per-minibatch values.

Examples

>>> # Use a fixed value 0.01 for all samples
>>> s = training_parameter_schedule(0.01, UnitType.sample)
>>> s[0], s[1]
(0.01, 0.01)
>>> # Use 0.01 for the first 1000 samples, then 0.001 for the remaining ones
>>> s = training_parameter_schedule([0.01, 0.001], UnitType.sample, 1000)
>>> s[0], s[1], s[1000], s[1001]
(0.01, 0.01, 0.001, 0.001)
>>> # Use 0.1 for the first 12 epochs, then 0.01 for the next 15,
>>> # followed by 0.001 for the remaining ones, with 100 samples per epoch
>>> s = training_parameter_schedule([(12, 0.1), (15, 0.01), (1, 0.001)], UnitType.sample, 100)
>>> s[0], s[1199], s[1200], s[2699], s[2700], s[5000]
(0.1, 0.1, 0.01, 0.01, 0.001, 0.001)
Parameters:
  • schedule (float or list) – if a float, it is the parameter value to be used for all samples. If a list, its elements are used as the values for consecutive spans of epoch_size samples. If the list contains pairs, the second element of each pair is used as the value for (epoch_size x first element) samples
  • unit (UnitType) – one of UnitType.sample (the returned schedule contains per-sample values) or UnitType.minibatch (the returned schedule contains per-minibatch values).
  • epoch_size (optional, int) – number of samples as a scheduling unit. Parameters in the schedule change their values every epoch_size samples. If no epoch_size is provided, this parameter is substituted by the size of the full data sweep, in which case the scheduling unit is the entire data sweep (as indicated by the MinibatchSource) and parameters change their values on the sweep-by-sweep basis specified by the schedule.
Returns:

training parameter schedule

universal(update_func, parameters)[source]

Creates a learner which uses a CNTK function to update the parameters.

Parameters:
  • update_func – function that takes parameters and gradients as arguments and returns a Function that performs the desired updates. The returned Function must contain assign() operations to update the parameters; if it does not, the parameters will not be updated.
  • parameters (list) – list of network parameters to tune. These can be obtained by the root operator’s parameters.
Returns:

learner instance that can be passed to the Trainer

Return type:

Learner

Examples

>>> def my_adagrad(parameters, gradients):
...     accumulators = [C.constant(0, shape=p.shape, dtype=p.dtype, name='accum') for p in parameters]
...     update_funcs = []
...     for p, g, a in zip(parameters, gradients, accumulators):
...         # accumulate the squared gradient so the effective step size shrinks over time
...         accum_new = C.assign(a, a + g * g)
...         update_funcs.append(C.assign(p, p - 0.01 * g / C.sqrt(accum_new + 1e-6)))
...     return C.combine(update_funcs)
...
>>> x = C.input_variable((10,))
>>> y = C.input_variable((2,))
>>> z = C.layers.Sequential([C.layers.Dense(100, activation=C.relu), C.layers.Dense(2)])(x)
>>> loss = C.cross_entropy_with_softmax(z, y)
>>> learner = C.universal(my_adagrad, z.parameters)
>>> trainer = C.Trainer(z, loss, learner)
>>> # now trainer can be used as any other Trainer