Callbacks

import pyblaze.nn as xnn

The callback module exposes a variety of callbacks that may be used in conjunction with an Engine. The base classes further enable the definition of custom callbacks.

Base Classes

Exception

class pyblaze.nn.CallbackException(message, verbose=False)[source]

Exception raised by callbacks to stop the training procedure.

__init__(message, verbose=False)[source]

Initializes a new callback exception with the given message and verbosity.

print()[source]

Prints the message of this exception if it is verbose. Otherwise it is a no-op.

Training Callback

class pyblaze.nn.TrainingCallback[source]

Abstract class to be subclassed by all training callbacks. These callbacks are passed to an engine which calls the implemented methods at specific points during training.

before_training(model, num_epochs)[source]

Method is called prior to the start of the training. This method must not raise exceptions.

Parameters
  • model (torch.nn.Module) – The model which is trained.

  • num_epochs (int) – The maximum number of epochs performed during training.

before_epoch(current, num_iterations)[source]

Method is called at the beginning of every epoch during training. This method may raise exceptions if training should be stopped. Note, however, that stopping training at this stage is an advanced scenario.

Parameters
  • current (int) – The index of the epoch that is about to start.

  • num_iterations (int) – The expected number of iterations for the upcoming epoch.

after_batch(metrics)[source]

Method is called at the end of a mini-batch. If the data is not partitioned into batches, this method is never called. This method must not raise exceptions.

Parameters

metrics (float or tuple or dict) – The metrics obtained from the last mini-batch. This is the value that is returned from an engine’s train_batch() method.

after_epoch(metrics)[source]

Method is called at the end of every epoch during training. This method may raise exceptions if training should be stopped.

Parameters

metrics (float or tuple or dict) – Metrics obtained after training this epoch.

after_training()[source]

Method is called upon completion of the training procedure. This method must not raise exceptions.
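A custom callback implements a subset of the hooks above. The following self-contained sketch records per-epoch metrics; in real code it would subclass pyblaze.nn.TrainingCallback and be passed to an engine, and the toy driver loop at the bottom is only an illustration of the order in which an engine invokes the hooks:

```python
class EpochHistoryCallback:
    """Illustrative callback that records the metrics of every epoch.

    In practice this would subclass pyblaze.nn.TrainingCallback; the
    hook names and call order match the documentation above.
    """

    def __init__(self):
        self.history = []

    def before_training(self, model, num_epochs):
        self.history.clear()

    def before_epoch(self, current, num_iterations):
        pass

    def after_batch(self, metrics):
        pass

    def after_epoch(self, metrics):
        # `metrics` may be a float, tuple, or dict depending on the engine.
        self.history.append(metrics)

    def after_training(self):
        print(f"finished {len(self.history)} epochs")


# Toy driver mimicking how an engine calls the hooks.
callback = EpochHistoryCallback()
callback.before_training(model=None, num_epochs=2)
for epoch in range(2):
    callback.before_epoch(epoch, num_iterations=10)
    for _ in range(10):
        callback.after_batch({"loss": 0.5})
    callback.after_epoch({"val_loss": 0.4 - 0.1 * epoch})
callback.after_training()
```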

Value Training Callback

class pyblaze.nn.ValueTrainingCallback[source]

A training callback with an additional method that can be used to obtain a dynamically changing value from the callback.

abstract read()[source]

Returns the value that this training callback stores.
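As a sketch, a value callback might expose a temperature that decays after every epoch, so that other components can query it via read() during training. The class and attribute names here are hypothetical; a real implementation would subclass pyblaze.nn.ValueTrainingCallback:

```python
class DecayingTemperature:
    """Hypothetical value callback: read() returns a value that decays
    by a constant factor at the end of every epoch."""

    def __init__(self, initial=1.0, decay=0.5):
        self.value = initial
        self.decay = decay

    def after_epoch(self, metrics):
        self.value *= self.decay

    def read(self):
        return self.value


temperature = DecayingTemperature()
temperature.after_epoch({})
temperature.after_epoch({})
```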

Prediction Callback

class pyblaze.nn.PredictionCallback[source]

Abstract class to be subclassed by all prediction callbacks. These callbacks are passed to an engine which calls the implemented methods at specific points during inference.

before_predictions(model, num_iterations)[source]

Called before prediction making starts.

Parameters
  • model (torch.nn.Module) – The model which is used to make predictions.

  • num_iterations (int) – The number of iterations/batches performed for prediction.

after_batch(*args)[source]

Called after prediction is done for one batch.

Parameters

args (varargs) – Usually empty; accepted so that a class can implement both TrainingCallback and PredictionCallback.

after_predictions()[source]

Called after all predictions have been made.

Logging

Epoch Progress

class pyblaze.nn.EpochProgressLogger(file=None)[source]

Logs the training progress at the epoch level only (to log the progress of each batch within an epoch, use BatchProgressLogger).

__init__(file=None)[source]

Initializes a new progress printer for epochs.

Parameters

file (str, default: None) – If given, the progress is not written to the command line, but to a file instead. Might be especially useful when multiple processes perform training simultaneously.

Batch Progress

class pyblaze.nn.BatchProgressLogger(file=None)[source]

Logs the training progress at the batch level only (to log the progress of the entire training, use EpochProgressLogger).

__init__(file=None)[source]

Initializes a new progress logger for batches.

Parameters

file (str, default: None) – If given, the progress is not written to the command line, but to a file instead. Might be especially useful when multiple processes perform training simultaneously.

Prediction Progress

class pyblaze.nn.PredictionProgressLogger(file=None)[source]

Logs the prediction progress.

__init__(file=None)[source]

Initializes a new progress printer for predictions.

Parameters

file (str, default: None) – If given, the progress is not written to the command line, but to a file instead. Might be especially useful when multiple processes perform training simultaneously.

before_predictions(model, num_iterations)[source]

Called before prediction making starts.

Parameters
  • model (torch.nn.Module) – The model which is used to make predictions.

  • num_iterations (int) – The number of iterations/batches performed for prediction.

Checkpointing

Model Saving

class pyblaze.nn.ModelSaverCallback(directory, file_template='model_epoch-{}')[source]

The callback stores the trained model after every epoch with a unique name per epoch. If the model uses the pyblaze.nn.Configurable mixin, its config and state dict are stored after every epoch, otherwise only its state dict.

__init__(directory, file_template='model_epoch-{}')[source]

Initializes a new ModelSaverCallback.

Parameters
  • directory (str) – The directory where the models should be stored.

  • file_template (str, default: 'model_epoch-{}') – A file template that can be formatted with a single integer, i.e. the epoch index.
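The template is formatted with the epoch index, so a saver following this scheme produces one uniquely named path per epoch. The directory join below is illustrative; the callback's exact naming (e.g. any file extension it appends) may differ:

```python
import os

directory = "checkpoints"
file_template = "model_epoch-{}"

# Paths a saver following this naming scheme would produce for epochs 0..2.
paths = [os.path.join(directory, file_template.format(epoch)) for epoch in range(3)]
print(paths)
```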

Scheduling

Learning Rate

class pyblaze.nn.LearningRateScheduler(scheduler, metric=None, after_batch=False)[source]

This callback wraps a PyTorch learning rate scheduler and steps it automatically at the end of every iteration or epoch.

__init__(scheduler, metric=None, after_batch=False)[source]

Initializes a new learning rate scheduler for the given PyTorch scheduler.

Parameters
  • scheduler (torch.optim.lr_scheduler) – The PyTorch scheduler.

  • metric (str, default: None) – The metric to pass to the scheduler, useful e.g. for reducing the learning rate as the validation loss plateaus. Typically, it should only be used with after_batch set to False.

  • after_batch (bool, default: False) – Whether to call the scheduler after every batch or after every epoch.
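The dispatch logic can be sketched in plain Python as follows. This illustrates the documented behavior and is not pyblaze's actual implementation; the fake scheduler stands in for a torch.optim.lr_scheduler instance (such as ReduceLROnPlateau, whose step() accepts a watched metric):

```python
class FakeScheduler:
    """Stand-in for a torch.optim.lr_scheduler recording its step() calls."""

    def __init__(self):
        self.calls = []

    def step(self, metric=None):
        self.calls.append(metric)


class LearningRateSchedulerSketch:
    """Sketch: step the wrapped scheduler after every batch or every
    epoch, optionally passing a watched metric."""

    def __init__(self, scheduler, metric=None, after_batch=False):
        self.scheduler = scheduler
        self.metric = metric
        self.step_after_batch = after_batch

    def _step(self, metrics):
        if self.metric is not None:
            self.scheduler.step(metrics[self.metric])
        else:
            self.scheduler.step()

    def after_batch(self, metrics):
        if self.step_after_batch:
            self._step(metrics)

    def after_epoch(self, metrics):
        if not self.step_after_batch:
            self._step(metrics)


scheduler = FakeScheduler()
callback = LearningRateSchedulerSketch(scheduler, metric="val_loss")
callback.after_batch({"loss": 0.5})      # no-op: after_batch is False
callback.after_epoch({"val_loss": 0.4})  # steps with the watched metric
```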

Parameter

class pyblaze.nn.ParameterScheduler(initial, schedule, *args, **kwargs)[source]

The parameter scheduler is able to change the value of a variable over the course of the training.

__init__(initial, schedule, *args, **kwargs)[source]

Initializes a new scheduler for the given parameter.

Parameters
  • initial (object) – The initial value of the parameter which should be modified over the course of the training.

  • schedule (func (object, int, int, **kwargs) -> object) – Function returning the new value of the parameter, given its current value, the current epoch, and the iteration within the epoch. The function is called after every iteration (i.e. batch) and is further passed the arguments given to this initializer.

  • args (variadic argument) – Additional arguments passed to the schedule function.

  • kwargs (keyword arguments) – Additional keyword arguments passed to the schedule function.
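A schedule function therefore has the signature (current value, epoch, iteration, **kwargs) -> new value. A minimal example, applied here by a toy loop that mimics the documented after-every-batch contract:

```python
def exponential_decay(value, epoch, iteration, rate=0.9):
    """Example schedule: multiply the parameter by `rate` after every batch."""
    return value * rate


# How a scheduler following the documented contract would apply it:
# 2 epochs of 5 batches each, so the schedule fires 10 times.
value = 1.0
for epoch in range(2):
    for iteration in range(5):
        value = exponential_decay(value, epoch, iteration)
print(value)  # approximately 0.9 ** 10
```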

Early Stopping

class pyblaze.nn.EarlyStopping(metric='val_loss', patience=5, restore_best=False, minimize=True)[source]

The early stopping callback watches a specified metric and interrupts training if the metric does not improve for a specified number of epochs.

__init__(metric='val_loss', patience=5, restore_best=False, minimize=True)[source]

Initializes a new early stopping callback.

Parameters
  • metric (str or list of str, default: 'val_loss') – The metric to watch during training. If a list is given, the sum of the given metrics is considered.

  • patience (int, default: 5) – The number of epochs that training continues although the watched metric has not improved on its best value so far.

  • restore_best (bool, default: False) – Whether the model’s parameters should be reset to the parameters which showed the best performance in terms of the watched metric.

  • minimize (bool, default: True) – Whether to minimize or maximize the given metric.
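The patience rule can be sketched as a pure function. This illustrates the documented behavior and is not pyblaze's implementation:

```python
def should_stop(history, patience=5, minimize=True):
    """Return True once the watched metric has gone `patience` epochs
    without improving on its best value so far."""
    if not history:
        return False
    select = min if minimize else max
    best_index = select(range(len(history)), key=history.__getitem__)
    return len(history) - 1 - best_index >= patience


# val_loss improves until epoch 2, then stagnates for three epochs.
losses = [1.0, 0.8, 0.7, 0.75, 0.72, 0.71]
print(should_stop(losses, patience=3))  # True: best value was at index 2
```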

Tracking

Neptune

class pyblaze.nn.NeptuneTracker(experiment)[source]

The Neptune tracker can be used to track experiments with https://neptune.ai. As soon as metrics are available, they are logged to the experiment that this tracker manages. It requires neptune-client to be installed.

__init__(experiment)[source]

Initializes a new tracker for the given Neptune experiment.

Parameters

experiment (neptune.experiments.Experiment) – The experiment to log to.

log_metric(name, val)[source]

Logs the given value for the specified metric.

Tensorboard

class pyblaze.nn.TensorboardTracker(local_dir)[source]

The Tensorboard tracker can be used to track experiments with TensorBoard. The summary writer is available as the writer property on the tracker.

__init__(local_dir)[source]

Initializes a new Tensorboard tracker logging to the specified directory.

Parameters

local_dir (str) – The directory to log to.

log_metric(name, val, step)[source]

Logs the given value for the specified metric at the given step.