TorchFMTrainer
- class TorchFMTrainer
Bases: TrainerBase
A trainer class for a Factorization Machine model implemented by PyTorch.
Methods

__init__ – Initialize the model trainer.
init_seed – Initialize random with a seed.
set_model_class – Set a model class.
set_model_params – Set model parameters to be used to initialize the model class with (TorchFM by default).
set_train_params – Set machine learning parameters.
train – Train an FM model (TorchFM or an equivalent model class instance inheriting TorchFM).

Attributes

batch_size – Batch size considered for training.
epochs – The number of epochs for training.
loss_class – A loss function class for model training.
lr_sche_class – A learning rate scheduler class.
lr_sche_params – Learning rate scheduler parameters.
model_class – The model class considered in the trainer.
model_params – Model parameters the model class is initialized with.
optimizer_class – An optimizer class for model training.
optimizer_params – Optimizer parameters for model training.
- __init__(model_class: type[TorchFM] = TorchFM) → None
Initialize the model trainer.
By default, TorchFM is used, and the default training parameters are set by calling TorchFMTrainer.set_train_params.
- Parameters:
model_class (Type[TorchFM]) – A surrogate model class. When the constructor is called, default training parameters are set with TorchFMTrainer.set_train_params. Defaults to TorchFM.
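A minimal construction sketch; the import path is an assumption (TorchFM and TorchFMTrainer are assumed importable from the top-level amplify_bbopt package):

```python
# Minimal sketch; the import path below is an assumption.
from amplify_bbopt import TorchFM, TorchFMTrainer

trainer = TorchFMTrainer()  # TorchFM and default training parameters are set
trainer = TorchFMTrainer(model_class=TorchFM)  # equivalent, passing the class explicitly
```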
- __str__() → str
Return human-readable training information.
- Returns:
Training information.
- Return type:
str
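Continuing the sketch above, printing the trainer emits this string:

```python
print(trainer)  # human-readable training information via __str__
```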
- set_model_class(model_class: type[TorchFM]) → None
Set a model class.
- Parameters:
model_class (Type[TorchFM]) – An FM model class.
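A sketch of setting a custom model class; MyFM is hypothetical and only illustrates that any class inheriting TorchFM is accepted:

```python
# Hypothetical subclass of the documented TorchFM base class.
class MyFM(TorchFM):
    pass

trainer.set_model_class(MyFM)
```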
- set_model_params(**model_params: dict) → None
Set model parameters to be used to initialize the model class with (TorchFM by default).
The following parameters can be set for the default model class (this overwrites the model parameters the optimizer sets based on observations of the objective function):
d (int): The size of an FM input (= the number of the Amplify SDK variables fed to the model).
k (int): The FM hyperparameter. Defaults to 10 in FMQAOptimizer.
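A sketch of overriding the default model parameters with the two documented keywords; the values are illustrative:

```python
# d must match the number of Amplify SDK variables fed to the model.
trainer.set_model_params(d=32, k=10)
```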
- set_train_params(
- batch_size: int = 8,
- epochs: int = 2000,
- loss_class: type[torch.nn.modules.loss._Loss] = MSELoss,
- optimizer_class: type[Optimizer] = AdamW,
- optimizer_params: dict[str, Any] | None = None,
- lr_sche_class: type[lr_scheduler._LRScheduler] | None = StepLR,
- lr_sche_params: dict[str, Any] | None = None,
- data_split_ratio_train: float = 0.8,
- num_threads: int | None = None,
- ) → None
Set machine learning parameters.
- Parameters:
batch_size (int, optional) – A batch size. Defaults to 8.
epochs (int, optional) – A number of epochs. Defaults to 2000.
loss_class (Type[torch.nn.modules.loss._Loss], optional) – A loss function class. Defaults to nn.MSELoss.
optimizer_class (Type[torch.optim.Optimizer], optional) – An optimizer class. Defaults to torch.optim.AdamW.
optimizer_params (Dict, optional) – Optimizer parameters. Defaults to {"lr": 0.5}.
lr_sche_class (Type[lr_scheduler._LRScheduler] | None, optional) – A learning rate scheduler class. Defaults to lr_scheduler.StepLR.
lr_sche_params (Dict, optional) – Learning rate scheduler parameters. Defaults to {"step_size": 100, "gamma": 0.8}.
data_split_ratio_train (float, optional) – The training dataset is split into training and validation subsets; data_split_ratio_train defines the ratio of data used for training to the entire dataset. Note that setting this to either 0 or 1 makes training and validation use the same dataset (no split). Defaults to 0.8.
num_threads (int | None, optional) – The number of threads used for intra-op parallel processing in PyTorch. If set to None, all available threads are used. Defaults to None.
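A sketch of customizing the training setup using only the documented parameters; the values are illustrative:

```python
import torch

trainer.set_train_params(
    batch_size=16,
    epochs=1000,
    loss_class=torch.nn.MSELoss,
    optimizer_class=torch.optim.AdamW,
    optimizer_params={"lr": 0.1},
    lr_sche_class=torch.optim.lr_scheduler.StepLR,
    lr_sche_params={"step_size": 100, "gamma": 0.8},
    data_split_ratio_train=0.8,
    num_threads=4,
)
```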
- train(x_values: list[list[bool | int | float]], y_values: list[int | float], logger: Logger | None = None) → TorchFM
Train an FM model (TorchFM or an equivalent model class instance inheriting TorchFM). For adjustable training parameters, see TorchFMTrainer.set_train_params.
- Parameters:
x_values (list[list[bool | int | float]]) – Input values of the training dataset.
y_values (list[int | float]) – Objective function values corresponding to x_values.
logger (Logger | None, optional) – A logger used during training. Defaults to None.
- Returns:
A trained FM model.
- Return type:
TorchFM
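A training sketch with a toy dataset; the inner list width is assumed to match the model parameter d:

```python
x_values = [
    [True, False, True],
    [False, True, True],
    [True, True, False],
]
y_values = [1.2, 0.7, 2.3]

model = trainer.train(x_values, y_values)  # returns a trained TorchFM instance
```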
- property lr_sche_class: type[_LRScheduler] | None
A learning rate scheduler class. Returns None if no scheduler is used.