MLPModel

class MLPModel(input_size: int, decoder_length: int, hidden_size: List, encoder_length: int = 0, lr: float = 0.001, loss: Optional[torch.nn.modules.module.Module] = None, train_batch_size: int = 16, test_batch_size: int = 16, optimizer_params: Optional[dict] = None, trainer_params: Optional[dict] = None, train_dataloader_params: Optional[dict] = None, test_dataloader_params: Optional[dict] = None, val_dataloader_params: Optional[dict] = None, split_params: Optional[dict] = None)[source]

Bases: etna.models.base.DeepBaseModel

MLP-based deep model for time series forecasting.

Init MLP model.

Parameters
  • input_size (int) – size of the input feature space: target plus extra features

  • decoder_length (int) – decoder length

  • hidden_size (List) – list of sizes of the hidden layers

  • encoder_length (int) – encoder length

  • lr (float) – learning rate

  • loss (Optional[torch.nn.Module]) – loss function, MSELoss by default

  • train_batch_size (int) – batch size for training

  • test_batch_size (int) – batch size for testing

  • optimizer_params (Optional[dict]) – parameters for the Adam optimizer (API reference torch.optim.Adam)

  • trainer_params (Optional[dict]) – PyTorch Lightning trainer parameters (API reference pytorch_lightning.trainer.trainer.Trainer)

  • train_dataloader_params (Optional[dict]) – parameters for the train dataloader, e.g. a sampler (API reference torch.utils.data.DataLoader)

  • test_dataloader_params (Optional[dict]) – parameters for test dataloader

  • val_dataloader_params (Optional[dict]) – parameters for validation dataloader

  • split_params (Optional[dict]) –

    dictionary with parameters for torch.utils.data.random_split() for train-test splitting
    • train_size: (float) value from 0 to 1 - fraction of samples to use for training

    • generator: (Optional[torch.Generator]) - generator for reproducible train-test splitting

    • torch_dataset_size: (Optional[int]) - number of samples in the dataset, used when the dataset does not implement __len__
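As a rough illustration of the train_size parameter, a fraction-based split can be sketched with stdlib Python (this mimics random_split-style semantics; the library's exact shuffling and rounding may differ):

```python
import random

def sketch_random_split(n_samples, train_size, seed=None):
    """Sketch of a fraction-based train/validation split, assuming
    random_split-style semantics: shuffle indices, then take the first
    floor(train_size * n_samples) for training. The library's exact
    rounding may differ."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    n_train = int(train_size * n_samples)
    return indices[:n_train], indices[n_train:]

# With train_size=0.75 and 100 samples, 75 indices go to training.
train_idx, val_idx = sketch_random_split(100, train_size=0.75, seed=42)
```

Passing a generator in split_params plays the role of the seed here: it makes the split reproducible across runs.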


Methods

fit(ts)

Fit model.

forecast(ts, prediction_size[, ...])

Make predictions.

get_model()

Get model.

load(path)

Load an object.

params_to_tune()

Get default grid for tuning hyperparameters.

predict(ts, prediction_size[, return_components])

Make predictions.

raw_fit(torch_dataset)

Fit model on a torch-like Dataset.

raw_predict(torch_dataset)

Make inference on a torch-like Dataset.

save(path)

Save the object.

set_params(**params)

Return new object instance with modified parameters.

to_dict()

Collect all information about etna object in dict.

Attributes

context_size

Context size of the model.

params_to_tune() Dict[str, etna.distributions.distributions.BaseDistribution][source]

Get default grid for tuning hyperparameters.

This grid tunes the parameters lr and hidden_size.i, where i ranges from 0 to len(hidden_size) - 1. Other parameters are expected to be set by the user.

Returns

Grid to tune.

Return type

Dict[str, etna.distributions.distributions.BaseDistribution]
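To make the key naming concrete, here is a sketch of how the grid keys described above are formed from a hidden_size list (sketch_grid_keys is a hypothetical helper for illustration, not part of the library):

```python
def sketch_grid_keys(hidden_size):
    """Hypothetical helper: build the tuning-grid key names described
    above -- 'lr' plus one 'hidden_size.i' entry per hidden layer."""
    return ["lr"] + [f"hidden_size.{i}" for i in range(len(hidden_size))]

keys = sketch_grid_keys([64, 32])
# keys == ['lr', 'hidden_size.0', 'hidden_size.1']
```

In the real return value, each such key maps to a BaseDistribution describing the search space for that parameter.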