catenets.models.jax.tnet module

Implements a T-Net: a T-learner for CATE estimation based on dense neural networks

class TNet(binary_y: bool = False, n_layers_out: int = 2, n_units_out: int = 100, n_layers_r: int = 3, n_units_r: int = 200, penalty_l2: float = 0.0001, step_size: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, early_stopping: bool = True, patience: int = 10, n_iter_min: int = 200, n_iter_print: int = 50, seed: int = 42, train_separate: bool = True, penalty_diff: float = 0.0001, nonlin: str = 'elu')

Bases: catenets.models.jax.base.BaseCATENet

TNet class – learns a separate function for each of the two potential outcomes (see the usage sketch after the parameter list)

Parameters
  • binary_y (bool, default False) – Whether the outcome is binary

  • n_layers_out (int) – Number of hypothesis layers (n_layers_out dense layers of n_units_out units each, plus one final dense output layer)

  • n_units_out (int) – Number of hidden units in each hypothesis layer

  • n_layers_r (int) – Number of representation layers before hypothesis layers (distinction between hypothesis layers and representation layers is made to match TARNet & SNets)

  • n_units_r (int) – Number of hidden units in each representation layer

  • penalty_l2 (float) – l2 (ridge) penalty

  • step_size (float) – learning rate for optimizer

  • n_iter (int) – Maximum number of iterations

  • batch_size (int) – Batch size

  • val_split_prop (float) – Proportion of samples used for validation split (can be 0)

  • early_stopping (bool, default True) – Whether to use early stopping

  • patience (int) – Number of iterations without improvement in validation loss to wait before stopping early

  • n_iter_min (int) – Minimum number of iterations to go through before starting early stopping

  • n_iter_print (int) – Number of iterations after which to print updates

  • seed (int) – Random seed used

  • train_separate (bool, default True) – Whether to train the two output heads completely separately (True) or to regularize the difference between them (False; see the objective sketch under _train_tnet_jointly below)

  • penalty_diff (float) – l2-penalty for regularizing the difference between the output heads. Used only if train_separate=False

  • nonlin (str, default 'elu') – Nonlinearity to use in the NN

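For orientation, a minimal usage sketch, assuming the scikit-learn-style fit/predict interface that catenets models inherit from BaseCATENet; the synthetic data and hyperparameter choices below are illustrative only::

    # Minimal usage sketch; data and hyperparameters are illustrative.
    import numpy as np

    from catenets.models.jax import TNet

    rng = np.random.default_rng(42)
    n, d = 500, 10
    X = rng.normal(size=(n, d))                # covariates
    w = rng.binomial(1, 0.5, size=n)           # binary treatment indicator
    y = X[:, 0] + w * X[:, 1] + 0.1 * rng.normal(size=n)  # observed outcomes

    model = TNet(n_iter=1000, batch_size=100)
    model.fit(X, y, w)

    cate_hat = model.predict(X)                # one CATE estimate per row of X
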
_get_predict_function() → Callable
_get_train_function() → Callable
_train_tnet_jointly(X: jax._src.basearray.Array, y: jax._src.basearray.Array, w: jax._src.basearray.Array, binary_y: bool = False, n_layers_out: int = 2, n_units_out: int = 100, n_layers_r: int = 3, n_units_r: int = 200, penalty_l2: float = 0.0001, step_size: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, early_stopping: bool = True, patience: int = 10, n_iter_min: int = 200, n_iter_print: int = 50, seed: int = 42, return_val_loss: bool = False, same_init: bool = True, penalty_diff: float = 0.0001, nonlin: str = 'elu', avg_objective: bool = True) → jax._src.basearray.Array
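
When train_separate=False, the two output heads are trained jointly and the difference between their weights is penalized. As a hedged sketch of the objective (for continuous outcomes with avg_objective=True; binary_y would swap the squared error for a cross-entropy term, and exactly which parameters each penalty covers is an implementation detail):

    \mathcal{L}(\theta_0, \theta_1)
        = \frac{1}{n} \sum_{i=1}^{n} \bigl( y_i - \mu_{w_i}(x_i; \theta_{w_i}) \bigr)^2
        + \lambda_2 \bigl( \lVert \theta_0 \rVert_2^2 + \lVert \theta_1 \rVert_2^2 \bigr)
        + \lambda_{\mathrm{diff}} \, \lVert \theta_0 - \theta_1 \rVert_2^2

where \mu_w is the head for treatment arm w, \lambda_2 corresponds to penalty_l2 and \lambda_{\mathrm{diff}} to penalty_diff. same_init=True plausibly starts both heads from identical initial weights, so the difference penalty is zero at initialization.
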
predict_t_net(X: jax._src.basearray.Array, trained_params: dict, predict_funs: list, return_po: bool = False, return_prop: bool = False) → jax._src.basearray.Array
train_tnet(X: jax._src.basearray.Array, y: jax._src.basearray.Array, w: jax._src.basearray.Array, binary_y: bool = False, n_layers_out: int = 2, n_units_out: int = 100, n_layers_r: int = 3, n_units_r: int = 200, penalty_l2: float = 0.0001, step_size: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, early_stopping: bool = True, patience: int = 10, n_iter_min: int = 200, n_iter_print: int = 50, seed: int = 42, return_val_loss: bool = False, train_separate: bool = True, penalty_diff: float = 0.0001, nonlin: str = 'elu', avg_objective: bool = True) → Any
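
A hedged sketch of the functional API, reusing X, y, w from the usage sketch above and assuming train_tnet returns the (trained_params, predict_funs) pair that predict_t_net consumes; the exact return structure, and the ordering of outputs under return_po=True, are assumptions here::

    from catenets.models.jax.tnet import predict_t_net, train_tnet

    # Assumed return structure: trained parameters plus prediction functions.
    trained_params, predict_funs = train_tnet(X, y, w, n_iter=1000)

    # CATE estimates, one per row of X.
    cate_hat = predict_t_net(X, trained_params, predict_funs)

    # Assumed output ordering with return_po=True: effect, mu_0, mu_1.
    cate_hat, mu0_hat, mu1_hat = predict_t_net(
        X, trained_params, predict_funs, return_po=True
    )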