catenets.models.torch package

PyTorch-based implementations for the CATE estimators.

class DRLearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, te_estimator: Optional[Any] = None, n_folds: int = 2, n_layers_out: int = 2, n_layers_out_t: int = 2, n_units_out: int = 100, n_units_out_t: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, weight_decay: float = 0.0001, weight_decay_t: float = 0.0001, lr: float = 0.0001, lr_t: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = 'prop', patience: int = 10, n_iter_min: int = 200, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

DR-learner for CATE estimation, based on the doubly robust AIPW pseudo-outcome.
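The AIPW pseudo-outcome referred to above is, in the usual notation (a sketch; \hat{\pi} denotes the first-stage propensity estimate and \hat{\mu}_0, \hat{\mu}_1 the first-stage potential-outcome estimates):

\[
\tilde{Y}_{DR} = \hat{\mu}_1(x) - \hat{\mu}_0(x)
  + \frac{w\,\bigl(y - \hat{\mu}_1(x)\bigr)}{\hat{\pi}(x)}
  - \frac{(1 - w)\,\bigl(y - \hat{\mu}_0(x)\bigr)}{1 - \hat{\pi}(x)}
\]

The second stage then regresses \tilde{Y}_{DR} on X with the te_estimator to obtain the CATE estimate.

A minimal usage sketch on synthetic data. Assumptions (not from the docs above): the classes listed on this page are importable directly from catenets.models.torch, and n_iter is lowered only so the sketch runs quickly.

```python
import torch

from catenets.models.torch import DRLearner

n, d = 500, 10
X = torch.randn(n, d)
w = torch.bernoulli(0.5 * torch.ones(n))           # binary treatment indicator
y = X[:, 0] + w * X[:, 1] + 0.1 * torch.randn(n)   # outcome with a heterogeneous effect

model = DRLearner(n_unit_in=d, binary_y=False, n_iter=500)
model.fit(X, y, w)

cate_hat = model.predict(X)  # estimated conditional average treatment effects
```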

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
training: bool
class DragonNet(n_unit_in: int, binary_y: bool = False, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, nonlin: str = 'elu', n_units_r: int = 200, batch_norm: bool = True, dropout: bool = False, dropout_prob: float = 0.2, **kwargs: Any)

Bases: catenets.models.torch.representation_nets.BasicDragonNet

Class implements a variant of the DragonNet architecture of Shi et al. (2019).

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
_step(X: torch.Tensor, w: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
training: bool
class FlexTENet(n_unit_in: int, binary_y: bool, n_layers_out: int = 2, n_units_s_out: int = 50, n_units_p_out: int = 50, n_layers_r: int = 3, n_units_s_r: int = 100, n_units_p_r: int = 100, private_out: bool = False, weight_decay: float = 0.0001, penalty_orthogonal: float = 0.01, lr: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, early_stopping: bool = True, patience: int = 10, n_iter_min: int = 200, n_iter_print: int = 50, seed: int = 42, shared_repr: bool = False, normalize_ortho: bool = False, mode: int = 1, clipping_value: int = 1, dropout: bool = False, dropout_prob: float = 0.5)

Bases: catenets.models.torch.base.BaseCATEEstimator

Class implements FlexTENet, an architecture for treatment effect estimation that allows for both shared and private information in each layer of the network.

Parameters
  • n_unit_in (int) – Number of features

  • binary_y (bool, default False) – Whether the outcome is binary

  • n_layers_out (int) – Number of hypothesis layers (n_layers_out x n_units_out + 1 x Linear layer)

  • n_units_s_out (int) – Number of hidden units in each shared hypothesis layer

  • n_units_p_out (int) – Number of hidden units in each private hypothesis layer

  • n_layers_r (int) – Number of representation layers before hypothesis layers (distinction between hypothesis layers and representation layers is made to match TARNet & SNets)

  • n_units_s_r (int) – Number of hidden units in each shared representation layer

  • n_units_p_r (int) – Number of hidden units in each private representation layer

  • private_out (bool, default False) – Whether the final prediction layer should be fully private, or retain a shared component.

  • weight_decay (float) – l2 (ridge) penalty

  • penalty_orthogonal (float) – orthogonalisation penalty

  • lr (float) – learning rate for optimizer

  • n_iter (int) – Maximum number of iterations

  • batch_size (int) – Batch size

  • val_split_prop (float) – Proportion of samples used for validation split (can be 0)

  • early_stopping (bool, default True) – Whether to use early stopping

  • patience (int) – Number of iterations to wait for a decrease in validation loss before triggering early stopping

  • n_iter_min (int) – Minimum number of iterations to go through before starting early stopping

  • n_iter_print (int) – Number of iterations after which to print updates

  • seed (int) – Seed used

  • opt (str, default 'adam') – Optimizer to use, accepts ‘adam’ and ‘sgd’

  • shared_repr (bool, default False) – Whether to use a shared representation block as in TARNet

  • lr_scale (float) – Whether to scale down the learning rate after unfreezing the private components of the network (only used if pretrain_shared=True)

  • normalize_ortho (bool, default False) – Whether to normalize the orthogonality penalty (by depth of the network)

  • clipping_value (int, default 1) – Gradients clipping value

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_ortho_penalty_asymmetric() torch.Tensor
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
fit(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor) catenets.models.torch.flextenet.FlexTENet

Fit treatment models.

Parameters
  • X (torch.Tensor of shape (n_samples, n_features)) – The features to fit to

  • y (torch.Tensor of shape (n_samples,) or (n_samples, 1)) – The outcome variable

  • w (torch.Tensor of shape (n_samples,)) – The treatment indicator

loss(y0_pred: torch.Tensor, y1_pred: torch.Tensor, y_true: torch.Tensor, t_true: torch.Tensor) torch.Tensor
predict(X: torch.Tensor, return_po: bool = False, training: bool = False) torch.Tensor

Predict treatment effects and potential outcomes

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test-sample features

  • return_po (bool, default False) – Whether to also return the potential-outcome predictions

Returns
  y – Predicted treatment effects

Return type
  array-like of shape (n_samples,)
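A minimal sketch of the fit/predict API (assumptions: the class is importable from catenets.models.torch, and the return ordering with return_po=True is assumed here, not taken from the docs).

```python
import torch

from catenets.models.torch import FlexTENet

X = torch.randn(400, 10)
w = torch.bernoulli(0.5 * torch.ones(400))
y = X[:, 0] + w * X[:, 1] + 0.1 * torch.randn(400)

model = FlexTENet(n_unit_in=10, binary_y=False, n_iter=500)
model.fit(X, y, w)

te_hat = model.predict(X)                                   # treatment effects only
te_hat, y0_hat, y1_hat = model.predict(X, return_po=True)   # assumed ordering: (te, mu_0, mu_1)
```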

training: bool
class PWLearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, te_estimator: Optional[Any] = None, n_folds: int = 2, n_layers_out: int = 2, n_layers_out_t: int = 2, n_units_out: int = 100, n_units_out_t: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, weight_decay: float = 0.0001, weight_decay_t: float = 0.0001, lr: float = 0.0001, lr_t: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = 'prop', patience: int = 10, n_iter_min: int = 200, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

PW-learner for CATE estimation, based on the singly robust Horvitz-Thompson pseudo-outcome.
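The Horvitz-Thompson (inverse-propensity-weighted) pseudo-outcome referred to above is, in the usual notation (a sketch; \hat{\pi} is the first-stage propensity estimate):

\[
\tilde{Y}_{PW} = \left( \frac{w}{\hat{\pi}(x)} - \frac{1 - w}{1 - \hat{\pi}(x)} \right) y
\]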

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
training: bool
class RALearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, te_estimator: Optional[Any] = None, n_folds: int = 2, n_layers_out: int = 2, n_layers_out_t: int = 2, n_units_out: int = 100, n_units_out_t: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, weight_decay: float = 0.0001, weight_decay_t: float = 0.0001, lr: float = 0.0001, lr_t: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = 'prop', patience: int = 10, n_iter_min: int = 200, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

RA-learner for CATE estimation, based on the singly robust regression-adjusted pseudo-outcome.
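The regression-adjusted pseudo-outcome referred to above is, in the usual notation (a sketch; \hat{\mu}_0, \hat{\mu}_1 are the first-stage potential-outcome estimates):

\[
\tilde{Y}_{RA} = w\,\bigl(y - \hat{\mu}_0(x)\bigr) + (1 - w)\,\bigl(\hat{\mu}_1(x) - y\bigr)
\]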

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
training: bool
class RLearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, te_estimator: Optional[Any] = None, n_folds: int = 2, n_layers_out: int = 2, n_layers_out_t: int = 2, n_units_out: int = 100, n_units_out_t: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, weight_decay: float = 0.0001, weight_decay_t: float = 0.0001, lr: float = 0.0001, lr_t: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = 'prop', patience: int = 10, n_iter_min: int = 200, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

R-learner for CATE estimation. Based on the pseudo-outcome (Y - mu(x))/(w - pi(x)) with sample weights (w - pi(x))^2. Can only be used if the .fit method of te_estimator accepts a 'sample_weight' argument.
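Written out, the weighted second-stage regression implied by this description is (a sketch; \hat{\mu} is the first-stage estimate of the conditional mean outcome and \hat{\pi} the propensity estimate):

\[
\hat{\tau} = \arg\min_{\tau} \sum_i \bigl(w_i - \hat{\pi}(x_i)\bigr)^2
  \left( \frac{y_i - \hat{\mu}(x_i)}{w_i - \hat{\pi}(x_i)} - \tau(x_i) \right)^2
\]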

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
training: bool
class SLearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, n_layers_out: int = 2, n_units_out: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 2, weight_decay: float = 0.0001, lr: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = None, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.base.BaseCATEEstimator

S-learner for treatment effect estimation (a single learner; the treatment indicator is included as just another feature).

Parameters
  • n_unit_in (int) – Number of features

  • binary_y (bool) – Whether the outcome is binary

  • po_estimator (sklearn/PyTorch model, default: None) – Custom potential outcome model. If this parameter is set, the rest of the parameters are ignored.

  • n_layers_out (int) – Number of hypothesis layers (n_layers_out x n_units_out + 1 x Linear layer)

  • n_layers_out_prop (int) – Number of hypothesis layers for the propensity score (n_layers_out_prop x n_units_out_prop + 1 x Linear layer)

  • n_units_out (int) – Number of hidden units in each hypothesis layer

  • n_units_out_prop (int) – Number of hidden units in each propensity score hypothesis layer

  • weight_decay (float) – l2 (ridge) penalty

  • lr (float) – learning rate for optimizer

  • n_iter (int) – Maximum number of iterations

  • batch_size (int) – Batch size

  • val_split_prop (float) – Proportion of samples used for validation split (can be 0)

  • n_iter_print (int) – Number of iterations after which to print updates

  • seed (int) – Seed used

  • nonlin (string, default 'elu') – Nonlinearity to use in the neural net. Can be ‘elu’, ‘relu’, ‘selu’ or ‘leaky_relu’.

  • weighting_strategy (str, optional, default None) – Whether to include a propensity head and which weighting strategy to use
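A conceptual sketch of the S-learner idea described above, using a hypothetical helper (this is not the library's internal _create_extended_matrices implementation): the fitted outcome model takes the treatment indicator as an extra feature column, and potential outcomes are read off by toggling that column to 0 or 1.

```python
import torch

def s_learner_cate(outcome_model, X: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: evaluate a model fit on [X, w] with w forced to 0 and 1."""
    zeros = torch.zeros(X.shape[0], 1)
    ones = torch.ones(X.shape[0], 1)
    y0_hat = outcome_model(torch.cat([X, zeros], dim=1))  # prediction under control (w = 0)
    y1_hat = outcome_model(torch.cat([X, ones], dim=1))   # prediction under treatment (w = 1)
    return y1_hat - y0_hat                                 # CATE estimate is the difference
```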

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_create_extended_matrices(X: torch.Tensor) torch.Tensor
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
fit(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor) catenets.models.torch.slearner.SLearner

Fit treatment models.

Parameters
  • X (torch.Tensor of shape (n_samples, n_features)) – The features to fit to

  • y (torch.Tensor of shape (n_samples,) or (n_samples, 1)) – The outcome variable

  • w (torch.Tensor of shape (n_samples,)) – The treatment indicator

predict(X: torch.Tensor, return_po: bool = False, training: bool = False) torch.Tensor

Predict treatment effects and potential outcomes

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test-sample features

  • return_po (bool, default False) – Whether to also return the potential-outcome predictions

Returns
  y – Predicted treatment effects

Return type
  array-like of shape (n_samples,)

training: bool
class SNet(n_unit_in: int, binary_y: bool = False, n_layers_r: int = 3, n_units_r: int = 100, n_layers_out: int = 2, n_units_r_small: int = 50, n_units_out: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 2, weight_decay: float = 0.0001, penalty_orthogonal: float = 0.01, penalty_disc: float = 0, lr: float = 0.0001, n_iter: int = 10000, n_iter_min: int = 200, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', ortho_reg_type: str = 'abs', patience: int = 10, clipping_value: int = 1, batch_norm: bool = True, with_prop: bool = True, early_stopping: bool = True, prop_loss_multiplier: float = 1, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.base.BaseCATEEstimator

Class implements SNet as discussed in Curth & van der Schaar (2021). In addition to the version implemented in the AISTATS paper, we also include an implementation that does not have propensity heads (set with_prop=False).

Parameters
  • n_unit_in (int) – Number of features

  • binary_y (bool, default False) – Whether the outcome is binary

  • n_layers_r (int) – Number of shared & private representation layers before the hypothesis layers

  • n_units_r (int) – Number of hidden units in the shared representation before the hypothesis layers

  • n_layers_out (int) – Number of hypothesis layers (n_layers_out x n_units_out + 1 x Linear layer)

  • n_layers_out_prop (int) – Number of hypothesis layers for the propensity score (n_layers_out_prop x n_units_out_prop + 1 x Linear layer)

  • n_units_out (int) – Number of hidden units in each hypothesis layer

  • n_units_out_prop (int) – Number of hidden units in each propensity score hypothesis layer

  • n_units_r_small (int) – Number of hidden units in each PO function's private representation

  • weight_decay (float) – l2 (ridge) penalty

  • lr (float) – learning rate for optimizer

  • n_iter (int) – Maximum number of iterations

  • batch_size (int) – Batch size

  • val_split_prop (float) – Proportion of samples used for validation split (can be 0)

  • patience (int) – Number of iterations to wait for a decrease in validation loss before triggering early stopping

  • n_iter_min (int) – Minimum number of iterations to go through before starting early stopping

  • n_iter_print (int) – Number of iterations after which to print updates

  • seed (int) – Seed used

  • nonlin (string, default 'elu') – Nonlinearity to use in the neural net. Can be ‘elu’, ‘relu’, ‘selu’ or ‘leaky_relu’.

  • penalty_disc (float, default zero) – Discrepancy penalty. Defaults to zero as this feature is not tested.

  • clipping_value (int, default 1) – Gradients clipping value
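A minimal usage sketch, assuming the class is importable from catenets.models.torch; with_prop=False selects the variant without propensity heads mentioned in the class description, and n_iter is lowered only to keep the example fast.

```python
import torch

from catenets.models.torch import SNet

X = torch.randn(400, 10)
w = torch.bernoulli(0.5 * torch.ones(400))
y = X[:, 0] + w * X[:, 1] + 0.1 * torch.randn(400)

model = SNet(n_unit_in=10, with_prop=False, n_iter=500)  # variant without propensity heads
model.fit(X, y, w)
cate_hat = model.predict(X)
```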

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_forward(X: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_maximum_mean_discrepancy(X: torch.Tensor, w: torch.Tensor) torch.Tensor
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_ortho_reg() float
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
_step(X: torch.Tensor, w: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]
fit(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor) catenets.models.torch.snet.SNet

Fit treatment models.

Parameters
  • X (torch.Tensor of shape (n_samples, n_features)) – The features to fit to

  • y (torch.Tensor of shape (n_samples,) or (n_samples, 1)) – The outcome variable

  • w (torch.Tensor of shape (n_samples,)) – The treatment indicator

loss(y0_pred: torch.Tensor, y1_pred: torch.Tensor, t_pred: torch.Tensor, discrepancy: torch.Tensor, y_true: torch.Tensor, t_true: torch.Tensor) torch.Tensor
predict(X: torch.Tensor, return_po: bool = False, training: bool = False) torch.Tensor

Predict treatment effects and potential outcomes

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test-sample features

  • return_po (bool, default False) – Whether to also return the potential-outcome predictions

Returns
  y – Predicted treatment effects

Return type
  array-like of shape (n_samples,)

training: bool
class TARNet(n_unit_in: int, binary_y: bool = False, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, nonlin: str = 'elu', penalty_disc: float = 0, batch_norm: bool = True, dropout: bool = False, dropout_prob: float = 0.2, **kwargs: Any)

Bases: catenets.models.torch.representation_nets.BasicDragonNet

Class implements the TARNet architecture of Shalit et al. (2017).

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
_step(X: torch.Tensor, w: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
training: bool
class TLearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, n_layers_out: int = 2, n_units_out: int = 100, weight_decay: float = 0.0001, lr: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.base.BaseCATEEstimator

TLearner class: two separate models are learned, one for each potential outcome function.
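In formula form, the T-learner fits \hat{\mu}_0 on the control samples and \hat{\mu}_1 on the treated samples, and estimates the CATE as their difference:

\[
\hat{\tau}(x) = \hat{\mu}_1(x) - \hat{\mu}_0(x)
\]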

Parameters
  • n_unit_in (int) – Number of features

  • binary_y (bool, default False) – Whether the outcome is binary

  • po_estimator (sklearn/PyTorch model, default: None) – Custom plugin model. If this parameter is set, the rest of the parameters are ignored.

  • n_layers_out (int) – Number of hypothesis layers (n_layers_out x n_units_out + 1 x Linear layer)

  • n_units_out (int) – Number of hidden units in each hypothesis layer

  • weight_decay (float) – l2 (ridge) penalty

  • lr (float) – learning rate for optimizer

  • n_iter (int) – Maximum number of iterations

  • batch_size (int) – Batch size

  • val_split_prop (float) – Proportion of samples used for validation split (can be 0)

  • n_iter_print (int) – Number of iterations after which to print updates

  • seed (int) – Seed used

  • nonlin (string, default 'elu') – Nonlinearity to use in the neural net. Can be ‘elu’, ‘relu’, ‘selu’ or ‘leaky_relu’.

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_state_dict_hooks: Dict[int, Callable]
fit(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor) catenets.models.torch.tlearner.TLearner

Train plug-in models.

Parameters
  • X (torch.Tensor of shape (n_samples, n_features)) – The features to fit to

  • y (torch.Tensor of shape (n_samples,) or (n_samples, 1)) – The outcome variable

  • w (torch.Tensor of shape (n_samples,)) – The treatment indicator

predict(X: torch.Tensor, return_po: bool = False, training: bool = False) torch.Tensor

Predict treatment effects and potential outcomes

Parameters
  • X (torch.Tensor of shape (n_samples, n_features)) – Test-sample features

  • return_po (bool) – Return potential outcomes too

Returns
  y

Return type
  torch.Tensor of shape (n_samples,)

training: bool
class ULearner(n_unit_in: int, binary_y: bool, po_estimator: Optional[Any] = None, te_estimator: Optional[Any] = None, n_folds: int = 2, n_layers_out: int = 2, n_layers_out_t: int = 2, n_units_out: int = 100, n_units_out_t: int = 100, n_units_out_prop: int = 100, n_layers_out_prop: int = 0, weight_decay: float = 0.0001, weight_decay_t: float = 0.0001, lr: float = 0.0001, lr_t: float = 0.0001, n_iter: int = 10000, batch_size: int = 100, val_split_prop: float = 0.3, n_iter_print: int = 50, seed: int = 42, nonlin: str = 'elu', weighting_strategy: Optional[str] = 'prop', patience: int = 10, n_iter_min: int = 200, batch_norm: bool = True, early_stopping: bool = True, dropout: bool = False, dropout_prob: float = 0.2)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

U-learner for CATE estimation, based on the pseudo-outcome (Y - mu(x))/(w - pi(x)).
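In display form, the pseudo-outcome above is

\[
\tilde{Y}_{U} = \frac{y - \hat{\mu}(x)}{w - \hat{\pi}(x)}
\]

the same pseudo-outcome as in the R-learner, but regressed on X without the (w - \hat{\pi}(x))^2 sample weights.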

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
training: bool
class XLearner(*args: Any, weighting_strategy: str = 'prop', **kwargs: Any)

Bases: catenets.models.torch.pseudo_outcome_nets.PseudoOutcomeLearner

X-learner for CATE estimation. Combines two CATE estimates via a weighting function g(x): tau(x) = g(x) tau_0(x) + (1-g(x)) tau_1(x)
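In display form, the combination above is

\[
\hat{\tau}(x) = g(x)\,\hat{\tau}_0(x) + \bigl(1 - g(x)\bigr)\,\hat{\tau}_1(x)
\]

where, with the default weighting_strategy='prop', g(x) is presumably the estimated propensity score \hat{\pi}(x).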

_backward_hooks: Dict[int, Callable]
_buffers: Dict[str, Optional[torch.Tensor]]
_first_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, fit_mask: torch.Tensor, pred_mask: torch.Tensor) Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
_forward_hooks: Dict[int, Callable]
_forward_pre_hooks: Dict[int, Callable]
_is_full_backward_hook: Optional[bool]
_load_state_dict_post_hooks: Dict[int, Callable]
_load_state_dict_pre_hooks: Dict[int, Callable]
_modules: Dict[str, Optional[Module]]
_non_persistent_buffers_set: Set[str]
_parameters: Dict[str, Optional[torch.nn.parameter.Parameter]]
_second_step(X: torch.Tensor, y: torch.Tensor, w: torch.Tensor, p: torch.Tensor, mu_0: torch.Tensor, mu_1: torch.Tensor) None
_state_dict_hooks: Dict[int, Callable]
predict(X: torch.Tensor, return_po: bool = False, training: bool = False) torch.Tensor

Predict treatment effects

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test-sample features

  • return_po (bool, default False) – Whether to return potential outcome predictions. Placeholder, can only accept False.

Returns

te_est – Predicted treatment effects

Return type

array-like of shape (n_samples,)

training: bool

Subpackages

Submodules