API
metapopulation model
- class py0.metapop_model.AddSusceptibleLayer(*args, **kwargs)
Bases:
Layer
- call(trajs)
This is where the layer’s logic lives.
Note that the call() method in tf.keras differs slightly from the standalone Keras API: in Keras, you can pass masking support to layers as additional arguments, whereas tf.keras provides the compute_mask() method to support masking.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
NumPy array or Python scalar values in inputs get cast as tensors.
Keras mask metadata is only collected from inputs.
Layers are built (via the build(input_shape) method) using shape info from inputs only.
input_spec compatibility is only checked against inputs.
Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
The SavedModel input specification is generated using inputs only.
Integration with ecosystem packages such as TFMOT, TFLite, and TF.js is only supported for inputs, not for tensors in positional or keyword arguments.
- *args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if the input came from a Keras layer with masking support).
- Returns:
A tensor or list/tuple of tensors.
- class py0.metapop_model.ContactInfectionLayer(*args, **kwargs)
Bases:
Layer
- call(neff_compartments, neff)
This is where the layer’s logic lives.
Note that the call() method in tf.keras differs slightly from the standalone Keras API: in Keras, you can pass masking support to layers as additional arguments, whereas tf.keras provides the compute_mask() method to support masking.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
NumPy array or Python scalar values in inputs get cast as tensors.
Keras mask metadata is only collected from inputs.
Layers are built (via the build(input_shape) method) using shape info from inputs only.
input_spec compatibility is only checked against inputs.
Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
The SavedModel input specification is generated using inputs only.
Integration with ecosystem packages such as TFMOT, TFLite, and TF.js is only supported for inputs, not for tensors in positional or keyword arguments.
- *args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if the input came from a Keras layer with masking support).
- Returns:
A tensor or list/tuple of tensors.
- class py0.metapop_model.DeltaRegularizer(value, strength=0.001)
Bases:
Regularizer
- class py0.metapop_model.MetaModel(*args, **kwargs)
Bases:
Model
- call(R, T, rho, params)
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
- training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
- mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).
- Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
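For orientation, a minimal generic sketch (not py0-specific) of how a subclassed tf.keras.Model overrides call() but is invoked through __call__:

```python
import tensorflow as tf

class TinyModel(tf.keras.Model):
    # Generic illustration: call() holds the logic, __call__ is the entry point.
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    def call(self, inputs, training=None):
        return self.dense(inputs)

model = TinyModel()
y = model(tf.ones((2, 3)))  # use model(inputs), never model.call(inputs)
```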
- class py0.metapop_model.MetaParameterJoint(*args, **kwargs)
Bases:
ParameterJoint
- class py0.metapop_model.MetapopLayer(*args, **kwargs)
Bases:
Layer
- build(input_shape)
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.
This is typically used to create the weights of Layer subclasses.
- Args:
- input_shape: Instance of TensorShape, or list of instances of
TensorShape if the layer expects a list of inputs (one instance per input).
- call(inputs)
This is where the layer’s logic lives.
Note that the call() method in tf.keras differs slightly from the standalone Keras API: in Keras, you can pass masking support to layers as additional arguments, whereas tf.keras provides the compute_mask() method to support masking.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
NumPy array or Python scalar values in inputs get cast as tensors.
Keras mask metadata is only collected from inputs.
Layers are built (via the build(input_shape) method) using shape info from inputs only.
input_spec compatibility is only checked against inputs.
Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
The SavedModel input specification is generated using inputs only.
Integration with ecosystem packages such as TFMOT, TFLite, and TF.js is only supported for inputs, not for tensors in positional or keyword arguments.
- *args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if the input came from a Keras layer with masking support).
- Returns:
A tensor or list/tuple of tensors.
- class py0.metapop_model.MinMaxConstraint(min, max)
Bases:
Constraint
Constrains weights to lie between min and max.
- class py0.metapop_model.NormalizationConstraint(axis, mask)
Bases:
Constraint
Normalizes weights after reshaping and applying the mask.
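For illustration only, a sketch of how a masked-normalization constraint of this kind is typically written against the tf.keras.constraints.Constraint API; the actual reshape and mask handling in py0 may differ:

```python
import tensorflow as tf

class MaskedNormalization(tf.keras.constraints.Constraint):
    # Hypothetical analogue of NormalizationConstraint: zero out masked
    # entries, then normalize along `axis` so the surviving weights sum to one.
    def __init__(self, axis, mask):
        self.axis = axis
        self.mask = tf.constant(mask, dtype=tf.float32)

    def __call__(self, w):
        w = w * self.mask                                       # apply mask
        total = tf.reduce_sum(w, axis=self.axis, keepdims=True)
        return w / (total + 1e-10)                              # normalize
```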
- class py0.metapop_model.ParameterHypers
Bases:
object
- class py0.metapop_model.ParameterJoint(*args, **kwargs)
Bases:
Model
- compile(optimizer, **kwargs)
Configures the model for training.
Example:
```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```
- Args:
- optimizer: String (name of optimizer) or optimizer instance. See
tf.keras.optimizers.
- loss: Loss function. May be a string (name of loss function), or
a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.
- metrics: List of metrics to be evaluated by the model during training
and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
- loss_weights: Optional list or dictionary specifying scalar coefficients
(Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.
- If a list, it is expected to have a 1:1 mapping to the model’s
outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
- weighted_metrics: List of metrics to be evaluated and weighted by
sample_weight or class_weight during training and testing.
- run_eagerly: Bool. Defaults to False. If True, this Model’s
logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy.
- steps_per_execution: Int. Defaults to 1. The number of batches to
run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).
- **kwargs: Arguments supported for backwards compatibility only.
- sample(N, return_joint=False)
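A hedged usage sketch; the ParameterJoint constructor arguments are not shown in this reference, so they are left as placeholders:

```python
import tensorflow as tf
from py0.metapop_model import ParameterJoint

# joint = ParameterJoint(...)  # constructor arguments not shown in this reference
# joint.compile(tf.keras.optimizers.Adam(learning_rate=0.01))
# samples = joint.sample(256)                                  # N = 256 parameter samples
# samples, joint_prob = joint.sample(256, return_joint=True)   # also return joint probability
```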
- class py0.metapop_model.PositiveMaskedConstraint(mask)
Bases:
Constraint
Constrains weights to be positive where the mask applies.
- class py0.metapop_model.TrainableInputLayer(*args, **kwargs)
Bases:
Layer
Creates a trainable input layer.
- call(inputs)
This is where the layer’s logic lives.
Note that the call() method in tf.keras differs slightly from the standalone Keras API: in Keras, you can pass masking support to layers as additional arguments, whereas tf.keras provides the compute_mask() method to support masking.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
NumPy array or Python scalar values in inputs get cast as tensors.
Keras mask metadata is only collected from inputs.
Layers are built (via the build(input_shape) method) using shape info from inputs only.
input_spec compatibility is only checked against inputs.
Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
The SavedModel input specification is generated using inputs only.
Integration with ecosystem packages such as TFMOT, TFLite, and TF.js is only supported for inputs, not for tensors in positional or keyword arguments.
- *args: Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
- **kwargs: Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if the input came from a Keras layer with masking support).
- Returns:
A tensor or list/tuple of tensors.
- class py0.metapop_model.TrainableMetaModel(*args, **kwargs)
Bases:
Model
- call(inputs)
Calls the model on new inputs and returns the outputs as tensors.
In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).
Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.
- Args:
- inputs: Input tensor, or dict/list/tuple of input tensors.
- training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
- mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).
- Returns:
A tensor if there is a single output, or a list of tensors if there is more than one output.
- compile(optimizer, **kwargs)
Configures the model for training.
Example:
```python
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=[tf.keras.metrics.BinaryAccuracy(),
                       tf.keras.metrics.FalseNegatives()])
```
- Args:
- optimizer: String (name of optimizer) or optimizer instance. See
tf.keras.optimizers.
- loss: Loss function. May be a string (name of loss function), or
a tf.keras.losses.Loss instance. See tf.keras.losses. A loss function is any callable with the signature loss = fn(y_true, y_pred), where y_true are the ground truth values, and y_pred are the model’s predictions. y_true should have shape (batch_size, d0, .. dN) (except in the case of sparse loss functions such as sparse categorical crossentropy which expects integer arrays of shape (batch_size, d0, .. dN-1)). y_pred should have shape (batch_size, d0, .. dN). The loss function should return a float tensor. If a custom Loss instance is used and reduction is set to None, return value has shape (batch_size, d0, .. dN-1) i.e. per-sample or per-timestep loss values; otherwise, it is a scalar. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses, unless loss_weights is specified.
- metrics: List of metrics to be evaluated by the model during training
and testing. Each of these can be a string (name of a built-in function), a function, or a tf.keras.metrics.Metric instance. See tf.keras.metrics. Typically you will use metrics=['accuracy']. A function is any callable with the signature result = fn(y_true, y_pred). To specify different metrics for different outputs of a multi-output model, you could also pass a dictionary, such as metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}. You can also pass a list to specify a metric or a list of metrics for each output, such as metrics=[['accuracy'], ['accuracy', 'mse']] or metrics=['accuracy', ['accuracy', 'mse']]. When you pass the strings 'accuracy' or 'acc', we convert this to one of tf.keras.metrics.BinaryAccuracy, tf.keras.metrics.CategoricalAccuracy, tf.keras.metrics.SparseCategoricalAccuracy based on the loss function used and the model output shape. We do a similar conversion for the strings 'crossentropy' and 'ce' as well.
- loss_weights: Optional list or dictionary specifying scalar coefficients
(Python floats) to weight the loss contributions of different model outputs. The loss value that will be minimized by the model will then be the weighted sum of all individual losses, weighted by the loss_weights coefficients.
- If a list, it is expected to have a 1:1 mapping to the model’s
outputs. If a dict, it is expected to map output names (strings) to scalar coefficients.
- weighted_metrics: List of metrics to be evaluated and weighted by
sample_weight or class_weight during training and testing.
- run_eagerly: Bool. Defaults to False. If True, this Model’s
logic will not be wrapped in a tf.function. Recommended to leave this as None unless your Model cannot be run inside a tf.function. run_eagerly=True is not supported when using tf.distribute.experimental.ParameterServerStrategy.
- steps_per_execution: Int. Defaults to 1. The number of batches to
run during each tf.function call. Running multiple batches inside a single tf.function call can greatly improve performance on TPUs or small models with a large Python overhead. At most, one full epoch will be run each execution. If a number larger than the size of the epoch is passed, the execution will be truncated to the size of the epoch. Note that if steps_per_execution is set to N, Callback.on_batch_begin and Callback.on_batch_end methods will only be called every N batches (i.e. before/after each tf.function execution).
- **kwargs: Arguments supported for backwards compatibility only.
- fit(steps=100, **kwargs)
Trains the model for a fixed number of epochs (iterations on a dataset).
- Args:
- x: Input data. It could be:
A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).
A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
A tf.keras.utils.experimental.DatasetCreator, which wraps a callable that takes a single argument of type tf.distribute.InputContext, and returns a tf.data.Dataset. DatasetCreator should be used when users prefer to specify the per-replica batching and sharding logic for the Dataset. See tf.keras.utils.experimental.DatasetCreator doc for more information.
A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given below. If using tf.distribute.experimental.ParameterServerStrategy, only DatasetCreator type is supported for x.
- y: Target data. Like the input data x,
it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator, or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from x).
- batch_size: Integer or None.
Number of samples per gradient update. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
- epochs: Integer. Number of epochs to train the model.
An epoch is an iteration over the entire x and y data provided (unless the steps_per_epoch flag is set to something other than None). Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.
- verbose: ‘auto’, 0, 1, or 2. Verbosity mode.
0 = silent, 1 = progress bar, 2 = one line per epoch. 'auto' defaults to 1 for most cases, but 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g., in a production environment).
- callbacks: List of keras.callbacks.Callback instances.
List of callbacks to apply during training. See tf.keras.callbacks. Note tf.keras.callbacks.ProgbarLogger and tf.keras.callbacks.History callbacks are created automatically and need not be passed into model.fit. tf.keras.callbacks.ProgbarLogger is created or not based on verbose argument to model.fit. Callbacks with batch-level calls are currently unsupported with tf.distribute.experimental.ParameterServerStrategy, and users are advised to implement epoch-level calls instead with an appropriate steps_per_epoch value.
- validation_split: Float between 0 and 1.
Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance. validation_split is not yet supported with tf.distribute.experimental.ParameterServerStrategy.
- validation_data: Data on which to evaluate
the loss and any model metrics at the end of each epoch. The model will not be trained on this data. Thus, note the fact that the validation loss of data provided using validation_split or validation_data is not affected by regularization layers like noise and dropout. validation_data will override validation_split. validation_data could be:
A tuple (x_val, y_val) of Numpy arrays or tensors.
A tuple (x_val, y_val, val_sample_weights) of NumPy arrays.
A tf.data.Dataset.
A Python generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
validation_data is not yet supported with tf.distribute.experimental.ParameterServerStrategy.
- shuffle: Boolean (whether to shuffle the training data
before each epoch) or str (for ‘batch’). This argument is ignored when x is a generator or an object of tf.data.Dataset. ‘batch’ is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks. Has no effect when steps_per_epoch is not None.
- class_weight: Optional dictionary mapping class indices (integers)
to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to “pay more attention” to samples from an under-represented class.
- sample_weight: Optional Numpy array of weights for
the training samples, used for weighting the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset, generator, or keras.utils.Sequence instance; instead provide the sample_weights as the third element of x.
- initial_epoch: Integer.
Epoch at which to start training (useful for resuming a previous training run).
- steps_per_epoch: Integer or None.
Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. When training with input tensors such as TensorFlow data tensors, the default None is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If x is a tf.data dataset and steps_per_epoch is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. If steps_per_epoch=-1 the training will run indefinitely with an infinitely repeating dataset. This argument is not supported with array inputs. When using tf.distribute.experimental.ParameterServerStrategy, steps_per_epoch=None is not supported.
- validation_steps: Only relevant if validation_data is provided and
is a tf.data dataset. Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. If ‘validation_steps’ is None, validation will run until the validation_data dataset is exhausted. In the case of an infinitely repeated dataset, it will run into an infinite loop. If ‘validation_steps’ is specified and only part of the dataset will be consumed, the evaluation will start from the beginning of the dataset at each epoch. This ensures that the same validation samples are used every time.
- validation_batch_size: Integer or None.
Number of samples per validation batch. If unspecified, will default to batch_size. Do not specify the validation_batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
- validation_freq: Only relevant if validation data is provided. Integer
or collections.abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.
- max_queue_size: Integer. Used for generator or keras.utils.Sequence
input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
- workers: Integer. Used for generator or keras.utils.Sequence input
only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
- use_multiprocessing: Boolean. Used for generator or
keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator as they can’t be passed easily to children processes.
- Unpacking behavior for iterator-like inputs:
A common pattern is to pass a tf.data.Dataset, generator, or
tf.keras.utils.Sequence to the x argument of fit, which will in fact yield not only features (x) but optionally targets (y) and sample weights. Keras requires that the output of such iterator-likes be unambiguous. The iterator should return a tuple of length 1, 2, or 3, where the optional second and third elements will be used for y and sample_weight respectively. Any other type provided will be wrapped in a length-one tuple, effectively treating everything as x. When yielding dicts, they should still adhere to the top-level tuple structure, e.g. ({"x0": x0, "x1": x1}, y). Keras will not attempt to separate features, targets, and weights from the keys of a single dict.
A notable unsupported data type is the namedtuple. The reason is that
it behaves like both an ordered datatype (tuple) and a mapping datatype (dict). So given a namedtuple of the form:
namedtuple("example_tuple", ["y", "x"])
it is ambiguous whether to reverse the order of the elements when interpreting the value. Even worse is a tuple of the form:
namedtuple("other_tuple", ["x", "y", "z"])
where it is unclear if the tuple was intended to be unpacked into x, y, and sample_weight or passed through as a single element to x. As a result the data processing code will simply raise a ValueError if it encounters a namedtuple. (Along with instructions to remedy the issue.)
- Returns:
A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
- Raises:
RuntimeError: 1. If the model was never compiled, or 2. if model.fit is wrapped in tf.function.
- ValueError: In case of mismatch between the provided input data
and what the model expects or when the input data is empty.
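The returned History object can be inspected after training. A short sketch, assuming model is a compiled TrainableMetaModel (fit(steps=...) as documented above):

```python
# Assumes `model` is a compiled TrainableMetaModel (see compile above).
history = model.fit(steps=100)
print(history.history.keys())            # e.g. dict_keys(['loss'])
final_loss = history.history['loss'][-1]  # loss at the last step
```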
- get_traj()
- py0.metapop_model.categorical_normal_layer(input, start_logits, start_mean, start_scale, pad, name, start_high=0.5)
- py0.metapop_model.contact_infection_func(infectious_compartments, area=None, dtype=tf.float64)
- py0.metapop_model.negloglik(y, rv_y)
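Helpers named negloglik usually follow the standard TensorFlow Probability pattern of scoring observations under a predicted distribution; the exact py0 implementation is not shown here, so the following is a sketch under that assumption:

```python
import tensorflow_probability as tfp

def negloglik_sketch(y, rv_y):
    # Negative log-likelihood of observations y under distribution rv_y.
    return -rv_y.log_prob(y)

rv = tfp.distributions.Normal(loc=0.0, scale=1.0)
nll = negloglik_sketch([0.5, -1.0], rv)  # elementwise negative log-likelihood
```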
- py0.metapop_model.normal_mat_layer(input, start, name, start_var=1, clip_high=10000000000.0)
Normally distributed trainable distribution. Zeros in the mobility matrix are preserved.
- py0.metapop_model.recip_norm_mat_dist(trans_times, trans_times_var, indices, sample_R=True, low=1)
- py0.metapop_model.recip_norm_mat_layer(input, time_means, time_vars, name, n_infectious_compartments=1)
Column-normalized reciprocal Gaussian trainable distribution. Zeros in the starting matrix are preserved.
utilities
- class py0.utils.TransitionMatrix(compartment_names, infectious_compartments)
Bases:
object
Defines the transition between different compartments in the disease model, given different epidemiology parameters.
- add_transition(name1, name2, time, time_var)
- Parameters
name1 (string) – source compartment
name2 (string) – destination compartment
time (float) – time it takes to move from the source compartment to the destination compartment. This is typically the reciprocal of a rate.
time_var (float) – variance of the time it takes to move from the source compartment to the destination compartment. Use zero unless you are creating an ensemble of trajectories to do inference using MaxEnt.
- prior_matrix()
- property value
Returns matrix value.
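A usage sketch for building a transition matrix; the S/E/A/I/R compartment names follow the defaults used by get_dist below, and whether infectious_compartments takes names or indices is assumed here:

```python
from py0.utils import TransitionMatrix

# Compartment names follow the S/E/A/I/R convention used by get_dist below.
tm = TransitionMatrix(['S', 'E', 'A', 'I', 'R'], infectious_compartments=['A', 'I'])
tm.add_transition('E', 'A', 2.0, 0.0)  # about 2 timesteps from E to A, zero variance
tm.add_transition('A', 'I', 3.0, 0.0)
tm.add_transition('I', 'R', 7.0, 0.0)
print(tm.value)                        # the resulting transition matrix
```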
- py0.utils.compartment_restrainer(restrained_patches, restrained_compartments, ref_traj, prior, npoints=5, noise=0, start_time=0, end_time=None, time_average=7, marker_size=10, marker_color='r')
Adds restraints to the reference trajectory based on selected compartments of selected patches.
- Parameters
restrained_patches (list) – index of the patches (nodes) restrained
restrained_compartments (list) – index values for the restrained compartments
ref_traj (a [1, T, M, C] tensor with dtype tf.float32, where T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – reference traj
prior (maxent.prior) – Prior distribution for expected deviation from target for restraint. Can be either ‘EmptyPrior’ for exact agreement or set to ‘Laplace’ for more allowable disagreement.
npoints (int) – number of data points in each restrained compartment
noise (float) – multiplicative noise to be added to observations to allow higher uncertainty
start_time (int) – index for the lower time limit of restraints
end_time (int) – index for the higher time limit of restraints. If not provided, maximum timestep will be assumed.
time_average (int) – number of timesteps used for time averaging of restraints
marker_size (int) – marker size for restraints
marker_color (string) – marker color for restraints
- Returns
restraints, plot_fxns_list
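A hedged usage sketch, assuming a reference trajectory ref_traj of shape [1, T, M, C] and the maxent package's Laplace prior (its constructor argument here is illustrative):

```python
import maxent
from py0.utils import compartment_restrainer

# ref_traj: [1, T, M, C] reference trajectory, assumed already built
prior = maxent.Laplace(0.01)            # 'Laplace' prior; argument value is illustrative
restraints, plot_fxns_list = compartment_restrainer(
    restrained_patches=[0, 3],          # restrain patches 0 and 3
    restrained_compartments=[2],        # e.g. the index of an infected compartment
    ref_traj=ref_traj,
    prior=prior,
    npoints=5,
    noise=0.05,
)
```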
- py0.utils.draw_graph(graph, weights=None, heatmap=False, title=None, dpi=150, true_origin=None, color_bar=True)
Plots a networkx graph.
- Parameters
graph – networkx graph
weights – probability of being exposed in every patch at time zero across all the sampled trajectories. If not provided, a uniform probability is assumed over all nodes.
heatmap (bool) – change node color based on weights
title (string) – plot title
dpi (int) – dpi value of plot
true_origin (int) – index for the true origin node
color_bar (bool) – enables color bar in plot
- py0.utils.draw_graph_on_map(mobility_matrix, geojson, title=None, node_size=300, node_color='#eb4034', edge_color='#555555', map_face_colors=None, fontsize=12, figsize=(10, 10), ax=None, alpha_edge=0.3, alpha_node=0.8, true_origin=None, restrained_patches=None, obs_color='C0', org_color='C8', show_map_only=False, show_legend=False)
Plots a networkx graph on a map, given mobility flows and geographic features. The edges are weighted by the mobility flows between the nodes. This function also allows visualization of the true origin node and the observed nodes.
- Parameters
mobility_matrix (numpy array of [M, M], where M is the number of nodes in the metapopulation) – mobility flows between the nodes
geojson (string) – path to GeoJSON file describing the geographic features of the metapopulation
title (string) – figure title
node_size (int) – size for the networkx nodes
node_color (string or a list of strings with length M) – color for the networkx nodes
edge_color (string) – color for the networkx edges
map_face_colors (string or a list of strings with length M) – color for the faces of patches on the map
fontsize (float) – font size
figsize (tuple) – figure size
ax – matplotlib.axes.AxesSubplot. Defaults to a new axis.
show_legend (bool) – show legend for the true origin or observations
alpha_edge (float) – alpha value for edges that allows transparency
alpha_node (float) – alpha value for nodes that allows transparency
true_origin (int) – index for the true origin node
restrained_patches (list) – index of the patches (nodes) restrained
obs_color (string) – marker color for the observation nodes
org_color (string) – marker color for the true origin node
show_map_only (bool) – show only the map, without the networkx graph. Defaults to False.
- py0.utils.exposed_finder(trajs)
Finds the initial exposed patch (t=0) for trajs
- Parameters
trajs (tensor with dtype tf.float32 of shape [N, T, M, C] where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – ensemble of trajectories after sampling
- Returns
A numpy array containing the index of the initial exposed node for the ensemble of trajectories
- py0.utils.gen_graph(M)
Generates a fully connected (dense) networkx graph of size M, along with its edge list and node list.
- Parameters
M (int) – number of nodes in the metapopulation
- Returns
graph, edge list, node list
- py0.utils.gen_graph_from_R(mobility_matrix)
Generates a networkx graph of size mobility_matrix.shape[0], along with its edge list and node list.
- Parameters
mobility_matrix (numpy array of [M, M], where M is the number of nodes in the metapopulation) – mobility flows between the nodes
- Returns
graph, edge list, node list
- py0.utils.gen_random_graph(M, p=1.0, seed=None)
Returns a random networkx graph of size M with connection probability p
- Parameters
M (int) – number of nodes in the metapopulation
p (float) – node connection probability
seed – allows random seeding for graph generations
- Returns
graph
- py0.utils.get_dist(prior_params, compartments=['E', 'A', 'I', 'R'])
Gets distributions for the model parameters in the ensemble trajectory sampling.
- Parameters
prior_params (list) – model parameters during sampling over different batches.
compartments (list) – list of compartments except for ‘S’ (susceptible) as strings.
- Returns
list of model parameter distributions
- py0.utils.graph_degree(graph)
Returns the graph degree of a networkx graph.
- Parameters
graph – networkx graph
- Returns
graph degree
- py0.utils.merge_history(base, other, prefix='')
- py0.utils.p0_loss(trajs, weights, true_origin)
Returns the cross-entropy loss for p0, given the sampled trajectories, the MaxEnt weights, and the ground-truth p0 node.
- Parameters
trajs (a [N, T, M, C] tensor with dtype tf.float32, where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments) – sampled ensemble of trajectories
weights (tensor with dtype tf.float32) – weights of the trajectories in the ensemble. If not provided, uniform weights of 1/N are assumed.
true_origin (int) – index for the true origin node
- Returns
cross-entropy loss
- py0.utils.p0_map(prior_exposed_patch, meta_pop_size, weights=None, patch_names=None, title=None, choropleth=False, geojson=None, fontsize=12, figsize=(15, 8), vmin=None, vmax=None, restrained_patches=None, true_origin=None, obs_size=5, obs_color='C0', org_color='C8', colormap='Reds', ax=None, projection=None, show_legend=True, show_cbar=True)
Plots the weighted probability of being exposed in every patch at time zero on a grid or on a choropleth map (this requires the geopandas and geoplot packages). If choropleth plotting is enabled, make sure your GeoJSON has 'county' as the header for the county-name column and that your patch names are alphabetically sorted.
- Parameters
prior_exposed_patch (an array of size N (sample size)) – output of the exposed_finder function
meta_pop_size (int) – size of the metapopulation
weights (tensor with dtype tf.float32) – weights of the trajectories in the ensemble. If not provided, uniform weights of 1/N are assumed.
patch_names (list) – names of the patches. Note that the names should match the names from the GeoJSON if using choropleth.
title (string) – figure title
choropleth (bool) – turn on if plotting choropleth plots
geojson (string) – path to GeoJSON file describing the geographic features of the metapopulation
fontsize (float) – font size
figsize (tuple) – figure size
vmin (float) – minimum value of the color bar
vmax (float) – maximum value of the color bar
restrained_patches (list) – index of the patches (nodes) restrained
true_origin (int) – index for the true origin node
obs_size (float) – marker size for the observation nodes
obs_color (string) – marker color for the observation nodes
org_color (string) – marker color for the true origin node
colormap (string) – Matplotlib colormaps
ax – matplotlib.axes.AxesSubplot. Defaults to a new axis.
projection – the projection to use. For reference see Working with Projections.
show_legend (bool) – show legend for the true origin or observations
show_cbar (bool) – show heatmap color bar
- py0.utils.patch_quantile(trajs, *args, ref_traj=None, weights=None, lower_q_bound=0.3333333333333333, upper_q_bound=0.6666666666666666, restrained_patches=None, plot_fxns_list=None, figsize=(18, 18), patch_names=None, fancy_shading=False, n_shading_gradients=30, alpha=0.6, obs_color='C0', yscale_log=False, **kw_args)
Does traj_quantile for trajectories of shape [N, T, M, C], where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.
- Parameters
trajs (tensor with dtype tf.float32 of shape [N, T, M, C] where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – ensemble of trajectories after sampling
ref_traj (tensor with dtype tf.float32 of shape [1, T, M, C] where T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – reference trajectory
weights (tensor with dtype tf.float32) – weights for each trajectory in the ensemble. If not defined, uniform weights are assumed.
lower_q_bound (float) – lower quantile bound
upper_q_bound (float) – upper quantile bound
restrained_patches (list) – index of the patches (nodes) restrained
plot_fxns_list (list) – output of compartment_restrainer
figsize (tuple) – figure size
patch_names (list) – names of the patches (nodes). If not provided, patch names default to their indices.
fancy_shading (bool) – allows for gradient shading of the confidence interval
n_shading_gradients (int) – number of intervals for shading gradients
alpha (float) – alpha value for edges that allows transparency
obs_color (string) – marker color for the observation nodes
yscale_log (bool) – change y axis scale to log for better visualization of the observations.
- py0.utils.plot_dist(R_dist, E_A, A_I, I_R, start_exposed_dist, beta_dist, name='prior')
Plots a seaborn.distplot for the model's prior parameter distribution.
- Parameters
R_dist – sampled mobility flows as tensor with dtype tf.float32
E_A – time for going from E->A as tensor with dtype tf.float32
A_I – time for going from A->I as tensor with dtype tf.float32
I_R – time for going from I->R as tensor with dtype tf.float32
start_exposed_dist – starting exposed fraction as tensor with dtype tf.float32
beta_dist – beta value(s) as tensor with dtype tf.float32
name (string) – name for the distributions that shows up in the figure title
- py0.utils.sparse_graph_mobility(sparse_graph, fully_connected_mobility_matrix)
Generates a sparse mobility matrix from a sparse graph and a fully connected mobility matrix. For a fully connected graph, the output mobility matrix remains the same.
- Parameters
sparse_graph – networkx sparse graph
fully_connected_mobility_matrix (numpy array) – [M, M] array with values defining mobility flows between nodes
- Returns
sparse mobility matrix
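A sketch combining the graph helpers above; the mobility values are illustrative:

```python
import numpy as np
from py0.utils import gen_random_graph, sparse_graph_mobility

M = 10
graph = gen_random_graph(M, p=0.3, seed=42)       # sparse random metapopulation graph
dense_mobility = np.random.uniform(size=(M, M))   # illustrative fully connected flows
mobility = sparse_graph_mobility(graph, dense_mobility)
```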
- py0.utils.traj_loss(ref_traj, trajs, weights)
Returns the Kullback–Leibler (KL) divergence loss for the predicted trajectory, based on a reference trajectory and the MaxEnt-reweighted trajectories.
- Parameters
ref_traj (a [1, T, M, C] tensor with dtype tf.float32, where T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – reference traj
trajs (a [N, T, M, C] tensor with dtype tf.float32, where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments) – sampled ensemble of trajectories
- Returns
KL divergence as a scalar value
- py0.utils.traj_quantile(trajs, weights=None, lower_q_bound=0.3333333333333333, upper_q_bound=0.6666666666666666, figsize=(9, 9), names=None, plot_means=True, ax=None, add_legend=True, alpha=0.6, fancy_shading=False, n_shading_gradients=30)
Makes a plot of all the trajectories and the average trajectory, based on trajectory weights and lower and upper quantile values.
- Parameters
trajs (tensor with dtype tf.float32 of shape [N, T, M, C] where N is the number of samples, T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – ensemble of trajectories after sampling
weights (tensor with dtype tf.float32) – weights for each trajectory in the ensemble. If not defined, uniform weights are assumed.
lower_q_bound (float) – lower quantile bound
upper_q_bound (float) – upper quantile bound
figsize (tuple) – figure size
names (list) – name of compartments as strings
plot_means (bool) – if True, approximates quantiles as distance from the median applied to the mean
ax – matplotlib.axes.AxesSubplot. Defaults to a new axis.
add_legend (bool) – show legend
alpha (float) – alpha value for edges that allows transparency
fancy_shading (bool) – allows for gradient shading of the confidence interval
n_shading_gradients (int) – number of intervals for shading gradients
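A usage sketch, assuming an already-sampled ensemble trajs of shape [N, T, M, C]:

```python
import matplotlib.pyplot as plt
from py0.utils import traj_quantile

# trajs: [N, T, M, C] ensemble (assumed already sampled); weights are optional
fig, ax = plt.subplots(figsize=(9, 9))
traj_quantile(trajs, names=['S', 'E', 'A', 'I', 'R'], ax=ax)
plt.show()
```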
- py0.utils.traj_to_restraints(ref_traj, inner_slice, npoints, prior, noise=0.1, time_average=7, start_time=0, end_time=None, marker_size=10, marker_color='r')
Creates npoints restraints based on a given trajectory, with multiplicative noise and time averaging. For example, it could be weekly averages with some noise.
- Parameters
ref_traj (a [1, T, M, C] tensor with dtype tf.float32, where T is the number of timesteps, M is the number of patches (nodes) and C is the number of compartments.) – reference traj
inner_slice (list) – list of length 2. First index determines the patch and second index determines what compartment on that patch is restrained.
npoints (int) – number of data points in each restrained compartment
prior (maxent.prior) – Prior distribution for expected deviation from target for restraint. Can be either ‘EmptyPrior’ for exact agreement or set to ‘Laplace’ for more allowable disagreement.
noise (float) – multiplicative noise to be added to observations to allow higher uncertainty
time_average (int) – number of timesteps used for time averaging of restraints
start_time (int) – index for the lower time limit of restraints
end_time (int) – index for the higher time limit of restraints. If not provided, maximum timestep will be assumed.
marker_size (int) – marker size for restraints
marker_color (string) – marker color for restraints
- Returns
list of restraints, list of functions which take a matplotlib axis and lambda value and plot the restraint on it.
- py0.utils.weighted_exposed_prob_finder(prior_exposed_patch, meta_pop_size, weights=None)
Finds the weighted probability of being exposed in every patch at time zero across all the sample trajs.
- Parameters
prior_exposed_patch (an array of size N (sample size)) – output of the exposed_finder function
meta_pop_size (int) – size of the metapopulation
weights (tensor with dtype tf.float32) – weights of the trajectories in the ensemble. If not provided, uniform weights of 1/N are assumed.
- Returns
weighted probability of being exposed across all patches
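exposed_finder and this function compose into a simple origin-inference pipeline; a sketch assuming trajs, weights, and the metapopulation size M are already defined:

```python
from py0.utils import exposed_finder, weighted_exposed_prob_finder

# trajs: [N, T, M, C] sampled ensemble; weights: optional MaxEnt weights
exposed_idx = exposed_finder(trajs)  # initial exposed patch per sample
probs = weighted_exposed_prob_finder(exposed_idx, meta_pop_size=M, weights=weights)
# probs[m]: weighted probability that patch m was exposed at t = 0
```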
- py0.utils.weighted_quantile(values, quantiles, sample_weight=None, values_sorted=False, old_style=False)
Very close to numpy.percentile, but supports weights. Note: quantiles should be in [0, 1].
- Parameters
values – numpy.array with data
quantiles – array-like with many quantiles needed
sample_weight – array-like of the same length as values
values_sorted – bool; if True, avoids sorting the initial array
old_style – if True, will correct output to be consistent with numpy.percentile.
- Returns
numpy.array with computed quantiles.
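A quick example with weighted data:

```python
import numpy as np
from py0.utils import weighted_quantile

values = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 1.0, 1.0, 10.0])    # the last sample dominates
median = weighted_quantile(values, [0.5], sample_weight=w)
```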