API
- Array
  Array-like type; alias of `Union[numpy.ndarray, tensorflow.python.ops.tensor_array_ops.TensorArray, float, List[float]]`
- class EmptyPrior
  Bases: `maxent.core.Prior`
  No prior deviation from target for restraint (exact agreement).
  - expected(l)
    Expected disagreement.
    - Parameters
      l (`float`) – The Lagrange multiplier
    - Returns
      Expected disagreement
- class Laplace(sigma)
  Bases: `maxent.core.Prior`
  Laplace distribution prior for expected deviation from target for restraint.
  - Parameters
    sigma (`float`) – Parameter for the Laplace prior; higher means more allowable disagreement
  - expected(l)
    Expected disagreement.
    - Parameters
      l (`float`) – The Lagrange multiplier
    - Return type
      `float`
    - Returns
      Expected disagreement
- class MaxentModel(restraints, name='maxent-model', **kwargs)
  Bases: `keras.engine.training.Model`
  Keras maximum entropy model.
  - call(inputs)
    Compute reweighted restraint values.
  - compile(optimizer=<keras.optimizers.optimizer_v2.adam.Adam object>, loss='mean_squared_error', metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs)
    See the `compile` method of `tf.keras.Model`.
  - fit(trajs, input_weights=None, batch_size=None, epochs=128, **kwargs)
    Fit to given observations with restraints.
    - Parameters
      trajs (`Union[ndarray, TensorArray, float, List[float]]`) – Observations, which can be input to `Restraint`
      input_weights (`Union[ndarray, TensorArray, float, List[float], None]`) – Array of starting weights
      batch_size (`Optional[int]`) – Should almost always equal the number of trajectories, unless you want to mix your Lagrange multipliers across trajectories
      kwargs – See the `fit` method of `tf.keras.Model` for further optional arguments, like `verbose=0` to hide output
    - Return type
      `History`
    - Returns
      The history of the fit
  - reset_weights()
    Zero out the weights of the model.
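Under the hood, fitting adjusts one Lagrange multiplier per restraint so that the reweighted restraint value matches its target under the mean-squared-error loss. A minimal NumPy sketch of that idea for a single multiplier `lam` (hand-rolled gradient descent; this is an illustration of the reweighting objective, not the library's Keras implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=1000)   # toy "trajectories" of a scalar observable
target = 0.5                  # restraint target for the observable's mean

lam = 0.0                     # single Lagrange multiplier
for _ in range(500):
    logits = -lam * obs
    w = np.exp(logits - logits.max())
    w /= w.sum()                        # maximum-entropy weights
    ew = np.sum(w * obs)                # reweighted restraint value
    var = np.sum(w * obs**2) - ew**2    # d(ew)/d(lam) = -var
    lam -= 0.5 * 2.0 * (ew - target) * (-var)  # step on squared error
```

With `MaxentModel` itself, the equivalent is `model.compile()` followed by `model.fit(trajs)`; the resulting weights then reweight averages over the trajectories.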
- class Prior
  Bases: `object`
  Prior distribution for expected deviation from target for restraint.
  - expected(l)
    Expected disagreement.
    - Parameters
      l (`float`) – The Lagrange multiplier
    - Return type
      `float`
    - Returns
      Expected disagreement
- class Restraint(fxn, target, prior=<maxent.core.EmptyPrior object>)
  Bases: `object`
  Restraint: includes function, target, and prior belief in deviation from target.
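Conceptually, a restraint pairs an observable function with its target value (plus an optional prior on deviation). An illustrative stand-in (the `ToyRestraint` class here is hypothetical and omits the prior) showing how restraint values and disagreements follow from weighted observations:

```python
import numpy as np

class ToyRestraint:
    """Hypothetical stand-in for maxent.core.Restraint: fxn plus target."""
    def __init__(self, fxn, target):
        self.fxn = fxn
        self.target = target

# restrain the mean and the second moment of the observations
restraints = [ToyRestraint(lambda x: x, 0.5),
              ToyRestraint(lambda x: x**2, 1.2)]

obs = np.random.default_rng(1).normal(size=8)
w = np.full(8, 1 / 8)  # uniform weights, i.e. before any fitting
vals = [np.sum(w * r.fxn(obs)) for r in restraints]       # reweighted values
gaps = [v - r.target for v, r in zip(vals, restraints)]   # disagreements
```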
- class HyperMaxentModel(restraints, prior_model, simulation, reweight=True, name='hyper-maxent-model', **kwargs)
  Bases: `maxent.core.MaxentModel`
  Keras maximum entropy model.
  - Parameters
    restraints (`List[Restraint]`) – List of `Restraint`
    prior_model (`ParameterJoint`) – `ParameterJoint` that specifies the prior
    simulation (`Callable[[Union[ndarray, TensorArray, float, List[float]]], Union[ndarray, TensorArray, float, List[float]]]`) – Callable that will generate observations given the output from `prior_model`
    reweight (`bool`) – If True, remove the effect of prior training updates via reweighting, which keeps the result as close as possible to the given untrained `prior_model`
    name (`str`) – Name of model
  - fit(sample_batch_size=256, final_batch_multiplier=4, param_epochs=None, outer_epochs=10, **kwargs)
    Fit to given outcomes from `simulation`.
    - Parameters
      sample_batch_size (`int`) – Number of observations to sample per outer epoch
      final_batch_multiplier (`int`) – Sets the number of final MaxEnt fitting steps after training `prior_model`. The final number of MaxEnt steps will be `final_batch_multiplier * sample_batch_size`
      param_epochs (`Optional[int]`) – Number of times `prior_model` will be fit to sampled observations
      outer_epochs (`int`) – Number of loops of sampling / `prior_model` fitting
      kwargs – See the `fit` method of `tf.keras.Model` for further optional arguments, like `verbose=0` to hide output
    - Return type
      `History`
    - Returns
      The `tf.keras.callbacks.History` of the fit
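The `fit` arguments above describe an outer loop of sampling prior parameters, running the simulation, and MaxEnt reweighting. A schematic NumPy version of one such loop (`prior_sample` and `simulation` are made-up stand-ins for `prior_model` and the user's simulation; the refit of `prior_model` to the reweighted samples is only indicated by a comment):

```python
import numpy as np

rng = np.random.default_rng(2)

def prior_sample(n):    # stand-in for prior_model: draw a scalar parameter
    return rng.normal(1.0, 0.5, size=n)

def simulation(theta):  # stand-in simulation: one observable per parameter
    return theta + rng.normal(0.0, 0.1, size=theta.shape)

target = 1.4
outer_epochs, sample_batch_size = 10, 256
for _ in range(outer_epochs):
    theta = prior_sample(sample_batch_size)  # sample prior parameters
    obs = simulation(theta)                  # generate observations
    lam = 0.0                                # MaxEnt reweighting step
    for _ in range(200):
        w = np.exp(-lam * obs)
        w /= w.sum()
        ew = np.sum(w * obs)
        var = np.sum(w * obs**2) - ew**2
        lam -= 2.0 * (ew - target) * (-var)
    # here HyperMaxentModel would also refit prior_model to the
    # reweighted samples (controlled by param_epochs / reweight)
```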
- class ParameterJoint(reshapers=None, inputs=None, outputs=None, **kwargs)
  Bases: `keras.engine.training.Model`
  Prior parameter model joint distribution.
  This packages up how you want to sample prior parameters into one joint distribution. It has the important ability of reshaping output from distributions, in case your simulation requires matrices, applying constraints, or projections.
  - Parameters
    inputs (`Union[Input, Tuple[Input], None]`) – `tf.keras.Input` or tuple of them
    outputs (`Optional[List[Distribution]]`) – List of `tfp.distributions.Distribution`
    reshapers (`Optional[List[Callable[[Union[ndarray, TensorArray, float, List[float]]], Union[ndarray, TensorArray, float, List[float]]]]]`) – Optional list of callables that will be called on outputs from your distribution
  - compile(optimizer, **kwargs)
    See the `compile` method of `tf.keras.Model`.
  - sample(N, return_joint=False)
    Generate samples.
    - Parameters
      N (`int`) – Number of samples (events)
      return_joint (`bool`) – If True, also return a joint `tfp.distributions.Distribution` that can be called on `y`
    - Return type
      `Union[Tuple[Union[ndarray, TensorArray, float, List[float]], Union[ndarray, TensorArray, float, List[float]], Any], ndarray, TensorArray, float, List[float]]`
    - Returns
      The reshaped output samples and (if `return_joint`) a value `y`, which can be used to compute probabilities, and the joint `tfp.distributions.Distribution`
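The `reshapers` mechanism can be pictured without TensorFlow: each callable post-processes raw samples from one distribution, e.g. packing values into a matrix or clipping to a constraint. A NumPy sketch (the samplers below are arbitrary stand-ins for `tfp` distributions, and `sample` is a simplified mimic of `ParameterJoint.sample`):

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-ins for two parameter distributions in the joint
samplers = [lambda n: rng.normal(size=(n, 4)),
            lambda n: rng.uniform(size=(n, 1))]

# reshapers: pack 4 raw values into a 2x2 matrix; clip to a constraint
reshapers = [lambda x: x.reshape(-1, 2, 2),
             lambda x: np.clip(x, 0.1, 0.9)]

def sample(N):
    """Simplified mimic of ParameterJoint.sample: draw, then reshape."""
    return [reshape(draw(N)) for draw, reshape in zip(samplers, reshapers)]

out = sample(5)   # out[0]: (5, 2, 2) matrices; out[1]: (5, 1) clipped values
```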
- class TrainableInputLayer(initial_value, constraint=None, **kwargs)
  Bases: `keras.engine.base_layer.Layer`
  Create a trainable input layer for a `tfp.distributions.Distribution`.
  This will, given a fake input, return a trainable weight set by `initial_value`. Use it to feed into distributions that can be trained.
  - Parameters
    initial_value (`Union[ndarray, TensorArray, float, List[float]]`) – Starting value; determines shape/dtype of the output
    constraint (`Optional[Callable[[Union[ndarray, TensorArray, float, List[float]]], float]]`) – Callable that returns a scalar given the output. See `tf.keras.layers.Layer`
    kwargs – See `tf.keras.layers.Layer` for additional arguments
  - call(inputs)
    See `call` of `tf.keras.layers.Layer`.
    - Return type
      `Union[ndarray, TensorArray, float, List[float]]`