API
- Array
  Array-like type.
  Alias of Union[numpy.ndarray, tensorflow.python.ops.tensor_array_ops.TensorArray, float, List[float]]. Written as Array throughout this reference.
- class EmptyPrior
  Bases: maxent.core.Prior
  No prior deviation from the target for the restraint (exact agreement).
  - expected(l)
    Expected disagreement.
    - Parameters
      - l (float) – The Lagrange multiplier
    - Returns
      Expected disagreement
- class Laplace(sigma)
  Bases: maxent.core.Prior
  Laplace distribution prior for the expected deviation from the target for a restraint.
  - Parameters
    - sigma (float) – Parameter for the Laplace prior; higher means more allowable disagreement
  - expected(l)
    Expected disagreement.
    - Parameters
      - l (float) – The Lagrange multiplier
    - Return type
      float
    - Returns
      Expected disagreement
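A quick, hypothetical sketch of how the two priors above are used (assuming EmptyPrior and Laplace are re-exported at the top level of the maxent package): EmptyPrior encodes exact agreement with the target, while Laplace tolerates disagreement controlled by sigma.

```python
import maxent  # assumes the core classes are exported at package level

l = 0.5  # a Lagrange multiplier
exact = maxent.EmptyPrior()         # exact agreement with the target
sloppy = maxent.Laplace(sigma=2.0)  # larger sigma allows more disagreement

print(exact.expected(l))   # expected deviation under exact agreement
print(sloppy.expected(l))  # expected deviation allowed by the Laplace prior
```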
- class MaxentModel(restraints, name='maxent-model', **kwargs)
  Bases: keras.engine.training.Model
  Keras maximum entropy model.
  - call(inputs)
    Compute reweighted restraint values.
  - compile(optimizer=<keras.optimizers.optimizer_v2.adam.Adam object>, loss='mean_squared_error', metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs)
    See the compile method of tf.keras.Model.
  - fit(trajs, input_weights=None, batch_size=None, epochs=128, **kwargs)
    Fit to the given observations with restraints.
    - Parameters
      - trajs (Array) – Observations, which can be input to Restraint
      - input_weights (Optional[Array]) – Array of starting weights
      - batch_size (Optional[int]) – Should almost always equal the number of trajs, unless you want to mix your Lagrange multipliers across trajectories
      - kwargs – See the fit method of tf.keras.Model for further optional arguments, like verbose=0 to hide output
    - Return type
      History
    - Returns
      The history of the fit
  - reset_weights()
    Zero out the weights of the model.
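A minimal sketch of the documented workflow (the data, the restraint function, and the numbers are made up for illustration, and the top-level imports assume the package re-exports its core classes):

```python
import numpy as np
import tensorflow as tf
import maxent

# 64 toy "trajectories", each a single scalar observation
trajs = np.random.normal(loc=1.0, scale=0.5, size=(64, 1)).astype(np.float32)

# Restrain the reweighted average observation to 1.2, with a Laplace prior
# allowing some disagreement with that target.
restraints = [
    maxent.Restraint(lambda t: t[..., 0], target=1.2, prior=maxent.Laplace(0.1)),
]

model = maxent.MaxentModel(restraints)
model.compile(tf.keras.optimizers.Adam(0.1), 'mean_squared_error')
history = model.fit(trajs, epochs=128, verbose=0)
```

After fitting, the per-trajectory maximum entropy weights can be read off the model (the package's examples expose them as model.traj_weights, though that attribute is not listed in this reference).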
- class Prior
  Bases: object
  Prior distribution for the expected deviation from the target for a restraint.
  - expected(l)
    Expected disagreement.
    - Parameters
      - l (float) – The Lagrange multiplier
    - Return type
      float
    - Returns
      Expected disagreement
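Because Prior only requires expected(l), a custom prior can be a small subclass. The sketch below is hypothetical (not part of the package), and the formula is illustrative only; match the sign and scaling conventions of the built-in priors before using anything like it.

```python
import maxent


class GaussianPrior(maxent.Prior):
    """Hypothetical Gaussian-style prior: allowed deviation scales with sigma**2."""

    def __init__(self, sigma):
        self.sigma = sigma

    def expected(self, l):
        # Illustrative form only: the expected deviation grows with the
        # Lagrange multiplier and with sigma**2.
        return -self.sigma ** 2 * l
```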
- class Restraint(fxn, target, prior=<maxent.core.EmptyPrior object>)
  Bases: object
  Restraint, consisting of a function, a target, and a prior belief in the deviation from the target. See the sketch after this entry.
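A hypothetical sketch of constructing restraints (the observable functions and targets are made up): the function is applied to an observation (trajectory), the target is the value its reweighted average should match, and the prior sets how much deviation from the target is tolerated.

```python
import maxent

# Exact agreement: the first value of each observation must average to 0.5
r_exact = maxent.Restraint(lambda traj: traj[..., 0], target=0.5)

# Soft agreement: the last value should be near 2.0, with slack
# controlled by the Laplace prior's sigma
r_soft = maxent.Restraint(lambda traj: traj[..., -1], target=2.0,
                          prior=maxent.Laplace(sigma=0.5))
```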
- class HyperMaxentModel(restraints, prior_model, simulation, reweight=True, name='hyper-maxent-model', **kwargs)
  Bases: maxent.core.MaxentModel
  Keras maximum entropy model.
  - Parameters
    - restraints (List[Restraint]) – List of Restraint
    - prior_model (ParameterJoint) – ParameterJoint that specifies the prior
    - simulation (Callable[[Array], Array]) – Callable that will generate observations given the output from prior_model
    - reweight (bool) – If True, remove the effect of prior training updates via reweighting, which stays as close as possible to the given untrained prior_model
    - name (str) – Name of the model
  - fit(sample_batch_size=256, final_batch_multiplier=4, param_epochs=None, outer_epochs=10, **kwargs)
    Fit to the given outcomes from simulation.
    - Parameters
      - sample_batch_size (int) – Number of observations to sample per outer epoch
      - final_batch_multiplier (int) – Sets the number of final MaxEnt fitting steps after training prior_model. The final number of MaxEnt steps will be final_batch_multiplier * sample_batch_size
      - param_epochs (Optional[int]) – Number of times prior_model will be fit to the sampled observations
      - outer_epochs (int) – Number of loops of sampling / prior_model fitting
      - kwargs – See the fit method of tf.keras.Model for further optional arguments, like verbose=0 to hide output
    - Return type
      History
    - Returns
      The tf.keras.callbacks.History of the fit
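A hedged, end-to-end sketch: the prior over a single simulation parameter is built as a ParameterJoint (documented below) around a trainable Normal, and simulate is a toy forward model. The wiring through tfp.layers.DistributionLambda, the shape conventions of simulate, and the compile calls are assumptions; adapt them to your own setup.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import maxent


def simulate(theta):
    # Toy forward model (shapes are assumptions): each sampled parameter
    # becomes the mean of a short trajectory of noisy observations.
    theta = np.reshape(np.asarray(theta, dtype=np.float32), (-1, 1))
    noise = np.random.normal(0.0, 0.1, size=(theta.shape[0], 8)).astype(np.float32)
    return theta + noise


# Trainable prior over one parameter: a Normal whose loc/scale are weights.
inputs = tf.keras.Input((1,))
w = maxent.TrainableInputLayer(np.array([0.0, 1.0], dtype=np.float32))(inputs)
theta_dist = tfp.layers.DistributionLambda(
    lambda t: tfp.distributions.Normal(loc=t[..., 0],
                                       scale=tf.math.softplus(t[..., 1])))(w)
prior_model = maxent.ParameterJoint([lambda x: x], inputs=inputs,
                                    outputs=[theta_dist])
prior_model.compile(tf.keras.optimizers.Adam(0.01))

# Restrain the last observation of each trajectory to be near 2.0.
restraints = [maxent.Restraint(lambda traj: traj[..., -1], target=2.0,
                               prior=maxent.Laplace(0.1))]

hmodel = maxent.HyperMaxentModel(restraints, prior_model=prior_model,
                                 simulation=simulate)
hmodel.compile(tf.keras.optimizers.Adam(0.01), 'mean_squared_error')
history = hmodel.fit(sample_batch_size=256, outer_epochs=10, verbose=0)
```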
- class ParameterJoint(reshapers=None, inputs=None, outputs=None, **kwargs)
  Bases: keras.engine.training.Model
  Prior parameter model joint distribution.
  This packages up how you want to sample the prior parameters into one joint distribution. It also has the important ability to reshape the output of the distributions, in case your simulation requires matrices, applying constraints, or projections.
  - Parameters
    - inputs (Union[Input, Tuple[Input], None]) – tf.keras.Input or tuple of them
    - outputs (Optional[List[Distribution]]) – List of tfp.distributions.Distribution
    - reshapers (Optional[List[Callable[[Array], Array]]]) – Optional list of callables that will be called on the outputs from your distributions
  - compile(optimizer, **kwargs)
    See the compile method of tf.keras.Model.
  - sample(N, return_joint=False)
    Generate samples.
    - Parameters
      - N (int) – Number of samples (events)
      - return_joint (bool) – Return a joint tfp.distributions.Distribution that can be called on y
    - Return type
      Union[Tuple[Array, Array, Any], Array]
    - Returns
      The reshaped output samples and, if return_joint, a value y which can be used to compute probabilities and the joint tfp.distributions.Distribution
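A hypothetical construction sketch, focusing on the reshaper: a trainable 2-vector parameter is sampled from a Normal and reshaped into a 2x1 matrix, as a stand-in for a simulation that wants matrix-shaped parameters. The DistributionLambda wiring and the tuple returned by sample(..., return_joint=True) follow the signatures above, but the names and shapes are assumptions.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import maxent

inputs = tf.keras.Input((1,))
w = maxent.TrainableInputLayer(np.zeros(2, dtype=np.float32))(inputs)
dist = tfp.layers.DistributionLambda(
    lambda t: tfp.distributions.Independent(
        tfp.distributions.Normal(loc=t, scale=1.0),
        reinterpreted_batch_ndims=1))(w)

# The reshaper turns each sampled 2-vector into a 2x1 column matrix.
joint = maxent.ParameterJoint([lambda x: tf.reshape(x, (-1, 2, 1))],
                              inputs=inputs, outputs=[dist])
joint.compile(tf.keras.optimizers.Adam(0.01))

# Per the documented return type, return_joint=True yields
# (reshaped samples, y, joint distribution).
samples, y, joint_dist = joint.sample(8, return_joint=True)
```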
- class TrainableInputLayer(initial_value, constraint=None, **kwargs)
  Bases: keras.engine.base_layer.Layer
  Create a trainable input layer for a tfp.distributions.Distribution.
  Given a fake input, this returns a trainable weight set by initial_value. Use it to feed into distributions that can be trained.
  - Parameters
    - initial_value (Array) – Starting value; determines the shape/dtype of the output
    - constraint (Optional[Callable[[Array], float]]) – Callable that returns a scalar given the output. See tf.keras.layers.Layer
    - kwargs – See tf.keras.layers.Layer for additional arguments
  - call(inputs)
    See call of tf.keras.layers.Layer.
    - Return type
      Array
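A minimal, hypothetical sketch of the layer in isolation: the content of the fake input is ignored, and the layer emits a trainable tensor initialized to initial_value.

```python
import numpy as np
import tensorflow as tf
import maxent

fake = tf.keras.Input((1,))
start = np.array([1.0, 0.5], dtype=np.float32)
params = maxent.TrainableInputLayer(start)(fake)

# Wrap in a Model just to evaluate it; the output starts at initial_value
# (possibly with a batch dimension added).
model = tf.keras.Model(inputs=fake, outputs=params)
print(model(np.zeros((1, 1), dtype=np.float32)))
```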