pyabc.util

Utility functions that did not fit elsewhere.

class pyabc.util.EventIxs(ts: Collection[int] | int = None, sims: Collection[int] | int = None, from_t: int = None, from_sims: int = None)[source]

Bases: object

Indicate whether, at a given iteration, the conditions for something to happen are met.

Used to e.g. update weights, or train regression models.

__init__(ts: Collection[int] | int = None, sims: Collection[int] | int = None, from_t: int = None, from_sims: int = None)[source]
Parameters:
  • ts – Time points at which something should happen. This can be either a collection of time points, or a single time point. A value of inf is interpreted as all time points.

  • sims – Numbers of total simulations after which something should happen. This can be either a collection of total simulation numbers, or a single total simulation number.

  • from_t – Always do something starting with generation from_t.

  • from_sims – Always do something as soon as from_sims simulations have been hit.
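
A minimal construction sketch (the specific values are made up for illustration):

    from pyabc.util import EventIxs

    # Event at generations 0 and 5 only.
    ixs = EventIxs(ts=[0, 5])

    # Event at every generation (inf is interpreted as all time points).
    always = EventIxs(ts=float("inf"))

    # Event once 10_000 total simulations are reached,
    # and at every generation from t=3 onwards.
    mixed = EventIxs(sims=10_000, from_t=3)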

act(t: int, total_sims: int, modify: bool = True) bool[source]

Inform whether to do something at a given time index t.

Note

This method is not idempotent regarding simulation counts; it should be called only once (unless modify=False is passed).

Parameters:
  • t – Time point.

  • total_sims – Total number of simulations so far.

  • modify – Whether to remember actions. If False, this method can be safely re-called.

Returns:

hit – Whether a criterion has been hit.

Return type:

bool
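
As a hedged usage sketch, a driver loop might query act once per generation; the per-generation simulation count below is made up for illustration:

    from pyabc.util import EventIxs

    ixs = EventIxs(ts=[1, 3], sims=5_000)

    total_sims = 0
    for t in range(5):
        total_sims += 1_000  # hypothetical simulations in this generation
        # modify=True (the default) remembers hit simulation criteria,
        # so call act only once per generation
        if ixs.act(t=t, total_sims=total_sims):
            print(f"event at t={t}, total_sims={total_sims}")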

probably_has_late_events() bool[source]

Whether event indices > 0 are likely to occur.

This is useful in order to know whether to e.g. collect rejected particles.

Returns:

True if indices > 0 are likely to evaluate to True, False otherwise.

Return type:

bool

requires_calibration() bool[source]

Whether at time 0 an event is likely to happen.

Returns:

Whether there will be a time-point event at time 0 (total-simulation events should typically only occur later).

Return type:

bool

static to_instance(maybe_event_ixs: EventIxs | Collection[int] | int) EventIxs[source]

Create instance from instance or collection of time points.

Parameters:

maybe_event_ixs – Can be either an EventIxs instance already, or a collection of integers, or inf (which is interpreted as every time point), or an int (which is interpreted as range(0, …, maybe_event_ixs)), all of which are interpreted as time point criteria.

Returns:

A valid EventIxs instance.

Return type:

EventIxs
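
A short sketch of the accepted inputs, following the description above:

    from pyabc.util import EventIxs

    EventIxs.to_instance(EventIxs(ts=[2, 4]))  # instance is passed through
    EventIxs.to_instance([0, 2, 4])            # collection of time points
    EventIxs.to_instance(3)                    # interpreted as range(0, 3)
    EventIxs.to_instance(float("inf"))         # every time point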

class pyabc.util.ParTrafo(trafos: List[Callable[[ndarray], ndarray]] = None, trafo_ids: str | List[str] = '{par_id}_{trafo_ix}')[source]

Bases: ParTrafoBase

Simple parameter transformation that accepts a list of transformations.

The implementation assumes that each transformation maps n_par -> n_par.

Parameters:

trafos – Transformations to apply. Defaults to a single identity mapping.

__call__(par_dict: dict) ndarray[source]

Transform parameters from input dict.

__init__(trafos: List[Callable[[ndarray], ndarray]] = None, trafo_ids: str | List[str] = '{par_id}_{trafo_ix}')[source]
get_ids() List[str][source]

Calculate keys as: {par_id_1}_{trafo_1}, …, {par_id_n}_{trafo_1}, …, {par_id_1}_{trafo_m}, …, {par_id_n}_{trafo_m}

initialize(keys: List[str])[source]

Initialize. Called once per analysis.
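
A minimal sketch, assuming scalar parameter values and the default trafo_ids pattern; the printed values are indicative only:

    from pyabc.util import ParTrafo

    # Identity and square transformations, each mapping n_par -> n_par.
    trafo = ParTrafo(trafos=[lambda x: x, lambda x: x**2])

    # Called once per analysis with the parameter keys.
    trafo.initialize(keys=["p1", "p2"])

    # Ids are ordered per transformation, e.g. ['p1_0', 'p2_0', 'p1_1', 'p2_1'].
    print(trafo.get_ids())

    # Transform a parameter dict into a flat array, e.g. [2., 3., 4., 9.].
    print(trafo({"p1": 2.0, "p2": 3.0}))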

class pyabc.util.ParTrafoBase[source]

Bases: ABC

Parameter transformations to use as regression targets.

It may be useful to use as regression targets not simply the original parameters theta, but transformations thereof, such as moments, e.g. theta**2. In particular, this can help overcome non-identifiabilities.

abstract __call__(par_dict: dict) ndarray[source]

Transform parameters from input dict.

abstract get_ids() List[str][source]

Identifiers for the parameter transformations.

initialize(keys: List[str])[source]

Initialize. Called once per analysis.
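
A hypothetical subclass sketch, using theta and theta**2 as regression targets; the class name and id scheme are made up, only the interface follows the abstract methods above:

    from typing import List

    import numpy as np

    from pyabc.util import ParTrafoBase


    class MomentTrafo(ParTrafoBase):
        """Hypothetical transformation to theta and theta**2."""

        def initialize(self, keys: List[str]):
            # Remember the parameter keys; called once per analysis.
            self.keys = keys

        def __call__(self, par_dict: dict) -> np.ndarray:
            vals = np.array([par_dict[key] for key in self.keys], dtype=float)
            return np.concatenate([vals, vals**2])

        def get_ids(self) -> List[str]:
            return list(self.keys) + [f"{key}_squared" for key in self.keys]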

pyabc.util.bound_pop_size_from_env(pop_size: int)[source]

Bound the population size if the corresponding environment variable is set.

Parameters:

pop_size – Intended population size.

Returns:

bounded_pop_size – The minimum of pop_size and the environment variable PYABC_MAX_POP_SIZE.
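
A small usage sketch, assuming the environment variable is read at call time; the bound value is arbitrary:

    import os

    from pyabc.util import bound_pop_size_from_env

    os.environ["PYABC_MAX_POP_SIZE"] = "500"
    print(bound_pop_size_from_env(1000))  # bounded to 500
    print(bound_pop_size_from_env(100))   # unchanged, 100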

pyabc.util.dict2arr(dct: dict | ndarray, keys: List) ndarray[source]

Convert dictionary to 1d array, in specified key order.

Parameters:
  • dct – If dict-like, the values of all keys are extracted into a 1d array. Entries can be data frames, ndarrays, or single numbers.

  • keys – Keys of interest; also defines the order.

Returns:

arr – 1d array of all concatenated values.

Return type:

ndarray

pyabc.util.dict2arrlabels(dct: dict, keys: List) List[str][source]

Get label array consistent with the output of dict2arr.

Can be called e.g. once on the observed data and used for logging.

Parameters:
  • dct – Model output or observed data.

  • keys – Keys of interest; also defines the order.

Returns:

labels – List of labels consistent with the output of dict2arr.

Return type:

List[str]
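
A short sketch of both helpers together; the exact label strings depend on the entry types:

    import numpy as np

    from pyabc.util import dict2arr, dict2arrlabels

    data = {"mean": 1.5, "traj": np.array([0.1, 0.2, 0.3])}
    keys = ["mean", "traj"]

    arr = dict2arr(data, keys)           # 1d array [1.5, 0.1, 0.2, 0.3]
    labels = dict2arrlabels(data, keys)  # one label per entry of arr
    print(arr, labels)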

pyabc.util.io_dict2arr(fun)[source]

Wrapper parsing input dicts to ndarrays.

Assumes the array is the first argument, and self holds a keys variable.
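
A hedged sketch of how the decorator might be applied; the AbsSumDistance class is made up for illustration, assuming that a dict passed as the first argument is converted via the keys stored in self.keys:

    import numpy as np

    from pyabc.util import io_dict2arr


    class AbsSumDistance:
        """Hypothetical callable; `self.keys` defines the conversion order."""

        def __init__(self, keys):
            self.keys = keys

        @io_dict2arr
        def __call__(self, data) -> float:
            # `data` arrives here as a 1d ndarray, even if a dict was passed.
            return float(np.sum(np.abs(data)))


    dist = AbsSumDistance(keys=["s1", "s2"])
    print(dist({"s1": 1.0, "s2": -2.0}))  # 3.0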

pyabc.util.log_samples(t: int, sumstats: ndarray, parameters: ndarray, weights: ndarray, log_file: str)[source]

Save samples to file, in npy format.

Files will be created with names “{log_file}_{t}_{var}.npy”, with var in sumstats, parameters, weights.

Parameters:
  • t – Time to save for.

  • sumstats – Summary statistics, shape (n_sample, n_in).

  • parameters – Parameters, shape (n_sample, n_par).

  • weights – Importance sampling weights, shape (n_sample,).

  • log_file – Log file base name. If None, no logs are created.
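
A minimal sketch with made-up array shapes; the file names follow the pattern described above:

    import numpy as np

    from pyabc.util import log_samples

    n_sample = 100
    sumstats = np.random.rand(n_sample, 4)
    parameters = np.random.rand(n_sample, 2)
    weights = np.ones(n_sample) / n_sample

    # Creates run_3_sumstats.npy, run_3_parameters.npy, run_3_weights.npy.
    log_samples(
        t=3,
        sumstats=sumstats,
        parameters=parameters,
        weights=weights,
        log_file="run",
    )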

pyabc.util.read_sample(sample: Sample, sumstat, all_particles: bool, par_trafo: ParTrafoBase) Tuple[ndarray, ndarray, ndarray][source]

Read in sample.

Parameters:
  • sample – Calibration or last generation's sample.

  • sumstat – Up-chain summary statistic, already fitted.

  • all_particles – Whether to use all particles or only accepted ones.

  • par_trafo – Parameter transformation to apply.

Returns:

sumstats, parameters, weights – Arrays of shape (n_sample, n_out).

Return type:

Tuple[ndarray, ndarray, ndarray]