In this example, we show how and when to use the adaptive distances feature of pyABC. “Adaptive distances” means that the distance function is not fixed in advance (e.g. after pre-processing), but evolves over time during the ABC run, depending on the sampled summary statistics. This can be useful if different summary statistics vary on different scales, but it is not immediately clear how to weight them. Adaptive distances address this by adjusting the weights in each iteration so as to balance the impact of all summary statistics on the computed distance.

Currently, adaptively weighted p-norm distances (e.g. Euclidean) are implemented in pyABC, but it is easily possible to define arbitrary adaptive distances.
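Concretely, a weighted p-norm distance between simulated summary statistics $$x$$ and observed statistics $$y$$ has the general form (a sketch of the idea; the notation is ours, not taken verbatim from pyABC):

$$d(x, y) = \left( \sum_i \left( w_i \, \lvert x_i - y_i \rvert \right)^p \right)^{1/p}$$

with $$p=2$$ giving a weighted Euclidean distance. In the adaptive case, the weights $$w_i$$ are recomputed in every generation.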


For illustration, we consider a simple Gaussian model:

[1]:

import os
import tempfile

import numpy as np
import matplotlib.pyplot as pyplot
import pyabc.visualization
import logging

# for debugging
df_logger = logging.getLogger('Distance')
df_logger.setLevel(logging.DEBUG)

# model definition
def model(p):
    return {'ss1': p['theta'] + 1 + 0.1 * np.random.normal(),
            'ss2': 2 + 10 * np.random.normal()}

# true model parameter
theta_true = 3

# observed summary statistics
observation = {'ss1': theta_true + 1, 'ss2': 2}

# prior distribution
prior = pyabc.Distribution(theta=pyabc.RV('uniform', 0, 10))

# database (the definition was missing here; a temporary sqlite file
# is the usual choice in the pyABC examples)
db_path = "sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")

Summary statistic ss2 has a high variance compared to summary statistic ss1. In addition, ss1 is informative about the model parameter $$\theta$$, while ss2 is not. We expect the proposal distribution for $$\theta$$ to iteratively center around the true value $$\theta=3$$. Thus, the variability of the sampled ss1 decreases from iteration to iteration, while the variability of the sampled ss2 stays approximately constant. If both summary statistics are weighted similarly in the computation of the distance between sample and observation, ss2 hence has an undesirably high impact, which can slow down convergence. In contrast, if we weight ss1 higher, we may hope that our estimate of $$\theta$$ improves.
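The scale imbalance can be checked in numbers with a small standalone snippet (not part of the original notebook; it only mimics the model above at the true parameter):

```python
import numpy as np

rng = np.random.default_rng(0)

# mimic the model above at the true parameter:
# ss1 ~ N(theta + 1, 0.1^2) is informative, ss2 ~ N(2, 10^2) is pure noise
theta = 3.0
n = 1000
ss1 = theta + 1 + 0.1 * rng.normal(size=n)
ss2 = 2 + 10 * rng.normal(size=n)

# mean squared contributions to an unweighted Euclidean distance
# to the observation {'ss1': theta + 1, 'ss2': 2}
contrib1 = np.mean((ss1 - (theta + 1)) ** 2)
contrib2 = np.mean((ss2 - 2) ** 2)

# ss2 dominates the distance by roughly (10 / 0.1)^2 = 10^4
print(contrib2 / contrib1)
```

This is why, without weighting, acceptance is essentially governed by the uninformative statistic.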

These informal expectations being stated, let us continue with the implementation. First, we consider a non-adaptive Euclidean distance:

[2]:

distance = pyabc.PNormDistance(p=2)

abc = pyabc.ABCSMC(model, prior, distance)

abc.new(db_path, observation)

history0 = abc.run(minimum_epsilon=.1, max_nr_populations=8)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=1, start_time=2020-05-17 19:04:50.510578, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 7.8880029032336605
INFO:ABC:t: 0, eps: 7.8880029032336605.
INFO:ABC:Acceptance rate: 100 / 179 = 5.5866e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 5.523085306293169.
INFO:ABC:Acceptance rate: 100 / 278 = 3.5971e-01, ESS=9.5810e+01.
INFO:ABC:t: 2, eps: 3.604292240991575.
INFO:ABC:Acceptance rate: 100 / 446 = 2.2422e-01, ESS=9.3157e+01.
INFO:ABC:t: 3, eps: 2.6280179368021246.
INFO:ABC:Acceptance rate: 100 / 624 = 1.6026e-01, ESS=9.7895e+01.
INFO:ABC:t: 4, eps: 1.9526078076523994.
INFO:ABC:Acceptance rate: 100 / 866 = 1.1547e-01, ESS=9.9520e+01.
INFO:ABC:t: 5, eps: 1.421630356209448.
INFO:ABC:Acceptance rate: 100 / 1211 = 8.2576e-02, ESS=9.8095e+01.
INFO:ABC:t: 6, eps: 1.0264476750481035.
INFO:ABC:Acceptance rate: 100 / 2163 = 4.6232e-02, ESS=9.9592e+01.
INFO:ABC:t: 7, eps: 0.6954601270195314.
INFO:ABC:Acceptance rate: 100 / 2695 = 3.7106e-02, ESS=9.6657e+01.
INFO:History:Done <ABCSMC(id=1, start_time=2020-05-17 19:04:50.510578, end_time=2020-05-17 19:05:02.773526)>


Let us visualize the results for the non-adaptive distance:

[3]:

# plotting

def plot_history(history):
    fig, ax = pyplot.subplots()
    for t in range(history.max_t + 1):
        df, w = history.get_distribution(m=0, t=t)
        pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=10,
                                        x='theta', ax=ax,
                                        label="PDF t={}".format(t))
    ax.axvline(theta_true, color='k', linestyle='dashed', label="True value")
    ax.legend()

plot_history(history0)


Second, we consider an adaptive Euclidean distance:

[4]:

distance_adaptive = pyabc.AdaptivePNormDistance(p=2)

abc = pyabc.ABCSMC(
    model, prior, distance_adaptive,
    acceptor=pyabc.UniformAcceptor(use_complete_history=True))

abc.new(db_path, observation)

history1 = abc.run(minimum_epsilon=.1, max_nr_populations=8)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=2, start_time=2020-05-17 19:05:03.383392, end_time=None)>
INFO:ABC:Calibration sample before t=0.
DEBUG:Distance:updated weights[0] = {'ss1': 1.56452520342288, 'ss2': 0.4354747965771201}
INFO:Epsilon:initial epsilon is 6.333984429500564
INFO:ABC:t: 0, eps: 6.333984429500564.
INFO:ABC:Acceptance rate: 100 / 206 = 4.8544e-01, ESS=1.0000e+02.
DEBUG:Distance:updated weights[1] = {'ss1': 1.544531889838737, 'ss2': 0.455468110161263}
INFO:ABC:t: 1, eps: 4.054424144457223.
INFO:ABC:Acceptance rate: 100 / 236 = 4.2373e-01, ESS=9.8630e+01.
DEBUG:Distance:updated weights[2] = {'ss1': 1.7056562135730826, 'ss2': 0.2943437864269172}
INFO:ABC:t: 2, eps: 2.2603152932058155.
INFO:ABC:Acceptance rate: 100 / 350 = 2.8571e-01, ESS=9.9408e+01.
DEBUG:Distance:updated weights[3] = {'ss1': 1.7667287419172013, 'ss2': 0.23327125808279867}
INFO:ABC:t: 3, eps: 1.41871138633234.
INFO:ABC:Acceptance rate: 100 / 363 = 2.7548e-01, ESS=9.4863e+01.
DEBUG:Distance:updated weights[4] = {'ss1': 1.8653024503845181, 'ss2': 0.1346975496154818}
INFO:ABC:t: 4, eps: 0.7834754377542915.
INFO:ABC:Acceptance rate: 100 / 421 = 2.3753e-01, ESS=9.9527e+01.
DEBUG:Distance:updated weights[5] = {'ss1': 1.9117344325305943, 'ss2': 0.08826556746940574}
INFO:ABC:t: 5, eps: 0.46930212392005605.
INFO:ABC:Acceptance rate: 100 / 484 = 2.0661e-01, ESS=9.6169e+01.
DEBUG:Distance:updated weights[6] = {'ss1': 1.9513043703561577, 'ss2': 0.048695629643842143}
INFO:ABC:t: 6, eps: 0.2532391266895982.
INFO:ABC:Acceptance rate: 100 / 729 = 1.3717e-01, ESS=9.4790e+01.
DEBUG:Distance:updated weights[7] = {'ss1': 1.9619806450885102, 'ss2': 0.03801935491148991}
INFO:ABC:t: 7, eps: 0.17029769703549383.
INFO:ABC:Acceptance rate: 100 / 1001 = 9.9900e-02, ESS=8.8512e+01.
DEBUG:Distance:updated weights[8] = {'ss1': 1.9691416799754418, 'ss2': 0.03085832002455836}
INFO:History:Done <ABCSMC(id=2, start_time=2020-05-17 19:05:03.383392, end_time=2020-05-17 19:05:09.456919)>


In the debug output of abc.run above, it can be seen how the weights evolve over time. Note that we set the acceptor to use_complete_history=True in order to obtain nested acceptance regions. This means that all previous acceptance criteria are re-applied as well. It is optional here, but may sometimes be beneficial. Let us visualize the results for the adaptive distance:

[5]:

plot_history(history1)


We observe differences compared to the non-adaptive setting. In particular, the densities tend to be narrower around the true parameter $$\theta=3$$. In addition to the better convergence, the total number of required samples is lower, as less time was wasted trying to match an uninformative summary statistic:

[6]:

histories = [history0, history1]
labels = ["Fixed distance", "Adaptive distance"]
pyabc.visualization.plot_sample_numbers(histories, labels)

[6]:

<matplotlib.axes._subplots.AxesSubplot at 0x7fa57dec8c90>


Indeed, the acceptance rates for the adaptive distance function are consistently higher:

[7]:

pyabc.visualization.plot_acceptance_rates_trajectory(histories, labels)

[7]:

<matplotlib.axes._subplots.AxesSubplot at 0x7fa57b6805d0>


In detail, the adaptive distance feature works as follows: In each iteration of the ABCSMC run, after the desired number of accepted particles has been obtained (and once at the beginning, using a calibration sample from the prior), the method DistanceFunction.update() is called. It is given a set of summary statistics, which can be used e.g. to compute weights for the distance measure in the next iteration. In order to avoid bias, the distance function can, via DistanceFunction.configure_sampler(), tell the sampler to record not only the accepted particles, but all particles generated during sampling. So, when you want to define your own adaptive distance function, you will typically only need to overwrite these two methods. For implementation details and an example of how this can look in practice, please inspect the code of AdaptivePNormDistance.
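As a rough, pyABC-independent sketch of what such an update step might compute (the function name and the normalization to a weight sum of 2 are illustrative assumptions inferred from the debug output above, not pyABC's actual implementation, which uses configurable scale functions):

```python
import numpy as np

def update_weights(sum_stats):
    """Recompute distance weights from a list of summary-statistic dicts:
    weight each statistic inversely to its sample spread, then normalize
    so the weights sum to the number of statistics."""
    keys = sorted(sum_stats[0].keys())
    scales = {k: np.std([ss[k] for ss in sum_stats]) for k in keys}
    inv = {k: 1.0 / s if s > 0 else 0.0 for k, s in scales.items()}
    total = sum(inv.values())
    return {k: len(keys) * v / total for k, v in inv.items()}

# toy usage: ss2 varies 100x more than ss1, so it receives a much smaller weight
rng = np.random.default_rng(1)
stats = [{'ss1': 0.1 * rng.normal(), 'ss2': 10 * rng.normal()}
         for _ in range(500)]
w = update_weights(stats)
print(w)  # w['ss1'] close to 2, w['ss2'] close to 0
```

The same pattern (scale per statistic, invert, normalize) is visible in the weights[t] lines logged above.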

## Make it robust

A problem with the previous adaptive distance occurs when the weights do not work properly. For instance, large weights could be assigned to rather uninformative statistics if these vary comparatively little. Let us have a look at an example:

[8]:

import numpy as np
import tempfile
import os
import matplotlib.pyplot as pyplot
import pyabc.visualization
import logging

# for debugging
df_logger = logging.getLogger('Distance')
df_logger.setLevel(logging.DEBUG)

# model definition
def model(p):
    return {'ss1': p['theta'] + 1 + 1 * np.random.normal(),
            'ss2': 2 + 0.01 * np.random.normal()}

# true model parameter
theta_true = 3

# observed summary statistics
observation = {'ss1': theta_true + 1, 'ss2': 5}

# prior distribution
prior = pyabc.Distribution(theta=pyabc.RV('uniform', 0, 10))

# database (again, the definition was missing here; a temporary sqlite file
# is the usual choice in the pyABC examples)
db_path = "sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")

# plotting

def plot_history(history):
    fig, ax = pyplot.subplots()
    for t in range(history.max_t + 1):
        df, w = history.get_distribution(m=0, t=t)
        pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=10,
                                        x='theta', ax=ax,
                                        label="PDF t={}".format(t))
    ax.axvline(theta_true, color='k', linestyle='dashed', label="True value")
    ax.legend()


Essentially, we changed the variances of the two summary statistics, and in addition we shifted the observed value of ss2 to 5, which can be interpreted as a measurement error. The problem is that this value is highly unlikely to be sampled under the model, and when using adaptive weights, this statistic may, due to its small variance, be assigned a high weight, worsening the problem.

[9]:

distance = pyabc.PNormDistance(p=2)

abc = pyabc.ABCSMC(model, prior, distance)
abc.new(db_path, observation)
history0 = abc.run(minimum_epsilon=.001, max_nr_populations=5)

plot_history(history0)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=3, start_time=2020-05-17 19:05:10.866128, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 3.706402296069415
INFO:ABC:t: 0, eps: 3.706402296069415.
INFO:ABC:Acceptance rate: 100 / 212 = 4.7170e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 3.15157877623159.
INFO:ABC:Acceptance rate: 100 / 277 = 3.6101e-01, ESS=8.8141e+01.
INFO:ABC:t: 2, eps: 3.0302449490062813.
INFO:ABC:Acceptance rate: 100 / 591 = 1.6920e-01, ESS=8.6973e+01.
INFO:ABC:t: 3, eps: 3.0073861196608798.
INFO:ABC:Acceptance rate: 100 / 949 = 1.0537e-01, ESS=4.7024e+01.
INFO:ABC:t: 4, eps: 2.99728988999716.
INFO:ABC:Acceptance rate: 100 / 2643 = 3.7836e-02, ESS=9.4997e+01.
INFO:History:Done <ABCSMC(id=3, start_time=2020-05-17 19:05:10.866128, end_time=2020-05-17 19:05:15.525464)>

[10]:

distance_adaptive = pyabc.AdaptivePNormDistance(p=2)

abc = pyabc.ABCSMC(
    model, prior, distance_adaptive,
    acceptor=pyabc.UniformAcceptor(use_complete_history=True))
abc.new(db_path, observation)
history1 = abc.run(minimum_epsilon=.001, max_nr_populations=5)

plot_history(history1)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=4, start_time=2020-05-17 19:05:15.911001, end_time=None)>
INFO:ABC:Calibration sample before t=0.
DEBUG:Distance:updated weights[0] = {'ss1': 0.006881759937664396, 'ss2': 1.9931182400623355}
INFO:Epsilon:initial epsilon is 5.978341674392063
INFO:ABC:t: 0, eps: 5.978341674392063.
INFO:ABC:Acceptance rate: 100 / 217 = 4.6083e-01, ESS=1.0000e+02.
DEBUG:Distance:updated weights[1] = {'ss1': 0.0070948287132201185, 'ss2': 1.9929051712867798}
INFO:ABC:t: 1, eps: 5.96192003320892.
INFO:ABC:Acceptance rate: 100 / 453 = 2.2075e-01, ESS=9.8071e+01.
DEBUG:Distance:updated weights[2] = {'ss1': 0.006846740182629466, 'ss2': 1.9931532598173705}
INFO:ABC:t: 2, eps: 5.953640497054222.
INFO:ABC:Acceptance rate: 100 / 1116 = 8.9606e-02, ESS=8.8886e+01.
DEBUG:Distance:updated weights[3] = {'ss1': 0.0078039047055010686, 'ss2': 1.9921960952944988}
INFO:ABC:t: 3, eps: 5.944041103889234.
INFO:ABC:Acceptance rate: 100 / 1675 = 5.9701e-02, ESS=9.4860e+01.
DEBUG:Distance:updated weights[4] = {'ss1': 0.007194747743678035, 'ss2': 1.992805252256322}
INFO:ABC:t: 4, eps: 5.937966249548398.
INFO:ABC:Acceptance rate: 100 / 4048 = 2.4704e-02, ESS=9.2597e+01.
DEBUG:Distance:updated weights[5] = {'ss1': 0.007245644674039862, 'ss2': 1.9927543553259601}
INFO:History:Done <ABCSMC(id=4, start_time=2020-05-17 19:05:15.911001, end_time=2020-05-17 19:05:24.904768)>


These results are as expected: the adaptive weights make the situation much worse. Our solution is to take into account not only the in-sample variance, but also the bias of the samples relative to the observed data, using e.g. root_mean_square_deviation as the scale function.
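The difference between the two scale functions can be illustrated in isolation (these are simplified standalone reimplementations for illustration, not pyABC's internals):

```python
import numpy as np

def standard_deviation(samples, obs):
    # spread of the samples only -- ignores any bias to the observed value
    return np.std(samples)

def root_mean_square_deviation(samples, obs):
    # spread AND bias: deviations are taken to the observation itself
    return np.sqrt(np.mean((np.asarray(samples) - obs) ** 2))

rng = np.random.default_rng(2)
# ss2 from the model above: tiny variance, but the observed value 5 is far
# from the sampled mean 2 -- a model/measurement error
samples = 2 + 0.01 * rng.normal(size=1000)

print(standard_deviation(samples, 5))          # tiny scale -> huge weight
print(root_mean_square_deviation(samples, 5))  # bias-aware scale -> moderate weight
```

Since weights are inversely proportional to the scale, the bias-aware scale prevents the mismatched statistic from dominating the distance.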

[11]:

distance_adaptive = pyabc.AdaptivePNormDistance(
    p=2, scale_function=pyabc.distance.root_mean_square_deviation)

abc = pyabc.ABCSMC(
    model, prior, distance_adaptive,
    acceptor=pyabc.UniformAcceptor(use_complete_history=True))
abc.new(db_path, observation)
history2 = abc.run(minimum_epsilon=.001, max_nr_populations=5)

plot_history(history2)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=5, start_time=2020-05-17 19:05:25.289967, end_time=None)>
INFO:ABC:Calibration sample before t=0.
DEBUG:Distance:updated weights[0] = {'ss1': 0.8693530881715638, 'ss2': 1.1306469118284361}
INFO:Epsilon:initial epsilon is 4.1598448905527015
INFO:ABC:t: 0, eps: 4.1598448905527015.
INFO:ABC:Acceptance rate: 100 / 191 = 5.2356e-01, ESS=1.0000e+02.
DEBUG:Distance:updated weights[1] = {'ss1': 0.918230325976682, 'ss2': 1.0817696740233183}
INFO:ABC:t: 1, eps: 3.4786660223500068.
INFO:ABC:Acceptance rate: 100 / 217 = 4.6083e-01, ESS=9.6361e+01.
DEBUG:Distance:updated weights[2] = {'ss1': 1.185968550423523, 'ss2': 0.8140314495764771}
INFO:ABC:t: 2, eps: 2.6024103755968793.
INFO:ABC:Acceptance rate: 100 / 315 = 3.1746e-01, ESS=6.4854e+01.
DEBUG:Distance:updated weights[3] = {'ss1': 1.2741754842631852, 'ss2': 0.7258245157368146}
INFO:ABC:t: 3, eps: 2.226845050651682.
INFO:ABC:Acceptance rate: 100 / 589 = 1.6978e-01, ESS=9.3687e+01.
DEBUG:Distance:updated weights[4] = {'ss1': 1.2944987868600584, 'ss2': 0.7055012131399416}
INFO:ABC:t: 4, eps: 2.134101801007238.
INFO:ABC:Acceptance rate: 100 / 812 = 1.2315e-01, ESS=7.5661e+01.
DEBUG:Distance:updated weights[5] = {'ss1': 1.3615100631960027, 'ss2': 0.6384899368039972}
INFO:History:Done <ABCSMC(id=5, start_time=2020-05-17 19:05:25.289967, end_time=2020-05-17 19:05:29.055446)>


In this setting, accuracy and sample numbers (see below) are roughly back to those of the non-weighted case. Applying this method to the first model shows that it is also applicable there, though potentially slightly less efficient. This demonstrates that the method is more robust against model error, which can easily occur in practice.

[12]:

pyabc.visualization.plot_sample_numbers([history0, history1, history2], ["Fixed distance", "Adaptive", "Robust adaptive"])

[12]:

<matplotlib.axes._subplots.AxesSubplot at 0x7fa57a5a5fd0>


## Weight diverse data with replicates

The problem we consider now is that our summary statistics (= data) can be separated into subsets that are informative for different parameters, but the sizes of these subsets are not balanced. This happens frequently in practice, e.g. when one has a time series of 100 measurements alongside a single data point. If these two kinds of statistics are informative for different parameters, then the impact of the first kind on the computed distance value can be overly high, and the automatic weighting introduced by Prangle 2017 may not reduce, or can even worsen, the problem.
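The imbalance can be made concrete with a back-of-the-envelope calculation (a sketch under the assumption of comparable typical per-point deviations):

```python
# p=1 distance contributions with N0=100 replicates of one statistic
# and N1=1 of another, at an assumed typical deviation of 1.0 per data point
N0, N1 = 100, 1
dev0, dev1 = 1.0, 1.0

uniform = N0 * dev0 + N1 * dev1                              # replicate block dominates
with_factors = (1 / N0) * N0 * dev0 + (1 / N1) * N1 * dev1   # both blocks balanced

print(uniform, with_factors)  # 101.0 2.0
```

With uniform weights, the replicate block contributes about 100 times as much as the single statistic; factors of 1/N0 and 1/N1 restore a 1:1 balance between the two blocks.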

Thus, what we want to do is add additional “factors” by which we multiply each data point’s weight. These factors take into account the number of summary statistics that are equally informative. At the moment, these factors still need to be defined manually.

Let us consider a toy model, where we just copy a single summary statistic N0=100 times. Note that this model is highly artificial: in practice, e.g. a time series may contain more information than a single measurement, but not N0 times as much, so that a good factor would lie somewhere between 1 and N0.

[13]:

import pyabc
import numpy as np
import os
import tempfile
import matplotlib.pyplot as plt
import logging

# for debugging
df_logger = logging.getLogger('Distance')
df_logger.setLevel(logging.INFO)

N0 = 100
N1 = 1

p_true = {'p0': 5, 'p1': 10}

def model(p):
    ss = {}
    s0 = p['p0'] + 2 * np.random.normal()
    for j in range(N0):
        ss['p0_' + str(j)] = s0
    s1 = p['p1'] + 0.01 * np.random.normal()
    for j in range(N1):
        ss['p1_' + str(j)] = s1
    return ss

prior = pyabc.Distribution(p0=pyabc.RV("uniform", 0, 20),
                           p1=pyabc.RV("uniform", 0, 20))

# database (the definition was missing here; a temporary sqlite file
# is the usual choice in the pyABC examples)
db_path = "sqlite:///" + os.path.join(tempfile.gettempdir(), "test.db")


First, we consider uniform weights of 1:

[14]:

distance = pyabc.PNormDistance(p=1)

abc = pyabc.ABCSMC(model, prior, distance)

observation = model(p_true)

abc.new(db_path, observation)
history1 = abc.run(max_nr_populations=8)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=6, start_time=2020-05-17 19:05:29.823671, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 800.0444048639816
INFO:ABC:t: 0, eps: 800.0444048639816.
INFO:ABC:Acceptance rate: 100 / 189 = 5.2910e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 268.2464180029644.
INFO:ABC:Acceptance rate: 100 / 276 = 3.6232e-01, ESS=8.1118e+01.
INFO:ABC:t: 2, eps: 137.22044383082294.
INFO:ABC:Acceptance rate: 100 / 313 = 3.1949e-01, ESS=8.6443e+01.
INFO:ABC:t: 3, eps: 57.29736590464114.
INFO:ABC:Acceptance rate: 100 / 745 = 1.3423e-01, ESS=9.1310e+01.
INFO:ABC:t: 4, eps: 36.65914919005845.
INFO:ABC:Acceptance rate: 100 / 1027 = 9.7371e-02, ESS=8.5098e+01.
INFO:ABC:t: 5, eps: 22.66796750562429.
INFO:ABC:Acceptance rate: 100 / 1613 = 6.1996e-02, ESS=8.1483e+01.
INFO:ABC:t: 6, eps: 13.607513908634385.
INFO:ABC:Acceptance rate: 100 / 3279 = 3.0497e-02, ESS=7.4191e+01.
INFO:ABC:t: 7, eps: 9.210802088377672.
INFO:ABC:Acceptance rate: 100 / 5807 = 1.7221e-02, ESS=6.8810e+01.
INFO:History:Done <ABCSMC(id=6, start_time=2020-05-17 19:05:29.823671, end_time=2020-05-17 19:06:18.577467)>

[15]:

# plotting

def plot_history(history):
    fig, ax = plt.subplots()
    for t in range(history.max_t + 1):
        df, w = history.get_distribution(m=0, t=t)
        pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=20,
                                        x='p0', ax=ax,
                                        label="PDF t={}".format(t),
                                        refval=p_true)
    ax.legend()

    fig, ax = plt.subplots()
    for t in range(history.max_t + 1):
        df, w = history.get_distribution(m=0, t=t)
        pyabc.visualization.plot_kde_1d(df, w, xmin=0, xmax=20,
                                        x='p1', ax=ax,
                                        label="PDF t={}".format(t),
                                        refval=p_true)
    ax.legend()

plot_history(history1)


Next, we use adaptive distances but no factors:

[16]:

distance = pyabc.AdaptivePNormDistance(p=1)

abc = pyabc.ABCSMC(model, prior, distance)

abc.new(db_path, observation)
history2 = abc.run(max_nr_populations=8)

plot_history(history2)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=7, start_time=2020-05-17 19:06:20.031098, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 804.2703630289134
INFO:ABC:t: 0, eps: 804.2703630289134.
INFO:ABC:Acceptance rate: 100 / 178 = 5.6180e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 358.6365746592437.
INFO:ABC:Acceptance rate: 100 / 206 = 4.8544e-01, ESS=8.7650e+01.
INFO:ABC:t: 2, eps: 207.5158094913825.
INFO:ABC:Acceptance rate: 100 / 226 = 4.4248e-01, ESS=8.8427e+01.
INFO:ABC:t: 3, eps: 86.67115523817868.
INFO:ABC:Acceptance rate: 100 / 411 = 2.4331e-01, ESS=7.9603e+01.
INFO:ABC:t: 4, eps: 41.204337714577626.
INFO:ABC:Acceptance rate: 100 / 882 = 1.1338e-01, ESS=7.7426e+01.
INFO:ABC:t: 5, eps: 19.12608852927998.
INFO:ABC:Acceptance rate: 100 / 2463 = 4.0601e-02, ESS=9.2439e+01.
INFO:ABC:t: 6, eps: 11.040758226322744.
INFO:ABC:Acceptance rate: 100 / 3174 = 3.1506e-02, ESS=4.0567e+01.
INFO:ABC:t: 7, eps: 6.9115476681531325.
INFO:ABC:Acceptance rate: 100 / 6178 = 1.6186e-02, ESS=7.7286e+01.
INFO:History:Done <ABCSMC(id=7, start_time=2020-05-17 19:06:20.031098, end_time=2020-05-17 19:07:15.103555)>


Next, we account for the discrepancy in data point counts by using self-defined scaling factors:

[17]:

factors = {}
for j in range(N0):
    factors['p0_' + str(j)] = 1 / N0
for j in range(N1):
    factors['p1_' + str(j)] = 1 / N1

distance = pyabc.PNormDistance(p=1, factors=factors)

abc = pyabc.ABCSMC(model, prior, distance)

abc.new(db_path, observation)
history3 = abc.run(max_nr_populations=8)

plot_history(history3)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=8, start_time=2020-05-17 19:07:16.405688, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 12.22980361353965
INFO:ABC:t: 0, eps: 12.22980361353965.
INFO:ABC:Acceptance rate: 100 / 192 = 5.2083e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 8.600402453730833.
INFO:ABC:Acceptance rate: 100 / 201 = 4.9751e-01, ESS=9.4792e+01.
INFO:ABC:t: 2, eps: 5.3932862874730185.
INFO:ABC:Acceptance rate: 100 / 256 = 3.9062e-01, ESS=9.5373e+01.
INFO:ABC:t: 3, eps: 3.9473374176324265.
INFO:ABC:Acceptance rate: 100 / 261 = 3.8314e-01, ESS=8.8531e+01.
INFO:ABC:t: 4, eps: 3.0014600650079553.
INFO:ABC:Acceptance rate: 100 / 290 = 3.4483e-01, ESS=8.7498e+01.
INFO:ABC:t: 5, eps: 2.2334325618901922.
INFO:ABC:Acceptance rate: 100 / 343 = 2.9155e-01, ESS=8.5720e+01.
INFO:ABC:t: 6, eps: 1.449495261301063.
INFO:ABC:Acceptance rate: 100 / 434 = 2.3041e-01, ESS=8.2761e+01.
INFO:ABC:t: 7, eps: 1.0094444189437735.
INFO:ABC:Acceptance rate: 100 / 669 = 1.4948e-01, ESS=7.9831e+01.
INFO:History:Done <ABCSMC(id=8, start_time=2020-05-17 19:07:16.405688, end_time=2020-05-17 19:08:01.548653)>


Next, we consider automatic weighting and factors:

[18]:

import logging
df_logger = logging.getLogger('Distance')
# df_logger.setLevel(logging.DEBUG)

factors = {}
for j in range(N0):
    factors['p0_' + str(j)] = 1 / N0
for j in range(N1):
    factors['p1_' + str(j)] = 1 / N1

distance = pyabc.AdaptivePNormDistance(p=1, factors=factors)

abc = pyabc.ABCSMC(model, prior, distance)

abc.new(db_path, observation)
history4 = abc.run(max_nr_populations=8)

plot_history(history4)

INFO:Sampler:Parallelizing the sampling on 4 cores.
INFO:History:Start <ABCSMC(id=9, start_time=2020-05-17 19:08:04.018039, end_time=None)>
INFO:ABC:Calibration sample before t=0.
INFO:Epsilon:initial epsilon is 11.684884345125214
INFO:ABC:t: 0, eps: 11.684884345125214.
INFO:ABC:Acceptance rate: 100 / 237 = 4.2194e-01, ESS=1.0000e+02.
INFO:ABC:t: 1, eps: 7.5145369751746465.
INFO:ABC:Acceptance rate: 100 / 224 = 4.4643e-01, ESS=9.3767e+01.
INFO:ABC:t: 2, eps: 5.082483209496991.
INFO:ABC:Acceptance rate: 100 / 239 = 4.1841e-01, ESS=9.7918e+01.
INFO:ABC:t: 3, eps: 3.580477037132026.
INFO:ABC:Acceptance rate: 100 / 235 = 4.2553e-01, ESS=8.0270e+01.
INFO:ABC:t: 4, eps: 2.8507886265808073.
INFO:ABC:Acceptance rate: 100 / 286 = 3.4965e-01, ESS=7.1282e+01.
INFO:ABC:t: 5, eps: 2.089738549526154.
INFO:ABC:Acceptance rate: 100 / 346 = 2.8902e-01, ESS=8.6558e+01.
INFO:ABC:t: 6, eps: 2.1546560321644264.
INFO:ABC:Acceptance rate: 100 / 373 = 2.6810e-01, ESS=8.1666e+01.
INFO:ABC:t: 7, eps: 1.8136827815872332.
INFO:ABC:Acceptance rate: 100 / 417 = 2.3981e-01, ESS=8.0980e+01.
INFO:History:Done <ABCSMC(id=9, start_time=2020-05-17 19:08:04.018039, end_time=2020-05-17 19:09:05.047786)>


The results for the distances that use factors are best; here, fixed and adaptive weights give similar results. In addition to the much better posteriors, the sample numbers are much lower, as the plot below shows.

[19]:

histories = [history1, history2, history3, history4]

[19]:

<matplotlib.axes._subplots.AxesSubplot at 0x7fa57a2d9c90>