To install the package, clone the repository and run the following from the repository root:
pip install .
You can view the docs here. In particular, you will find examples of how to apply the methods in this package to generate populations and scenarios of interest in example agent-based simulators.
Consider a population of agents whose states $x_i$ are Normally distributed around agent-level parameters $\mu_i$, which are themselves Normally distributed around a population-level parameter $\mu$. Suppose we would like to find a value of $\mu$ for which the average squared value of the agent states is small.
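Written out explicitly (with unit variances, matching the code below), the generative model and loss are

$$\mu_i \sim \mathcal{N}(\mu, 1), \qquad x_i \sim \mathcal{N}(\mu_i, 1), \qquad \ell(x_{1:N}) = \frac{1}{N}\sum_{i=1}^{N} x_i^2,$$

so that the expected loss is $\mu^2 + 2$, which is minimised at $\mu = 0$. We implement the model for how the agent parameters $\mu_i$ generate the agent states $x_i$ as follows: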
import numpy as np
import warnings
from ..abstract import AbstractModel
class Normals(AbstractModel):
    """Toy model in which each agent's state is a unit-variance Normal draw centred at that agent's parameter."""
    def __init__(self, n_timesteps=1, n_agents=1_000):
        self.n_timesteps = n_timesteps
        self.n_agents = n_agents
    def initialize(self):
        # No state needs to be set up for this toy example
        pass
    def step(self, *args, **kwargs):
        # No per-timestep dynamics: the states are generated in a single draw inside run()
        pass
    def observe(self, x):
        # All agent states are observed directly
        return [x]
    @staticmethod
    def make_default_generator(params):
        mu = params
        # Specify how the population parameter \mu parameterises the agent generator
        def generator(n_agents):
            # Draw agent parameters from distribution \iota_\mu
            mus = mu + np.random.normal(size=n_agents)
            return mus
        return generator
    def run(self, generator):
        with warnings.catch_warnings():
            warnings.simplefilter("ignore")
            # Generate agent parameters \mu_i
            mus = generator(self.n_agents)
            # Simulate model forward to obtain the x_i
            xs = mus + np.random.normal(size=self.n_agents)
            return self.observe(xs)

We also specify the loss function:
import torch
def loss(x):
    # Mean squared agent state: this is small when the agent states concentrate around zero
    z = torch.mean(torch.pow(x[0], 2))
    return z

We wrap this for convenience:
model = Normals()  # instantiate the toy model defined above (default settings assumed here)

class AgentAttributeDistributionGenerator(SampleGenerator):
    def forward(self, generator_params):
        # Map the population-level parameter \mu to a generator of agent parameters
        mu = generator_params
        return model.make_default_generator(mu)

meta_generator = AgentAttributeDistributionGenerator()

Finally, we specify the domain over which we'd like to find such a $\mu$, and run the optimisation:
prior = torch.distributions.Uniform(torch.tensor([-20.]), torch.tensor([20.]))
optimise = Optimise(model=model, meta_generator=meta_generator, prior=prior, loss=loss)
optimise_method = TBS_SMC(num_particles=5_000, num_initial_pop=10_000, num_simulations=10_000, epsilon_decay=0.7, return_summary=True)
trained_meta_generator = optimise.fit(optimise_method, num_workers=-1)

The same example, optimised using variational optimisation, can be seen here.
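As a quick, package-independent sanity check of this toy example, one can evaluate the loss at a few candidate values of $\mu$ directly. The snippet below is only a sketch reusing the Normals model and loss defined above (it is not part of the package API); recalling that the expected loss is $\mu^2 + 2$, the trained meta-generator should concentrate near $\mu = 0$:

model_check = Normals(n_agents=100_000)
for candidate_mu in (0.0, 1.0, 5.0):
    generator = model_check.make_default_generator(candidate_mu)
    output = model_check.run(generator)         # [array of agent states x_i]
    xs = [torch.as_tensor(o) for o in output]   # the loss expects torch tensors
    print(candidate_mu, loss(xs).item())        # roughly 2.0, 3.0 and 27.0, respectively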
This package accompanies our AAMAS 2024 paper on Population synthesis as scenario generation in agent-based models, with the aim of facilitating simulation-based planning under uncertainty. You can cite our paper and/or package using the following:
@inproceedings{dyer2023a,
  publisher = {Association for Computing Machinery},
  title = {Population synthesis as scenario generation for simulation-based planning under uncertainty},
  author = {Dyer, J and Quera-Bofarull, A and Bishop, N and Farmer, JD and Calinescu, A and Wooldridge, M},
  year = {2023},
  booktitle = {Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024)},
}
The supplementary material (i.e., this GitHub repository and the paper appendix) can be cited separately from the main paper as:
@software{joel_dyer_2024_10629106,
  author       = {Joel Dyer and
                  Arnau Quera-Bofarull},
  title        = {joelnmdyer/synthpop: AAMAS release},
  month        = feb,
  year         = 2024,
  publisher    = {Zenodo},
  version      = {v1.0.0},
  doi          = {10.5281/zenodo.10629106},
  url          = {https://doi.org/10.5281/zenodo.10629106}
}