simulate_markov_chain
ludics.main.simulate_markov_chain(initial_state, number_of_strategies, fitness_function, compute_transition_probability, seed, individual_to_action_mutation_probability=None, warmup=0, iterations=10000, **kwargs)
Simulates a Markov chain for a specified number of iterations.
Parameters:
initial_state: numpy.array - the state in which the Markov chain begins
number_of_strategies: int - the number of strategies that players can play
fitness_function: func - takes a state and returns a numpy.array of floats with the same shape
compute_transition_probability: func - takes two states and returns the probability of transitioning between them
seed: int - the seed passed to numpy.random.seed
individual_to_action_mutation_probability: numpy.array - the probability that a player (row) mutates to an action (column) when chosen; defaults to 0 for all mutations
warmup: int - the number of iterations run before the state distribution is recorded
iterations: int - the number of iterations to simulate
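The two callable parameters must match the shapes described above. A minimal sketch of what they might look like (the function bodies here are illustrative assumptions, not part of ludics):

```python
import numpy as np

def fitness_function(state):
    # Must return per-player fitnesses as a numpy.array with the
    # same shape as `state`. Exponential fitness is one common choice.
    return np.exp(state)

def compute_transition_probability(current, candidate):
    # Must return a probability in [0, 1] of moving from
    # `current` to `candidate`. This example decays with distance.
    return float(np.exp(-np.abs(candidate - current).sum()))
```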
Returns:
- tuple containing:
- numpy.array - the states as they were reached over time in the simulation
- dict - maps each state to the number of times it was visited in the simulation
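To illustrate the simulation loop and the return values, here is a self-contained sketch of a comparable Markov-chain loop (this is not the ludics implementation; the proposal step and acceptance rule are simplifying assumptions):

```python
import numpy as np

def simulate_sketch(initial_state, fitness_function,
                    compute_transition_probability,
                    seed=0, warmup=0, iterations=1000):
    """Minimal illustration of a warmup-then-record simulation loop."""
    np.random.seed(seed)
    state = np.array(initial_state, dtype=float)
    history = []
    counts = {}
    for t in range(warmup + iterations):
        # Propose a candidate state (here: derived from the fitness values).
        candidate = fitness_function(state)
        # Accept the move with the supplied transition probability.
        if np.random.random() < compute_transition_probability(state, candidate):
            state = candidate
        # Only record states once the warmup period has passed.
        if t >= warmup:
            history.append(state.copy())
            key = tuple(state)
            counts[key] = counts.get(key, 0) + 1
    # Mirrors the documented return: (states over time, visit counts).
    return np.array(history), counts
```

For example, with a fitness function that flips a binary state and a constant acceptance probability, the history has one row per recorded iteration and the counts sum to the number of recorded iterations.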