
This page gives an example usage of the code; full documentation will be added in due course. In the meantime, each of the individual files contains a guide to its usage.

The code is written so that the user can avoid as much of the underlying complexity as possible. A guide to producing results similar to those presented in the paper follows:

import cobra
import json
import pareto
import pa_re
import ne_re

COBRApy, the Pareto, Pareto reconstruction and network regression files are imported, along with `json` for saving results. Next the model is loaded and the objectives are chosen. The example shown is for the *E. coli* model iJO1366, found on BiGG Models.

model_str = 'iJO1366.json'
model = cobra.io.load_json_model(model_str)
obj1_str = 'BIOMASS_Ec_iJO1366_core_53p95M'
obj1 = model.reactions.get_by_id(obj1_str).flux_expression
obj2_str = 'EX_O2_e'
obj2 = model.reactions.get_by_id(obj2_str).flux_expression
filename = 'test_data.json'

Basic parameters for the dataset generation, regression and reconstruction can then be set.

LAMBD = 1.0         # lambda
ALPHA = 0.5         # alpha
CUTOFF = 99         # cutoff used by the network regression
GENS = 20           # number of generations
INDIV = 200         # individuals per generation
NODES = 40          # nodes used in the Pareto reconstruction
RECON_POINTS = 200  # number of reconstruction points
CORES = 0           # 0 = sequential; set to the number of cores for multithreading

Here, the values $\lambda = 1.0$, $\alpha = 0.5$ and $\tau = 0.99$ are used, along with typical values for the number of generations, individuals per generation and reconstruction points. With `CORES` set to 0 the single-threaded version of the code is used; setting `CORES` to the desired number of cores enables multithreading.

The initial Pareto data can now be generated and stored in the initial JSON file. The data is converted into a dictionary, which can then easily be written out as JSON.

pops, vals, pareto_data = pareto.pareto(GENS, INDIV, model, obj1, obj2, cores=CORES)
to_save = {'obj1_str': str(obj1), 'obj2_str': str(obj2), 'model': model_str,
           'pareto': [{'obj1': p.fitness.values[0], 'obj2': p.fitness.values[1],
                       'gene_set': list(p)} for p in pareto_data]}
with open(filename, 'w') as outfile:
    json.dump(to_save, outfile)

Running the network regression and storing the results in the JSON file is done with a single command.

ne_re.add_linear_regression(filename, CUTOFF)

The Pareto Reconstruction can then be added to the JSON file.

pareto_left, pareto_right, pareto_noise, pareto_y, pareto_x = pa_re.reconstruct(filename, NODES, RECON_POINTS, model, obj1, obj2, cores=CORES)

These individual parts are all stored within the JSON file, so they can easily be plotted at a later date, and a batch script can be used to generate large amounts of complete data; a minimal plotting sketch is shown below.
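As an illustration of re-using the stored results, the following is a minimal sketch that reloads the JSON file written above and plots the generated Pareto front with matplotlib. It relies only on the 'pareto', 'obj1', 'obj2', 'obj1_str' and 'obj2_str' keys created by the snippet above; the plot styling is a placeholder.

import json
import matplotlib.pyplot as plt

# Reload the results written by the snippets above.
with open('test_data.json') as infile:
    data = json.load(infile)

# Extract the two objective values for every point on the Pareto front.
obj1_vals = [point['obj1'] for point in data['pareto']]
obj2_vals = [point['obj2'] for point in data['pareto']]

plt.scatter(obj1_vals, obj2_vals)
plt.xlabel(data['obj1_str'])
plt.ylabel(data['obj2_str'])
plt.title('Generated Pareto front')
plt.show()

A batch script can follow the same pattern, simply looping the generation, regression and reconstruction calls above over a list of models or objective pairs and writing each result to its own JSON file.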
