Warning: UNDER CONSTRUCTION
First install the dependencies below:
- Latest 5.X ROOT with PyROOT, RooFit, Minuit2, and HistFactory enabled.
- NumPy:
  pip install --user numpy
- HDF5:
  yum install hdf5 hdf5-devel
- PyTables:
  pip install --user tables
- scikit-learn (private branch with more support for event weights):
  git clone git://github.com/ndawe/scikit-learn.git
  cd scikit-learn
  git checkout -b htt origin/htt
  python setup.py install --user
- matplotlib:
  pip install --user matplotlib
- rootpy:
  git clone git://github.com/rootpy/rootpy.git
  cd rootpy
  python setup.py install --user
- root_numpy:
  pip install --user root_numpy
- yellowhiggs:
  pip install --user yellowhiggs
- GitPython:
  pip install --user GitPython
- tabulate:
  pip install --user tabulate
- hhntup:
  git clone git://github.com/htautau/hhntup.git
- hhana:
  git clone git://github.com/htautau/hhana.git
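Once everything above is installed, a quick way to confirm that the Python dependencies are importable is a check along the following lines (a minimal sketch that only verifies import names, not versions or build options):
# try importing each dependency and report what is missing
# note: GitPython is imported as 'git' and PyROOT as 'ROOT'
for name in ('ROOT', 'numpy', 'tables', 'sklearn', 'matplotlib',
             'rootpy', 'root_numpy', 'yellowhiggs', 'git', 'tabulate'):
    try:
        __import__(name)
        print('%s: OK' % name)
    except ImportError as err:
        print('%s: MISSING (%s)' % (name, err))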
Automatically organize ROOT and log files with:
init-ntup
The above also merges the output from the subjobs for embedding and data.
Add the Higgs pt weights:
make higgs-pt
This creates copies of all signal ROOT files ntuples_hh/running/hhskim/hhskim*.root at ntuples_hh/running/hhskim/weighted.hhskim*.root.
Make a backup of the original files:
mkdir ntuples_hh/running/hhskim/backup
mv ntuples_hh/running/hhskim/hhskim*.root ntuples_hh/running/hhskim/backup
Then remove the weighted. prefix from all of the file names:
rename weighted. "" ntuples_hh/running/hhskim/weighted.hhskim*.root
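If the rename utility on your system expects a different syntax (the form above matches the util-linux rename), the same step can be done with a short Python snippet; this is a minimal sketch assuming the weighted files sit under ntuples_hh/running/hhskim/ as described above:
# strip the leading "weighted." from the copied signal files
import glob
import os
for path in glob.glob('ntuples_hh/running/hhskim/weighted.hhskim*.root'):
    head, tail = os.path.split(path)
    os.rename(path, os.path.join(head, tail.replace('weighted.', '', 1)))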
Then move the hhskim running directory to production (replace XX with the skim version number):
mkdir ntuples_hh/prod_vXX
mv ntuples_hh/running/hhskim ntuples_hh/prod_vXX
Update the production path in the Makefile (HHNTUP) and in mva/__init__.py (NTUPLE_PATH).
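For illustration only, the intent of this step is to point both variables at the new production directory; the exact form of the assignments depends on the current contents of those files, so the line below is a hypothetical sketch of the change in mva/__init__.py (with HHNTUP set to the same path in the Makefile):
# hypothetical: point NTUPLE_PATH at the new production directory
NTUPLE_PATH = 'ntuples_hh/prod_vXX'  # replace XX with the skim version number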
And finally create the merged hhskim.root and hhskim.h5:
make ntup
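As a quick sanity check of the merged output, the HDF5 file can be listed with PyTables. This is a sketch that assumes hhskim.h5 ends up in the production directory and is a plain PyTables file; the table names and location are dataset- and version-dependent:
# list the tables in the merged HDF5 file and their row counts
# (older PyTables releases spell open_file/walk_nodes as openFile/walkNodes)
import tables
with tables.open_file('ntuples_hh/prod_vXX/hhskim/hhskim.h5') as h5file:
    for node in h5file.walk_nodes('/', classname='Table'):
        print('%s: %d rows' % (node._v_pathname, node.nrows))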
Generate the cache of all background normalizations:
make norms
Create all validation plots with:
make plots
Run the batch jobs that train the BDTs at each mass point with:
make train
Go get some coffee.
Create all the BDT validation plots with:
make mva-plots
Run the batch jobs to determine the optimal binning for each mass point in each category and year:
make binning
Go get some coffee.
Run the batch jobs that create the workspaces with:
make mva-workspaces
make cuts-workspaces
When the batch jobs are done, create the workspace combinations with:
make combine-mva
make combine-cuts
Apply all of the HSG4 workspace fixes with:
cd workspaces
fix-workspace --quiet --symmetrize --prune-shapes --chi2-thresh 0.9 hh_nos_nonisol_ebz_mva
fix-workspace --quiet --symmetrize --prune-shapes --chi2-thresh 0.9 hh_nos_nonisol_ebz_cuts
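To eyeball the result of the fixes, a workspace can be opened directly with PyROOT. The file name below is only a placeholder (like path_to_measurement_file.root elsewhere in this document), since the exact layout under workspaces/ depends on the category, year, and mass point:
# print the content (PDFs, datasets, nuisance parameters) of one workspace
import ROOT
rfile = ROOT.TFile.Open('workspaces/hh_nos_nonisol_ebz_mva/path_to_workspace_file.root')
for key in rfile.GetListOfKeys():
    obj = key.ReadObj()
    if isinstance(obj, ROOT.RooWorkspace):
        obj.Print()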
Construct the profile of every nuisance parameter (NP):
# submit a batch job for each NP; if --submit is omitted, simply print the commands
multinp scans_fit --submit --file path_to_measurement_file.root
# merge all of the output into a single file and compute the nominal NLL for normalisation
multinp merge --jobs -1 --file path_to_measurement_file.root
# clean the directory of the individual pickle files (keep only the master)
multinp clean --file path_to_measurement_file.root
Plot the NP profiles with:
plot-nuis path_to_measurement_file.root
Compute the pull of each nuisance parameter with:
multinp pulls --jobs -1 --file path_to_measurement_file.root
Plot the NP ranking/pulls with:
plot-ranking path_to_measurement_file.root
Compute the expected significance (bkg. only hypothesis) with:
# walk through the directory and its subdirectories looking for workspaces
multisig path_to_directory_containing_workspaces
Compute the postfit histograms and errors with:
# --fit-var can be bdt_score or mmc_mass
plot-postfit path_to_measurement_file.root --fit-var bdt_score --force-fit --jobs -1
# if the fit has already been performed
plot-postfit path_to_measurement_file.root --fit-var bdt_score
See also:
https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/NuisanceParameterPullsWithRanking
https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/StatisticsTools