I have been processing some samples, made with essentially a particle gun, through proto_nd_flow. The sample contains 1000 events of vertically oriented 1-5 GeV MIP muons in the 2x2. During the charge_event_building stage, while running on charge/raw_events, the memory usage climbs to somewhere between roughly 75 GB and 150 GB, which is clearly far too large. This step also takes a while to run, but I'm not sure whether that is the expected runtime or a symptom of the memory issue. I am running on Perlmutter using the develop branch of ndlar_flow. The file I'm using is at /global/cfs/cdirs/dune/users/sfogarty/verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise.h5 if anyone wants to try to replicate the issue.
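Not part of the original report, but for anyone trying to reproduce this: one quick way to quantify the blow-up on a Linux host such as Perlmutter is to record the process's peak resident set size with the standard-library resource module. The 200 MB allocation below is just a stand-in for whatever the flow step actually does.

```python
import resource

def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS),
    # so this conversion assumes a Linux host such as Perlmutter.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

before = peak_rss_mb()
blob = bytearray(200 * 1024 * 1024)  # stand-in for the memory-hungry step
after = peak_rss_mb()
print(f"peak RSS grew by ~{after - before:.0f} MB")
```

Sampling peak RSS before and after each workflow stage (or just watching the process with top/sacct) would at least localize which part of charge_event_building is responsible.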
The flow output I get is:
WARNING:root:Running without mpi4py because No module named 'mpi4py'
~~~ H5FLOW ~~~
output file: /global/cfs/cdirs/dune/users/sfogarty//verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise_flow.h5
input file: /global/cfs/cdirs/dune/users/sfogarty//verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise.h5
~~~~~~~~~~~~~~
~~~ WORKFLOW (1/5) ~~~
yamls/proto_nd_flow/workflows/charge/charge_event_building.yaml
~~~~~~~~~~~~~~~~
~~~ INIT ~~~
Hello
create RunData() /global/cfs/cdirs/dune/users/sfogarty//verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise.h5
create RawEventGenerator(charge/raw_events) /global/cfs/cdirs/dune/users/sfogarty//verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise.h5
RunData.init(charge/raw_events)
WARNING:root:Source dataset charge/raw_events has no inputfile in metadata stored under 'input_filename', using {self.input_filename} for RunData lookup
WARNING:root:Could not find row matching /global/cfs/cdirs/dune/users/sfogarty//verticalMuons_1000_events_2x2_1GeV_to_5GeV_larndsim_newLightNoise.h5 in data/proto_nd_flow/runlist-2x2-mcexample.txt
RawEventGenerator.init()
generating truth references: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 999/999 [00:08<00:00, 112.76it/s]
~~~~~~~~~~~~
~~~ RUN ~~~
Run loop on charge/raw_events:
0%| | 0/2 [00:00<?, ?it/s]
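For scale, here is a rough back-of-envelope estimate. The packet size and hits-per-event numbers below are my own guesses, not values taken from ndlar_flow or from this sample, but even with generous assumptions, buffering every raw packet of a 1000-event MIP sample should cost well under 1 GB, which suggests the 75-150 GB usage comes from something other than the raw packet data itself (e.g. duplicated buffers or reference-building intermediates).

```python
# Hypothetical numbers, for scale only: neither the packed packet size
# nor the packets-per-event count is taken from ndlar_flow.
bytes_per_packet = 40          # assumed packed dtype size
packets_per_event = 10_000     # generous guess for a single MIP muon
n_events = 1000

total_gb = bytes_per_packet * packets_per_event * n_events / 1e9
print(f"naive whole-sample buffer estimate: {total_gb:.1f} GB")
```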
The macro used to make the sample is the following:
/edep/random/timeRandomSeed
/edep/gdml/read Merged2x2MINERvA_v3_withRock.gdml
/edep/phys/ionizationModel 0
/edep/hitSeparation volLArActive -1 mm
/edep/update
/gps/pos/type Volume
/gps/pos/shape Para
/gps/pos/centre 0.0 -100.0 1300 cm
/gps/pos/halfx 64.365 cm
/gps/pos/halfy 1.0 cm
/gps/pos/halfz 64.750 cm
#/gps/ang/type iso
/gps/direction 0 -1 0
/gps/particle mu-
/gps/ene/type User
/gps/hist/type energy
/gps/hist/point 1000 0.2
/gps/hist/point 2000 0.2
/gps/hist/point 3000 0.2
/gps/hist/point 4000 0.2
/gps/hist/point 5000 0.2
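For reference, the five /gps/hist/point lines define a user energy histogram with equal weights, i.e. an approximately flat muon spectrum from 1 to 5 GeV (GPS energies are in MeV). I have not verified the exact GPS convention for how the first point's weight is treated; the sketch below simply draws uniformly within equally weighted bins to illustrate the intended spectrum.

```python
import random

# Bin edges in MeV, mirroring the /gps/hist/point lines above;
# one weight per bin between consecutive edges (all equal here).
edges = [1000.0, 2000.0, 3000.0, 4000.0, 5000.0]
weights = [0.2, 0.2, 0.2, 0.2]

def sample_energy(rng=random):
    # Pick a bin by weight, then draw uniformly within it.
    i = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.uniform(edges[i], edges[i + 1])

energies = [sample_energy() for _ in range(10_000)]
print(f"min {min(energies):.0f} MeV, max {max(energies):.0f} MeV, "
      f"mean {sum(energies) / len(energies):.0f} MeV")
```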
Here is the script I use to run ndlar_flow: