NEST encounters bad_alloc exception when changing tic length #644
Comments
@gtrensch This is a slightly mysterious bug that may provide an interesting opportunity to delve into the NEST kernel---interested?
@heplesser Yes, I like mysterious bugs.
So the question is: which array binned by tic size isn't properly resized when tics are reset?
@apeyser It's the spike buffer. Something goes wrong with the min_delay calculation when tics_per_ms is < 1000.0.
@heplesser Before I create a pull request, could you please have a look at the text below, in particular my remarks at the end, and let me know what you think? Perhaps we can also discuss this in Oslo during the hackathon. @apeyser Many thanks for your hints and the debug session! The problem is the following:
Lowering tics_per_ms below 1000.0 also changes the maximum representable time limits (LIM_MAX) of the Time class.
During the instantiation of the DelayCheckers, the tics value of the min_delay_ object is preset to positive infinity (not intuitive) by calling Time::pos_inf(). This means DelayChecker::calibrate destroys the infinity value when the TimeConverter is called. The problem is not visible for TICS_PER_STEP >= 1000, because a boundary check in a nested macro hides the issue whenever a new Time object is instantiated: the absolute infinity value contained in t.t won't be smaller than LIM_MAX.ms, so it is clamped back to the infinity marker.
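For illustration, here is a minimal sketch of that boundary check; it is not the actual NEST macro, and the parameters lim_max_ms and lim_pos_inf_tics are stand-ins for LIM_MAX.ms and the infinity tics value:

```cpp
#include <cmath>
#include <cstdint>

// Sketch only: a ms value at or beyond LIM_MAX.ms is clamped back to the
// infinity marker; anything below it is converted to tics as usual.
std::int64_t tics_from_ms( double ms, double tics_per_ms,
                           double lim_max_ms, std::int64_t lim_pos_inf_tics )
{
  if ( std::abs( ms ) >= lim_max_ms )               // boundary check
    return ms < 0.0 ? -lim_pos_inf_tics : lim_pos_inf_tics;
  return static_cast< std::int64_t >( ms * tics_per_ms );
}
```

With TICS_PER_STEP >= 1000, the ms image of the converted infinity value trips this check and is mapped back to infinity, which is why the destroyed marker goes unnoticed.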
For TICS_PER_STEP < 1000 this condition no longer holds: LIM_MAX.ms grows beyond the converted value. In the example above, calibration therefore produces a new tics value of 115292150460684704. This is not the infinity value anymore, so the if-statement in ConnectionManager::update_delay_extrema_ that checks against infinity fails!
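A hedged sketch of the kind of check involved (the exact member names and the fallback used in ConnectionManager may differ):

```cpp
// Sketch only: update_delay_extrema_ treats "still at infinity" as
// "no user-specified delay yet" and falls back to a default.
if ( min_delay_ == Time::pos_inf() )
  min_delay_ = Time::get_resolution();
// After the broken calibration, min_delay_ holds the finite value
// 115292150460684704 tics, so this branch is skipped and the bogus
// value flows into the subsequent steps calculation.
```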
Based on the invalid tics value, an inappropriate steps value is calculated, which in turn is used to resize the moduli_ array at simulation start, causing the bad_alloc. Solution: preserve the infinity markers across calibration instead of converting them (a sketch of this follows below). Some general remarks: in my opinion the TimeConverter belongs to the Time class; I would merge them. The Time class itself is very hard to read and to debug. For example, nested macros in initializers would be better as functions, values that look like constants are sometimes overwritten, and some of the names are ambiguous and not descriptive (min_delay, for example, is used for different things). That is to say, I suggest a small, conservative refactoring.
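As a hedged sketch of that direction (the actual PR may look different; the calls to is_pos_inf, is_neg_inf, get_tics, and from_old_tics reflect my reading of the NEST API), calibration could simply skip the conversion for the infinity markers:

```cpp
// Hypothetical guard in DelayChecker::calibrate: leave the infinity
// markers untouched so they survive a change of tics_per_ms.
void DelayChecker::calibrate( const TimeConverter& tc )
{
  if ( not min_delay_.is_pos_inf() )   // only convert real delays
    min_delay_ = tc.from_old_tics( min_delay_.get_tics() );
  if ( not max_delay_.is_neg_inf() )
    max_delay_ = tc.from_old_tics( max_delay_.get_tics() );
}
```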
I think the issue is that there exist a large number of time units: steps (int), ms (float), tics (int), delay (naked int)... There should be one universal unit, and all inputs should immediately be converted into it. It should be a very simple class without all the object-oriented complexity. It needs to be transparent and fast---a previous iteration was eating 50% of simulation time in constructing and handling time objects. Unfortunately, this is a lot of work.
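A hedged illustration of that idea (not NEST code): a single tic-based value type into which every input is converted once at the boundary:

```cpp
#include <cstdint>

// Illustration only: one universal unit (tics). Conversions happen once
// on input; internally everything is a trivially copyable integer, so
// the type costs no more than a raw int64.
struct TicTime
{
  std::int64_t tics;

  static TicTime from_ms( double ms, double tics_per_ms )
  {
    return { static_cast< std::int64_t >( ms * tics_per_ms ) };
  }

  static TicTime from_steps( std::int64_t steps, std::int64_t tics_per_step )
  {
    return { steps * tics_per_step };
  }

  double to_ms( double tics_per_ms ) const
  {
    return static_cast< double >( tics ) / tics_per_ms;
  }
};
```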
@gtrensch @apeyser Thank you very much for your detective work! I agree that the best fix for now is to modify the calibration as suggested. The reason for initializing with positive/negative infinity is that it makes comparisons easy---otherwise, one would always have to check against invalid values. @gtrensch Would you create a PR?
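For illustration (not NEST code), presetting the minimum to positive infinity lets every update be a single comparison, with no special case for "no value seen yet":

```cpp
#include <algorithm>
#include <limits>

double min_delay = std::numeric_limits< double >::infinity();

void register_delay( double d )
{
  // The first real delay always wins against +infinity, so no
  // extra "is min_delay valid yet?" check is needed.
  min_delay = std::min( min_delay, d );
}
```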
PR has been created. The TimeConverter class has not been merged into the Time class. This would be better done in the course of a refactoring task.
Changing the tic length, i.e., setting tics_per_ms together with the resolution, causes a bad_alloc exception (per hash 1cb0529, OSX 10.11 with gcc 6.2). Interestingly, a number of examples and tests, e.g. ArtificialSynchrony.sli and test_iaf_ps_dc_t_accuracy.sli, set resolution and tics in this way and work. In those cases, a full network is built before Simulate is called. Creating just a single neuron does not get rid of the exception. See also #643.