Static shared memory in PSRAM for model/imageTMP and tensor_arena #2215
Conversation
* Testcase for #2145 and debug-log (#2151)
  * new models ana-cont-11.0.5, ana-class100-1.5.7, dig-class100-1.6.0
  * Testcase for #2145: added debug log if allowNegativeRates is handled
* Fix timezone config parser (#2169)
  * make sure to parse the whole config line
  * fix crash on empty timezone parameter
  Co-authored-by: CaCO3 <[email protected]>
* Enhance ROI pages (#2161)
  * Check if the ROIs are equidistant. Only if not, untick the checkbox
  * Check if the ROIs have same y, dy and dx. If so, tick the sync checkbox
  * Only allow editing space when box is checked
  * Fix sync check
  * Show inner frame and cross hairs on all ROIs
  * Checkbox position
  * Update ROIs on ticking checkboxes
  * Show timezone hint
  * Fix deleting last ROI
  * Renaming and cleanup
  Co-authored-by: CaCO3 <[email protected]>
* Restart timeout on progress, catch error (#2170)
  Co-authored-by: CaCO3 <[email protected]>
* BugFix #2167
* Release 15.1 preparations (#2171)
  * Multiple Changelog.md updates, formatting and PR link fixes
  Co-authored-by: Slider0007 <[email protected]>
* Fix typo
* Replace relative documentation links with absolute ones pointing to the external documentation (#2180)
  Co-authored-by: CaCO3 <[email protected]>
* Sort model files in configuration combobox (#2189)
* Reboot task: increase stack size to avoid stack overflow (#2201)
* Update interface_influxdb.cpp
* Update Changelog.md

Co-authored-by: Frank Haverland <[email protected]>
Co-authored-by: CaCO3 <[email protected]>
Co-authored-by: Slider0007 <[email protected]>
@Slider0007 @jomjol Maybe you have an idea why we run into the …
Example log where the first loaded model is smaller than the 2nd one, which in turn is smaller than `imageTMP`:
@caco3, @jomjol: This is my assumption why the first round with your approach works but the second round fails:

**Worst case (max. model sizes)** — visualization

**Slider0007 approach: keep models loaded, share tensor_arena/imageTMP**

I also did some research on this topic. The approach works up to a total model size of 1000 kB (model1 + model2); beyond that it also hangs at the same point as @caco3's approach. This means 1200 kB of models no longer work with this approach either. More detailed findings are described in the private chat. The test branch is located here, if you'd like to have a look at it:
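For illustration, a minimal sketch of how such a statically preallocated, shared PSRAM block could look using the standard ESP-IDF heap API. The names (`SHARED_BLOCK_SIZE`, `InitSharedPsramBlock`, `GetSharedPsramBlock`) and the size are assumptions for this example, not code from the test branch:

```cpp
#include <cstddef>
#include <cstdint>
#include "esp_heap_caps.h"

// Assumed worst-case size: must cover the largest consumer that shares the
// block (tensor_arena, the models or imageTMP).
static constexpr size_t SHARED_BLOCK_SIZE = 1200 * 1024;

static uint8_t *shared_block = nullptr;

// Allocate the block once at startup and never free it, so its position in
// PSRAM stays fixed and cannot contribute to fragmentation.
bool InitSharedPsramBlock()
{
    if (shared_block != nullptr) {
        return true; // already allocated
    }
    shared_block = static_cast<uint8_t *>(
        heap_caps_malloc(SHARED_BLOCK_SIZE, MALLOC_CAP_SPIRAM));
    return shared_block != nullptr;
}

// The consumers reuse this block in different, non-overlapping stages of the
// flow instead of allocating their own buffers.
uint8_t *GetSharedPsramBlock(size_t required)
{
    if (shared_block == nullptr || required > SHARED_BLOCK_SIZE) {
        return nullptr; // caller has to fall back or report an error
    }
    return shared_block;
}
```

The trade-off discussed above shows up directly in the constant: the block has to be sized for the worst case, so that amount of PSRAM stays reserved permanently even when smaller models are configured.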
### Output of PSRAM memory blocks after a few rounds (flow finished)

---> Helper structure memory Block 0x3fa4ac2c data, size: 921604 bytes, Free: No

During the "Take Image" state in the following round (marked CImage helper structure block):

---> Helper structure memory Block 0x3fa4ac2c data, size: 921604 bytes, Free: No

If nothing comes in between, the same free blocks are allocated again. But whenever something else is allocated in between and takes only a few bytes, PSRAM gets even more fragmented. That's my major concern!

### Actually implemented version without any preallocation (v15.0.3)

It seems that the current implementation is the best balanced version, but it only works if no fragmentation occurs, which in my opinion is the main issue. If we get rid of the fragmentation, we have a really good base for further improvements. Unfortunately this would exclude the possibility of using the WiFi stack and BSS in PSRAM and would reduce the internal RAM again, which is really bad to see. Up to now I have no clue how to get around this obstacle.
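As a side note, here is a hedged sketch of how the fragmentation could also be observed with the standard ESP-IDF heap API, independent of the helper-structure dump shown above (the function name is made up for this example):

```cpp
#include "esp_heap_caps.h"
#include "esp_log.h"

static const char *TAG = "PSRAM";

// Log total free PSRAM vs. the largest contiguous free block; a big gap
// between the two numbers is a direct indicator of fragmentation.
void LogPsramFragmentation()
{
    size_t free_total   = heap_caps_get_free_size(MALLOC_CAP_SPIRAM);
    size_t largest_free = heap_caps_get_largest_free_block(MALLOC_CAP_SPIRAM);
    ESP_LOGI(TAG, "PSRAM free: %u bytes, largest free block: %u bytes",
             (unsigned)free_total, (unsigned)largest_free);

    // Summary of each PSRAM heap region (free/allocated bytes, block counts).
    heap_caps_print_heap_info(MALLOC_CAP_SPIRAM);
}
```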
See #2200 for details
Thanks @Slider0007 for the helpful visualization! As a side note, it would not be difficult to tell the … Using …
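For context, a TFLite Micro `MicroInterpreter` works on an externally provided tensor arena, so a single PSRAM block could in principle be handed to every interpreter instance. Below is a rough sketch; the arena size, names and the exact constructor arguments (older tflite-micro versions additionally require an `ErrorReporter`) are assumptions, not the project's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include "esp_heap_caps.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical arena size for illustration.
static constexpr size_t kTensorArenaSize = 800 * 1024;

// One tensor arena in PSRAM, allocated exactly once and reused by every model.
static uint8_t *tensor_arena = nullptr;

tflite::MicroInterpreter *MakeInterpreter(const tflite::Model *model,
                                          const tflite::MicroMutableOpResolver<8> &resolver)
{
    if (tensor_arena == nullptr) {
        tensor_arena = static_cast<uint8_t *>(
            heap_caps_malloc(kTensorArenaSize, MALLOC_CAP_SPIRAM));
    }
    if (tensor_arena == nullptr) {
        return nullptr;
    }
    // The interpreter only plans allocations inside the buffer it is given;
    // it does not allocate an arena of its own, so consecutive models can
    // share the same block.
    return new tflite::MicroInterpreter(model, resolver,
                                        tensor_arena, kTensorArenaSize);
}
```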
See #2200 for details

Co-authored-by: CaCO3 <[email protected]>
* fix missing value data

Co-authored-by: CaCO3 <[email protected]>
* Use double instead of float
* Error handling + set to RAW if newvalue < 0
* REST SetPrevalue: set to RAW if newvalue < 0
* Set prevalue with MQTT
Some links which might help for further analysis:
- stb_image.h: version update 2.25 -> 2.28
- stb_resize.h: version update 0.96 -> 0.97
- stb_write.h: version update 1.14 -> 1.16

Co-authored-by: CaCO3 <[email protected]>
* Rename module tag name
* Rename server_tflite.cpp -> MainFlowControl.cpp
* Remove redundant MQTTMainTopic function
* Update
* Remove obsolete GetMQTTMainTopic
…n-the-edge-device into shared-psram-objects
Proof of Concept
* Use the same memory block for all `tensor_arena`s. This is working ok.
* Use the same memory block for both models and `imageTMP`. If the allocated memory gets too small, it gets freed and a larger block gets allocated (see the sketch below). This happens within the first round; after that no change is needed anymore. This seems to work fine for the first round, but in the 2nd round I always run into … The `fb`, however, contains valid data!
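A minimal sketch of the grow-only reuse pattern described above, assuming a single shared buffer and the standard ESP-IDF heap API; the names (`GetSharedBuffer`, `shared_buf`) are illustrative, not the actual PoC code:

```cpp
#include <cstddef>
#include <cstdint>
#include "esp_heap_caps.h"

// Shared PSRAM buffer that is only ever replaced by a larger one. After the
// first round the required sizes are known, so the block stays fixed.
static uint8_t *shared_buf = nullptr;
static size_t   shared_buf_size = 0;

uint8_t *GetSharedBuffer(size_t required)
{
    if (required > shared_buf_size) {
        // Current block is too small: free it and allocate a larger one.
        if (shared_buf != nullptr) {
            heap_caps_free(shared_buf);
        }
        shared_buf = static_cast<uint8_t *>(
            heap_caps_malloc(required, MALLOC_CAP_SPIRAM));
        shared_buf_size = (shared_buf != nullptr) ? required : 0;
    }
    return shared_buf;
}
```

Note that the free/re-allocate step is exactly where fragmentation can bite: if another task allocates from PSRAM between the `heap_caps_free()` and the following `heap_caps_malloc()`, the freed region gets split and the larger request may have to be placed elsewhere, as discussed in the comments above.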