Avoiding large daq files #1164
I think you mean hysteresis. Yes, it is necessary, especially for analog signals. For example, tracking a level that must reach a certain threshold can lead to an action such as archiving being triggered many unwanted times around the set threshold value.
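To make the hysteresis idea concrete, here is a minimal TypeScript sketch (hypothetical, not FUXA code): the trigger fires once when the value rises above an upper threshold and is re-armed only after the value drops below a lower one, so noise hovering around a single set point cannot fire the action repeatedly.

```typescript
// Minimal hysteresis sketch (hypothetical, not FUXA code): the action fires
// once on a rising crossing of the upper threshold and is re-armed only
// after the value falls below the lower threshold.
class HysteresisTrigger {
  private armed = true;

  constructor(
    private readonly upper: number, // threshold that fires the action
    private readonly lower: number  // level that re-arms the trigger
  ) {}

  // Returns true only on the rising crossing of `upper`.
  update(value: number): boolean {
    if (this.armed && value >= this.upper) {
      this.armed = false;
      return true; // trigger the action (e.g. archive a sample)
    }
    if (!this.armed && value <= this.lower) {
      this.armed = true; // re-arm once the signal has clearly dropped back
    }
    return false;
  }
}

// Usage: fires on the first rise above 100, ignores chatter around the
// threshold, and fires again only after the level has dropped below 90.
const trigger = new HysteresisTrigger(100, 90);
for (const level of [98, 101, 99.5, 100.2, 95, 89, 102]) {
  if (trigger.update(level)) {
    console.log(`archive sample at level ${level}`);
  }
}
```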
@unocelli and I have been working on the ODBC driver for database support in the odbc branch. I'm using PostgreSQL and pgAdmin. Currently, writing to the DB from the scripting works fine, so you can also log your tags to the DB with any logic/filtering you want from within the script. The table to read back from the DB has not been done yet. I can provide some example code to get you going, and the docker-compose to set up PostgreSQL etc. Another option is to use InfluxDB, which is faster and compresses the stored data better, but there you are still limited to the FUXA default logging options.
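The odbc-branch scripting API is not shown in this thread, so here is only a rough sketch of the idea under the assumption that a script can reach a local PostgreSQL instance directly: a plain Node.js/TypeScript example using the standard `pg` client. The table name, columns, credentials and tag name are illustrative assumptions, not the FUXA driver's actual schema.

```typescript
// Rough sketch of logging a tag value to PostgreSQL from a Node.js script,
// using the standard `pg` client. Table name, columns and credentials are
// assumptions for illustration; the FUXA odbc branch may differ.
import { Client } from 'pg';

const client = new Client({
  host: 'localhost',
  port: 5432,
  user: 'fuxa',
  password: 'fuxa',       // assumed credentials from a local docker-compose setup
  database: 'fuxa_logs',
});

async function logTag(tagName: string, value: number): Promise<void> {
  await client.query(
    'INSERT INTO tag_history (tag_name, value, ts) VALUES ($1, $2, NOW())',
    [tagName, value]
  );
}

async function main(): Promise<void> {
  await client.connect();
  // Only log when your own filtering logic decides a sample is worth storing,
  // e.g. after a deadband/threshold check as discussed in this issue.
  await logTag('Boiler.Temperature', 78.4);
  await client.end();
}

main().catch(console.error);
```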
Thanks Robsori and Matthew. These improvements on the server/DB side look fine; I will take them into account in the next project. However, it is also the browser that runs out of memory, so it may also be a matter of sending very large JSON sets or something related (I'm not a full-stack dev, only a PLC programmer). What I mean is, irrespective of the server-side improvements, it would be advisable to avoid large DB files with techniques such as thresholding and setting limits in the parametrization.
Hi, I think you mean a feature to set a 'deadband' on the tags for DAQ registration.
I'm going to close this as resolved. Let me know if you have any issues.
Hi, I'm facing problems with large .db DAQ files, because when a somewhat complicated process needs to be recorded, it generates DB files of about 45 MB or so. I've noticed that dealing with these big JSON sets when trying to show them on a trend is slow, leads to an out-of-memory browser error, and eventually the whole FUXA node process fails.
One way to dramatically reduce the amount of data to be recorded could be simply to set a threshold in the Tag Options dialog. That way, you can set a long datalogging period, for example one sample every 60 seconds, and let the crossing of the threshold be the trigger for an extra datalog.
I'm very experienced with these techniques, and as a heads-up, I can tell you this kind of triggering (by threshold) could raise another problem: a very jerky signal from a broken sensor, for example, or an electrically noisy environment will trigger the datalogging many times. In these cases I usually use measurement rejection based on comparison with the previous value. But this second part of the signal conditioning is not needed most of the time.
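As a minimal sketch of the filtering described above (hypothetical, not FUXA code): log on a slow fixed period, add an extra sample whenever the value moves more than a deadband away from the last stored value, and reject isolated spikes by comparing against the previous raw reading.

```typescript
// Minimal sketch (hypothetical, not FUXA code) of deadband logging with a
// slow base period plus simple spike rejection against the previous reading.
interface Sample { ts: number; value: number; }

class FilteredLogger {
  private lastStored?: Sample;
  private lastRaw?: number;

  constructor(
    private readonly periodMs: number,   // base logging period, e.g. 60000
    private readonly deadband: number,   // change that forces an extra sample
    private readonly maxStep: number,    // jump treated as sensor noise/spike
    private readonly store: (s: Sample) => void
  ) {}

  update(value: number, ts: number): void {
    // Spike rejection: ignore a reading that jumps implausibly far from the
    // previous one; a real broken-sensor filter would be more elaborate.
    if (this.lastRaw !== undefined && Math.abs(value - this.lastRaw) > this.maxStep) {
      this.lastRaw = value; // remember it so a sustained change passes next time
      return;
    }
    this.lastRaw = value;

    const dueByTime =
      !this.lastStored || ts - this.lastStored.ts >= this.periodMs;
    const dueByDeadband =
      this.lastStored !== undefined &&
      Math.abs(value - this.lastStored.value) >= this.deadband;

    if (dueByTime || dueByDeadband) {
      this.lastStored = { ts, value };
      this.store(this.lastStored);
    }
  }
}

// Usage: 60 s base period, 2-unit deadband, reject single jumps larger than 50.
const logger = new FilteredLogger(60_000, 2, 50, s =>
  console.log(`store ${s.value} @ ${new Date(s.ts).toISOString()}`)
);
logger.update(20.1, Date.now());
```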