Avoiding large daq files #1164

Closed
mariorl opened this issue Apr 24, 2024 · 5 comments
@mariorl

mariorl commented Apr 24, 2024

Hi, I'm facing problems with large .db DAQ files: when a somewhat complicated process needs to be recorded, it generates db files of about 45 MB or so. I've noticed that dealing with these big JSON sets when trying to show them on a trend is slow, leads to an out-of-memory error in the browser, and eventually makes the whole FUXA node process fail.

One way to dramatically reduce the amount of data to be recorded could be simply to set a threshold in the Tag Options dialog. That way, you can set a long datalogging period, for example one sample every 60 seconds, and let the crossing of the threshold be the trigger for an extra datalog.

I'm very experienced with these techniques, and as a heads-up, I can tell you this kind of triggering (by threshold) could raise another problem: a very jerky signal from a broken sensor, for example, or an electrically noisy environment, will trigger the datalogging many times. In these cases I usually use a measurement rejection based on comparing with the previous value. But this second part of the signal conditioning is not needed most of the time.
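A minimal sketch of the proposed scheme in plain Node.js (the constants and the maybeLog() helper are illustrative, not FUXA's actual DAQ API): log on a slow fixed period, add an extra sample whenever the value moves past a deadband, and reject implausible jumps as noise.

```js
const PERIOD_MS = 60_000;  // slow baseline logging period (one sample per 60 s)
const DEADBAND  = 5.0;     // change that triggers an extra datalog
const MAX_STEP  = 50.0;    // reject larger jumps as sensor/electrical noise

let lastValue = null;      // last logged value
let lastTime  = 0;         // timestamp of the last logged sample

function maybeLog(value, now = Date.now()) {
  // Measure rejection: ignore implausible jumps from a broken sensor or noise
  if (lastValue !== null && Math.abs(value - lastValue) > MAX_STEP) return;

  const periodic = now - lastTime >= PERIOD_MS;
  const crossing = lastValue === null || Math.abs(value - lastValue) >= DEADBAND;
  if (periodic || crossing) {
    // Store the sample here (DAQ write, DB insert, ...)
    console.log(new Date(now).toISOString(), value);
    lastValue = value;
    lastTime = now;
  }
}
```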

@robsori

robsori commented Apr 24, 2024

I think you mean hysteresis. Yes, it is necessary, especially for analog signals. For example, when tracking a level that must reach a certain threshold, an action such as archiving can be triggered unwantedly many times around the set threshold value.
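A rough hysteresis sketch along these lines, again with made-up names rather than anything from FUXA: the action fires once when the level rises above the threshold and is re-armed only after the level drops back below the threshold minus the hysteresis band, so a noisy signal hovering around the threshold does not trigger repeatedly.

```js
const HIGH = 80.0;   // level threshold that should trigger the action
const BAND = 2.0;    // hysteresis band below the threshold

let armed = true;

function onLevelSample(level) {
  if (armed && level >= HIGH) {
    armed = false;            // fire once, then stay quiet
    archiveSample(level);     // the action, e.g. an extra datalog
  } else if (!armed && level < HIGH - BAND) {
    armed = true;             // re-arm only well below the threshold
  }
}

function archiveSample(level) {
  console.log('archived', level);
}
```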

@MatthewReed303
Collaborator

@unocelli and I have been working on the ODBC driver for database support in the odbc branch. I'm using PostgreSQL and pgAdmin. Currently, writing to the DB from scripting works fine, so you can also log your tags to the DB with any logic/filtering you want from within the script. The Table to read back from the DB has not been done yet. I can provide some example code to get you going, and the docker-compose to set up PostgreSQL etc.

Another option is to use InfluxDB, which is faster and compresses the stored data better, but you are still limited to the FUXA default logging options.
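A hedged sketch of what script-driven logging to PostgreSQL could look like, using the 'pg' npm package directly rather than the FUXA ODBC driver's actual API; the daq_log table and the connection settings are illustrative only, and the tag value would come from whatever tag access the script already has.

```js
const { Client } = require('pg');

async function logTag(tagName, value) {
  // Illustrative connection settings; adjust to your docker-compose setup
  const client = new Client({
    host: 'localhost',
    port: 5432,
    user: 'fuxa',
    password: 'fuxa',
    database: 'daq',
  });
  await client.connect();
  try {
    // Parameterized insert into an example table: daq_log(ts, tag, value)
    await client.query(
      'INSERT INTO daq_log (ts, tag, value) VALUES (now(), $1, $2)',
      [tagName, value],
    );
  } finally {
    await client.end();
  }
}

// Example usage from a script, after whatever threshold/deadband filtering you want
logTag('Tank1.Level', 42.7).catch(console.error);
```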

@mariorl
Author

mariorl commented Apr 25, 2024

Thanks Robsori and Matthew. These improvements on the server/db side look fine; I will take them into account in the next project. Anyway, it is also the browser that runs out of memory, so it may be a matter of sending very large JSON sets or something related (I'm not a full-stack dev, only a PLC programmer). What I mean is, irrespective of the server-side improvements, it would be advisable to avoid large db files with techniques such as thresholding, with limits set in the parametrization.

@unocelli
Member

unocelli commented May 7, 2024

Hi, I think you mean a feature to set a 'deadband' on the tags for DAQ registration.
Sending the data to the client is a matter of finding a compromise: you cannot have a detailed history if you want to look at a month's worth of data.

unocelli added a commit that referenced this issue Sep 20, 2024
@unocelli
Member

I'm going to close this as resolved. Let me know if you have any issues.
