Replies: 4 comments 7 replies
-
I don't really have any reason to doubt that if you turn on debug to view the raw data coming from the inverter, it will show the same thing. The values ultimately passed to entities aren't altered in any way between the raw hex and the display value, except for applying the scaling factor (and only if the associated data point has one). A 5-second interval would generate a very large log; it might even cause HA to crash, since I've seen my dev systems freeze, or the supervisor watchdog reboot core, when I've left debugging on for a long time.
It's always a possibility to start adding validation limits, such as only displaying temperatures in the range -30°C to 100°C (-22°F to 212°F for me). The reason I haven't done that is that there aren't really any documented ranges, so any valid ranges put in the integration would be made-up values for what could be considered "normal". But if the inverters can't always be trusted to return sane values, then perhaps applying reasonable ranges should be an option.
I run at 30 seconds normally for my home installation; the only time I go lower is in my development virtual machines for testing.
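If validation limits were ever added as an option, the idea above could be sketched roughly like this. Note the bounds and names here are illustrative assumptions, not documented inverter limits or actual integration code:

```python
# Hypothetical sketch of optional range validation for a data point.
# The bounds below are assumed "reasonable" values, not documented ones.

HEATSINK_TEMP_RANGE_C = (-30.0, 100.0)  # assumed plausible heat sink range

def validate(value, valid_range):
    """Return the value if it falls inside the range, else None so the
    entity can report unknown instead of recording a spurious spike."""
    low, high = valid_range
    return value if low <= value <= high else None
```

A value like -99.2°C would then come back as None and never reach the entity history, at the cost of silently hiding a reading that might have been diagnostic.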
-
I'm not hugely keen on validation limits; as you mention, if there aren't documented ranges then we shouldn't. I'd also rather see the uninterpreted output from the source, but that ideal starts to suffer when we get spurious values like this. I had a quick further look this morning and found that the spikes in values occurred at the same time as a script of mine changes the storage command mode, which, due to my electricity tariff, happens between 4pm and 7pm. I'm going to turn on debug out of curiosity, but spotting something might be difficult as the spikes are not consistently found in a single entity. Thanks for your comments.
-
Whilst I had debug running I didn't catch any spurious values, but annoyingly, shortly after I turned debug off, I did. I did notice some errors in the logs anyway, stating "Unexpected error fetching SolarEdge Coordinator data: unpack requires a buffer of 2 bytes". And when I look at the entity history, I can see that values are being recorded at irregular intervals (5 seconds, 1 second, then 5 seconds, then 1). So whilst looking for one thing, I found another. I'm going to wind the updates back to every 10 seconds to see if it's a timing issue, but am happy to try something out if it's needed.
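For context on that log message: a 16-bit Modbus register decodes from exactly 2 bytes, and Python's struct module raises exactly that error when the buffer it is handed is shorter than the format requires. A minimal reproduction (illustrative only, not the integration's actual decode path):

```python
import struct

# A 16-bit signed register decodes from exactly 2 bytes. If a response
# comes back truncated (for example, a read that crossed paths with a
# concurrent control write), struct raises the error quoted in the log.
value = struct.unpack(">h", b"\x00\x2a")[0]

try:
    struct.unpack(">h", b"\x00")  # truncated: only 1 byte available
    error_message = None
except struct.error as exc:
    error_message = str(exc)
```

So the error suggests the coordinator received a shorter response than it expected for at least one register read.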
-
Looking through the logs a little more at the timings of the strange values: the values, and occasionally the above message ("buffer of 2 bytes"), seem to occur when I'm sending a new command mode down to the inverter. Given I'm polling quite often, the chances of the two crossing paths would increase. The additional logs I gathered contained little else than I shared previously.
How does the protocol and/or inverter handle concurrent commands? For example, the normal polling takes some time for the inverter(s) to respond, but if I send a control command whilst we're waiting for all the values to come back from the normal polling, which of our senders gets the response? Or does the integration, or pymodbus, handle serializing the requests to the inverter, especially given it's only really designed to connect with one client? I couldn't see much in the specs you've shared about how to serialize commands, other than stating that only one connection is allowed.
-
I've not got enough information to raise a bug, but can some of you look around to see if you have any funny values appearing in your entities?
For example, today my I1 AC Power hit an impressive 139 kW for one of the reads.
And then at separate times, my I1 Temp Sink hit -99.2°C and -99°C.
Today is the first day I've seen this. Given I'm polling quite frequently (every 5 seconds), I'm loath to turn on debug logging for an unknown period of time to catch this, unless the logs wrap (?).
Any thoughts on a good way to capture the debug logs, without filling up disk space? Or does HA just handle this for us?
I know I'm running the version in which some of you have reported additional batteries appearing, but as I haven't suffered from that, I haven't moved to the rollback release.
home-assistant_solaredge_modbus_multi_2023-08-09T16-16-47.951Z.log