MED1: Timestamp check in databroker #732
Conversation
Branch updated from 416eaa4 to 4d1a72a.
Should we possibly document our time handling somewhere? I.e. something that describes the two "main" alternatives (timestamps set by Databroker and/or timestamps set by the provider). I can see some theoretical corner cases that could cause problems, like if we have multiple sources/providers for a signal and they have slightly different time. But maybe the recommendation then is either to let the broker set the time, to make sure that all sources/providers have synced time, or simply to recommend not having multiple providers for the same signals.
Yes, documenting this would be good. We could opt to allow some difference between timestamps. I think for the actuation provider it should stay as it is, because for target values the last one that comes in wins. For providing current values back from a sensor/actuator there should be only one instance/provider.
// Reject an update whose timestamp is older than the stored datapoint.
if self.datapoint.ts > *timestamp {
    return Err(UpdateError::TimestampTooOld);
}
if let Some(target) = &self.actuator_target {
What's the use case for providing a timestamp when setting an actuator target in the first place?
My opinion is that setting an actuator is akin to calling a function. If it makes sense for clients to be able to read the timestamp of when this happened, I'm pretty sure this should always be the responsibility of databroker to set it.
So I would suggest:
- Remove the validation of actuator target timestamp.
- Remove the possibility of setting actuator target timestamps in the API (and always have databroker do it internally).
So, remove the ability to set timestamps for actuator targets completely from the API, or just ignore incoming timestamps for actuator targets?
I would remove it, unless someone can come up with a reason why it would make sense for a client to provide a timestamp.
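For illustration, a minimal sketch of that suggestion using simplified stand-in types (Entry, Datapoint and set_actuator_target here are made up for the example, not the actual databroker API): the set call accepts no timestamp at all, and databroker stamps the target internally.

use std::time::SystemTime;

// Simplified stand-ins for illustration only.
#[derive(Debug, Clone)]
struct Datapoint {
    ts: SystemTime,
    value: f64,
}

#[derive(Debug, Default)]
struct Entry {
    actuator_target: Option<Datapoint>,
}

impl Entry {
    // Setting an actuator target takes no timestamp from the client;
    // databroker always assigns the time of the request itself.
    fn set_actuator_target(&mut self, value: f64) {
        self.actuator_target = Some(Datapoint {
            ts: SystemTime::now(),
            value,
        });
    }
}

fn main() {
    let mut entry = Entry::default();
    entry.set_actuator_target(42.0);
    println!("{:?}", entry.actuator_target);
}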
This is maybe more of a general comment, but if we DO allow data providers to explicitly set a timestamp on set, what is then more likely to cause problems:
- We always update the timestamp
- We do this check
So I am wondering what does more damage? One radical "solution" would be to never accept external timestamps and, as databroker is sort of a centralized system, always tag times ourselves. Does this have obvious disadvantages for us?
Valid point @SebastianSchildt, but then the timestamp should rather be handled internally than exposed through the API.
Yes, maybe. But as it is now, maybe just warn when an older timestamp comes in, yet still take it? A bit like some build systems do when they figure out that some "old" artefacts have been built in the future, like make.
Maybe that would be the more "robust" approach? Although I do see the report clearly "recommends" the check, I'm just not sure we would agree?
That was my immediate thought as well. It can be solved by adding a validation step that checks that a provided timestamp is not (too long) into the future? And if it is, just set it to "now". The question is if we even need providers to provide timestamps at all in that case?
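A small sketch of that validation step, assuming an arbitrary one-second tolerance (the function name and the constant are made up for the example): a timestamp too far in the future is replaced by the current time, everything else is accepted as-is.

use std::time::{Duration, SystemTime};

// Illustrative tolerance; there is no obviously "right" value.
const MAX_FUTURE: Duration = Duration::from_secs(1);

fn sanitize_timestamp(provided: SystemTime) -> SystemTime {
    let now = SystemTime::now();
    match provided.duration_since(now) {
        // Further than MAX_FUTURE ahead of the local clock: fall back to "now".
        Ok(ahead) if ahead > MAX_FUTURE => now,
        // Slightly ahead, equal, or in the past: keep the provided timestamp.
        _ => provided,
    }
}

fn main() {
    let far_future = SystemTime::now() + Duration::from_secs(3600);
    println!("{:?}", sanitize_timestamp(far_future)); // clamped to "now"
    let past = SystemTime::now() - Duration::from_secs(10);
    println!("{:?}", sanitize_timestamp(past)); // accepted unchanged
}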
What John suggested can be implemented. I'm just wondering whether we are able to emit a warning. For a SetResponse we can only send back errors. So would we throw an error or just not inform the client?
So I think we have two cases:
In no case is it a good idea to have some "not too far in the future/past" kind of checks, because there never is a good rationale for why a specific value was chosen. In terms of "attacking"/deliberately putting in wrong timestamps, let's agree that we have much larger problems in the system than can be fixed by "clever timestamp handling in databroker". I would suggest doing 1. and 2. Why? Because as a networking guy by education I can say that "packets overtaking others, leading to reordering" is a bit of a phantom and doesn't happen nearly as often as it is used as an argument, especially in a local network such as a car, with a TCP-based protocol. On the other hand, the ECUs in a car are not necessarily very time-synchronized. You may have some AD subsystem synced with µs precision (but not necessarily to the "absolute" real time), whereas other systems don't care much. In a way, therefore, such systems choose option 1 anyway, but if e.g. a connectivity unit seems to jump through time based on some NTP sync, well, we should not panic databroker-wise, and just accept the timestamps provided by it as correct from the point of view of the signal source.
Or in other words:
I generally agree with that, but given that some clients might rely on "sane" timestamps, I'm starting to think that we perhaps need to treat provider-provided timestamps differently. Perhaps it would make sense to have databroker always set the timestamp, but make any provider-provided timestamp available as a different field (for clients that would find that useful). In addition, we could also update the provider API to offer a way to set an "offset from
Concerning letting the databroker always set the timestamp - I think we need to discuss what use-cases we want to support in the future. When everything is inside the vehicle and reflects "now" it should not matter that much whether the client is allowed to set the time or not. But I could see some use-cases where it makes sense to let the client set the time.
Good summary 😃 maybe with the exception of logging timestamps that seem to go "backwards", to hint at a potential problem. @erikbosch my understanding is that currently we do support such use cases (i.e. a client can set the timestamp if it wishes to), so that would also be safe if we do not touch anything.
So the conclusion is that we will not discard any signal value; at most we may give warnings. That is totally OK for me, but we should preferably document the behavior somewhere.
Maybe as part of some "architecture page" somewhere.
Sure, but I'm still not sure providers should be able to set the timestamp directly. It seems more robust to instead make that information (if provided) available in a separate field, something like the sketch below.
That means the timestamp used by databroker would always be one it set itself. And it would preserve the timestamp provided by the provider, so even if their clock is way off, comparisons between subsequent values from the same provider would still make sense. We don't even have to change the "provider interface" for this.
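A minimal sketch of what that separate field could look like (the struct layout and the name source_ts are assumptions for illustration, not the actual databroker types): ts is always stamped by databroker when the value arrives, while whatever timestamp the provider sent is preserved next to it.

use std::time::SystemTime;

#[derive(Debug, Clone)]
struct Datapoint {
    // Always assigned by databroker when the update is received.
    ts: SystemTime,
    // Optional timestamp as reported by the provider, kept as-is
    // even if the provider's clock is off.
    source_ts: Option<SystemTime>,
    value: f64,
}

impl Datapoint {
    fn new(value: f64, provider_ts: Option<SystemTime>) -> Self {
        Datapoint {
            ts: SystemTime::now(),
            source_ts: provider_ts,
            value,
        }
    }
}

fn main() {
    // A provider with a badly skewed clock still gets its view of time preserved.
    let dp = Datapoint::new(13.4, Some(SystemTime::UNIX_EPOCH));
    println!("ts={:?} source_ts={:?} value={}", dp.ts, dp.source_ts, dp.value);
}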
Branch updated from 6f01020 to f0b4114.
Added source timestamp in databroker code.
Tested with this that the timestamp is now a SystemTime. Currently the client does not get back the source timestamp since this would need an API change.
kuksa_databroker/databroker/src/grpc/sdv_databroker_v1/conversions.rs (outdated, resolved)
@lukasmittag - some build errors, can you take a look at them?
Branch updated from a76633b to fdc61db.
Branch updated from fdc61db to 28dfc86.
Will add a documentation file shortly.
This adds a check in validate of Entry to validate timestamps. This means if a timestamp is smaller than the saved one it throws an UpdateError. If no timestamp is provided the databroker uses SystemTime::now() for the timestamp. Because of this behavior we check whether the saved timestamp is greater than the provided timestamp. Equal timestamp values are allowed.
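A condensed sketch of the behaviour described above, using simplified stand-in types (the real validate in databroker has a different signature): a missing timestamp defaults to SystemTime::now(), a timestamp strictly older than the stored one is rejected with UpdateError::TimestampTooOld, and equal timestamps pass.

use std::time::SystemTime;

#[derive(Debug)]
enum UpdateError {
    TimestampTooOld,
}

// Simplified stand-ins for the stored entry.
struct Datapoint {
    ts: SystemTime,
}

struct Entry {
    datapoint: Datapoint,
}

impl Entry {
    // Validate an incoming timestamp against the stored datapoint.
    fn validate(&self, new_ts: Option<SystemTime>) -> Result<SystemTime, UpdateError> {
        // No timestamp provided: databroker stamps the update itself.
        let timestamp = new_ts.unwrap_or_else(SystemTime::now);
        // Strictly older than the saved timestamp -> reject; equal is allowed.
        if self.datapoint.ts > timestamp {
            return Err(UpdateError::TimestampTooOld);
        }
        Ok(timestamp)
    }
}

fn main() {
    let entry = Entry {
        datapoint: Datapoint { ts: SystemTime::now() },
    };
    // No timestamp: accepted, stamped with "now".
    println!("{:?}", entry.validate(None));
    // A timestamp at the Unix epoch is older than the stored one: rejected.
    println!("{:?}", entry.validate(Some(SystemTime::UNIX_EPOCH)));
}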