sensors: Update to the Sensors V2 decoder API #60944
Comments
Update the decoder APIs to vertically decode the raw sensor data. This means that instead of getting each channel on a frame by frame basis, the API takes a channel specifier and returns as many frames of that given channel as there's room. The goal of this is to make the end decoded result the most useful and usable for the Zephyr application. See zephyrproject-rtos#60944 for details.

Signed-off-by: Yuval Peress <[email protected]>
So I think there might be some assumptions being made here that will cause issues for some sensors. Consider something like an ADC with an array of channels all reporting the same meaningful data (voltage). There are many sensors like this for light, ToF, sound, capacitance, and more. Is the ability to select channel N of type X accounted for in this setup? Can you still partially get some number of rows of a given channel index and type, e.g. fetch 5 rows, then another 5, then another 5, and so on? I still need to look at the implementation you have started, @yperess, but I recall gaming out the current API quite a bit, so I think it's worth redoing that exercise here.
@teburd can you clarify (I'm sorry if I'm missing something obvious)? In the current setup the columns are channels and the rows are frames, right? So we have a 2D table of samples. In your example you mentioned an array of channels reporting the same data. Now, the current proposal is a bit out there, but I think it's a step in the right direction. We basically merge channels into sensor types (right now just for accel/gyro/mag/pos, but others can do the same):
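A minimal sketch of what such a merged three-axis type could look like; the struct and field names here are illustrative, not the actual proposal:

```c
#include <stdint.h>

/* Illustrative only: one merged XYZ sample type instead of three separate
 * SENSOR_CHAN_ACCEL_X/Y/Z channel reads; Q31 samples share a single shift.
 */
struct sensor_three_axis_sample {
	uint64_t timestamp_ns; /* time of the sample */
	int8_t shift;          /* Q31 shift applied to all three axes */
	int32_t xyz[3];        /* Q31 values for X, Y, Z */
};
```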
In the ADC case that's up to the authors of those drivers. I'm neutral, with a slight bias toward keeping them as 1D data. In your example, let's say the ADC gives 3 frames of voltage samples across its channels.
If someone wants to create a new channel type that combines them, they can. Now the catch: I think that, long term, channels should start becoming sensor types. Effectively, once you map accel x/y/z to the single XYZ type and represent those values as a single struct, the individual channels aren't needed anymore and could possibly be deprecated. We've toyed with the idea of just introducing the sensor type enum and leaving channels alone, but I worry about the divergence of the two.
In the current API, though, multiple channels can have the same type, e.g. TEMP or VOLTAGE. The type is provided back as you iterate over the channels. In theory a frame could have 10 temperature readings, and you'd get back 10 samples from the frame with the type TEMP (ambient, or whatever). By changing this to not allow multiple channels of the same type, you are going to run into issues with things like multi-channel temp, light, voltage, current, sound, ToF, and capacitance sensors. See #1387, and note in the comment by @microbuilder that multiple channels of the same type need to be supported. I believe that's supported by the current decoder API, unless I missed something!
Can you clarify: does that mean the same frame can have 2+ instances of a single channel (for example, multiple voltage readings in one frame)?
Yes, exactly. The https://www.ti.com/product/FDC1004, for example, measures 4 channels of capacitance; how would I support this in Zephyr? (I had a product I was prototyping with this exact chip.) This is just one of many sensor devices that support arrays of readings of the same type. Today I'd have to add custom channel types CAP1, CAP2, CAP3, CAP4, etc. This has been an issue noted and repeated many times; #13718 and #1387 both mention it, and #60833 is the result of not having a channel index selector or identifier available. The current decoder API sort of accounts for it: as you iterate over the channels in a frame, you can get back the same channel type repeatedly, and that's allowable. It might be nicer if it also returned the index. Switching to a vertical decoder is fine, I think, but the index then likely needs to be given as a parameter.
Thanks @teburd, that clarifies. So my question for those sensors is: does the index always mean the same thing, or is it like devicetree, where the ordinal is decided at compile time and may change on a build-to-build basis? If it's a hard rule that CAP1, CAP2, ..., CAP_N always mean the same thing, then this is a simple extra parameter:

```c
int (*decode)(
	const uint8_t *buffer,
	enum sensor_channel channel,
	uint8_t channel_index, /**< New: used with the channel to form the unique column to decode. */
	sensor_frame_iterator_t *fit,
	uint8_t *out
);
```
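For illustration, a hypothetical call site using that extra parameter; the decoder handle, buffer, channel choice, and return convention are all assumptions rather than the real API:

```c
/* Hypothetical usage: decode every frame of the third voltage channel,
 * one sample per call, assuming decode() returns the number of samples
 * written (0 once the buffer is exhausted, negative on error).
 */
sensor_frame_iterator_t fit = {0};
int32_t sample; /* Q31 */

while (decoder->decode(raw_buf, SENSOR_CHAN_VOLTAGE, /* channel_index */ 2,
		       &fit, (uint8_t *)&sample) > 0) {
	/* consume sample */
}
```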
Yeah exactly, it’s a simple extra parameter. We don’t need to do anything fancy here imo. In effect, the tuple (device, type, type_index) is the selector.
@tristan-google can we close this since #61022 merged?
Yep, can close. |
Objective
Adjust the Sensors V2 decoder API to use a more natural representation of the data and make it suitable for re-use with the proposed Sensor Subsystem.
Background
The recently-merged Sensor V2 asynchronous API (referred to as SensorsV2 in this RFC) introduced a decoder API, implemented by each supported sensor, that processes a buffer of raw sensor register data received through an RTIO transaction and outputs a fixed-point sensor reading for the application to use. Data returned by the decoder API is frame-oriented. That is, samples for all channels in a frame are read out sequentially before advancing to the next frame in the buffer.
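As a concrete sketch of that ordering (identifiers here are illustrative, not the literal API):

```c
/* Frame-oriented readout: every channel of frame 0 is decoded before any
 * channel of frame 1, and so on through the buffer.
 */
for (uint16_t frame = 0; frame < frame_count; frame++) {
	for (size_t i = 0; i < num_channels; i++) {
		decode_sample(buffer, frame, channels[i]); /* hypothetical helper */
	}
}
```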
The above arrangement was optimized for cases where an application is directly reading data off a physical sensor. However, under the proposed Sensor Subsystem and CHRE, after which it is modeled, the definition of a "sensor" is broadened: producers of data that the Subsystem distributes to consumers need not be a physical sensor chip, but could also be a virtual sensor data source. One example is a lid-angle virtual sensor that streams samples from two accelerometers located in the base and screen of a laptop, computes the vector angle between the two, and reports a value in radians as its output.
The Subsystem supports “chaining” these sensor types arbitrarily (although a physical sensor can never be a consumer). In the interest of using a harmonized API across both physical and virtual sensors, we propose making two modifications to the existing SensorsV2 decoder API so that it can be adopted in both places:
API Changes
See `struct sensor_decoder_api` in `sensor.h` for the current version.

Keep: get_frame_count
This function will remain unchanged. It determines how many sample frames exist in a given RTIO buffer.
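For reference, the existing call has roughly this shape:

```c
/* Unchanged: reports how many sample frames are packed into a raw RTIO
 * buffer; returns 0 on success or a negative errno.
 */
int (*get_frame_count)(const uint8_t *buffer, uint16_t *frame_count);
```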
Remove: get_timestamp

This function is no longer needed and will be removed. The timestamp will be included in the struct output by decode.
Remove: get_shift

This function is no longer needed and will be removed. The shift value will be included in the struct output by decode.
Modify: decode
As it does today, the decode function reads a buffer obtained from a sensor and extracts usable sensor samples from an internal representation. In the case of physical sensors, that is the raw register representation being translated into Q31 fixed-point values in SI units. Custom virtual sensors are free to format data as they see fit.
However, the decode function’s interface has been adjusted to better allow for channel-by-channel (vertical) reads. There is no longer a channel iterator; instead, a caller requests a specific channel through the ch parameter and a maximum number of samples to extract (max_count). The implementation writes samples and metadata to data_out using the appropriate struct type from below. These structs are modeled after CHRE: struct sensor_data_1d is for one-dimensional values and struct sensor_data_3d is for XYZ values. Occurrences (trigger events) use just the header (struct sensor_header), as they have no associated sample data. Custom virtual sensors that don’t fit any of these can declare their own (it will be up to the driver implementation and application). After reading, the frame iterator is updated so subsequent reads pick up where the previous read left off.
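A sketch of how the adjusted prototype could look, given the parameters named above (ch, max_count, data_out); the exact signature is not final and the types here are assumptions:

```c
/* Sketch only: a vertical read. Decodes up to max_count samples of one
 * channel into data_out and advances the frame iterator past them.
 * Returns the number of samples written, or a negative errno.
 */
int (*decode)(const uint8_t *buffer,
	      enum sensor_channel ch,
	      sensor_frame_iterator_t *fit,
	      uint16_t max_count,
	      void *data_out);
```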
Proposed Types
The precise names of these types are not final and will be properly namespaced.
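Based on the description above (timestamp and shift move into the output, samples are Q31), the types could look roughly like this; field names and widths are assumptions, modeled loosely on CHRE:

```c
#include <stdint.h>

struct sensor_header {
	uint64_t timestamp_ns; /* timestamp of the first sample (assumed units) */
	int8_t shift;          /* Q31 shift shared by every sample in this read */
	uint16_t count;        /* number of samples that follow */
};

struct sensor_data_1d {
	struct sensor_header header;
	int32_t values[];      /* one Q31 value per sample */
};

struct sensor_data_3d {
	struct sensor_header header;
	int32_t values[][3];   /* Q31 X, Y, Z per sample */
};
```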
Add: get_size_info
This new API function helps the application determine how much memory to allocate to decode sensor data into. For a given sensor type (accel, gyro, temp, etc…), this function will report the base number of bytes needed to store the initial output struct (header plus one sample) as well as the incremental size needed for each additional sample. The return value is a status code.
A default implementation of this function will be provided that covers all of Zephyr’s standard sensor channel types, but physical or virtual sensor drivers are welcome to export their own implementation.
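A sketch of the shape this could take, assuming the sizes come back through out-parameters; the prototype is illustrative:

```c
/* Sketch: report the base allocation (header plus one sample) and the
 * increment for each additional sample of a given channel type.
 * Returns 0 on success or a negative errno.
 */
int (*get_size_info)(enum sensor_channel channel,
		     size_t *base_size,
		     size_t *frame_size);
```

An application holding n samples would then allocate base_size + (n - 1) * frame_size bytes.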
Impact on Existing Code
While this affects an API that is live within Zephyr, it is only used in a small number of places contributed by Google and is currently considered experimental, in case the API evolves before being ready for mainstream use. Google would take care of migrating existing upstream usages if this proposal is accepted.
Alternatives Considered
The alternative is to leave the existing decoder API in place for physical sensor drivers and adopt a fresh API used by virtual/user-space sensors within the Sensor Subsystem. However, this new API would be very similar to the existing API. In the interest of re-use, we find it worthwhile to broaden the existing API while it is still in early stages and to harmonize across all sensors, virtual and physical, so they are interchangeable as far as the Sensor Subsystem is concerned.