System DT Meeting Notes 2019
Agenda:
- Logistics: When to meet at Linaro Connect? When is a good time to have regular meetings, how often, other people to involve, ... - Tomas/Nathalie
- Quick recap of the problems that System-DTs are solving - Tomas
- Proposed additions to specification to describe multiple CPU clusters - Stefano
- Proposed conventions to allocate resources to domains - Stefano
- Update on Lopper tool - Bruce
Attendees: Arnd Bergmann, Arvind (Mentor), Benjamin Gaignard (ST), Bill Fletcher (Linaro), Bill Mills (TI), Bruce Ashfield, Clément Léger (Kalray), Dan Driscoll, Danut Milea, Ed Mooring, Etsam Anjum (MGC), Felix, Iliad, Jan Kiszka, Jeremy Gilbert (Qualcomm), Joakim Bech, Loic Pallardy (ST), Maarten Koning, Manju Kumar, Nathalie Chan King Choy, Stefano Stabellini, Steve McIntyre, Suman Anna (TI), Tomas Evensen, Tony McDowell (Xilinx), Trilok Soni
Recording: https://zoom.us/recording/share/QWTX-3S8Sq-PlKeBj6pat3EY-l4WzM0Fd7o4FfX0MVuwIumekTziMw
- Tomas: Challenge with System DT is that it's cross-functional, so there's no single mailing list to discuss it on
- Want to start with some live discussions
- Will formalize proposal & take to mailing lists
- Anyone opposed to recording: No. Recording starts here.
- Intros by attendees
- Tomas: Quick intro on System DT
- Background:
- SoCs very heterogeneous w/ multiple CPU clusters
- System SW needs info about the system (registers in memory map, topologies, DDR allocation, etc.)
- Different personas involved
- HW architect
- System architect
- Info for each operating system
- Config info comes in at different times: Sometimes the config info is compiled in, sometimes loaded at boot, etc.
- Not just multiple CPUs
- other accelerators may need memory
- Execution levels
- Trustzone
- Today, how this is specified is ad hoc. Want 1 true source & tooling that can get the info to the right place.
- A lot of RTOS vendors have started doing work with Device Trees, so it's a natural source
- Verification is also a really important part of it
- At Xilinx, had an XML file & tooling to give the info to Linux, baremetal, FreeRTOS
- Not standardized
- Need an industry standard. Device Tree is almost there.
- DT looks at 1 address space, what Linux sees or what Xen sees.
- Need to expand DT -> System DT
- It should do 2 things:
- Describe HW for all the different masters, or CPU clusters
- Need to add some things into devicetree.org spec & tooling
- Describe AMP config info: what goes where & in which domain
- Suggesting "chosen" section
- Want to be compatible with hypervisors. (Stefano is a Xen maintainer, so it makes sense to align)
- Lopper creates traditional DT based on System DT
- So, we don't have to change the DT that Linux or U-Boot sees
- Maybe longer term, Linux may want to know about System DT
- System DT data flow (RTOS & Linux) diagram showing different personas & sources of info
- Can add new nodes, add attributes to a node, suppress things
- System architect info can be added as a snippet
- Want to change 1 part of system without editing everyone else's files, but could still be merged together later
- Showing some possible flows for RTOS/baremetal & Linux
- Example: System DT + domain info from system architect -> Linux DT using Lopper (a hypothetical snippet is sketched below)
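To make the snippet idea concrete, here is a minimal sketch of a DT fragment a system architect could merge into the System DT before Lopper runs. Everything in it (the shm_a53_r5 label, the "shm" compatible, the addresses) is invented for illustration, not taken from the proposal itself:

```dts
/* Hypothetical system-architect snippet: adds a node to the System DT
 * without editing anyone else's files; merged in before Lopper runs. */
&amba {
	/* illustrative shared-memory region for A53 <-> R5 communication */
	shm_a53_r5: shm@3ed00000 {
		compatible = "shm";	/* invented compatible, sketch only */
		reg = <0x3ed00000 0x100000>;
	};
};
```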
- Flow diagram for Xen & QEMU
- Intent to get it to work with System DT in future
- What is shown with QEMU is Xilinx QEMU. Upstream QEMU does this differently. Trying to get the Xilinx method upstream.
- Domains & resource groups
- Is what you are sharing intended to be shared?
- Questions for Tomas about the vision?
- No questions
- Tomas: We found a lot of people have solved similar problems, which makes it interesting
- Mentor has some tooling & syntax to specify allocations. Ahead of where we are in terms of the problems encountered. Looking forward to your feedback.
- Wind River using DT for VxWorks in different way
- How do you distinguish between inside TrustZone & outside TrustZone?
- Really want this to be a universal solution as much as possible
- Face to Face at Linaro Connect
- Will have possibility for people to call in
- Wed afternoon or Thur morning
- Steve: Suggest Wed 3:30pm PDT for System DT but OK to move
- Who would be calling in from other time zones? Will have Europe
- The morning has the boot session from LEDGE, maybe could swap? Wed morning is kernel & people need to be there.
- 9am-10:30am PDT Thur morning looks most promising. 16:00 UTC
- Steve will try to make these show up on public schedule
- Perhaps ~30 people will attend
- Stefano: Representative example (not complete) of System DT that would be input to Lopper
- Output of Lopper could be used by Linux, or RTOS
- 1st set of changes: How to represent CPU clusters
- DT mandates only 1 CPUs node, called "cpus"
- We want to represent A53 cluster & R5 cluster, which have different architectures
- Want to introduce concept of CPU cluster -> small change to spec, but deep impact to DT
- Keep in mind: Different CPU clusters can have different memory maps
- Example on Xilinx board where A53 memory map is different from R5 memory map: e.g. tcm_bus region
- Available at 2 addresses for R5 cluster (low address and high address)
- Purpose is to show different memory map for 2 different CPU clusters
- ranges-map shows AMBA accessible 1:1 & the TCM bus mapping in A53; in R5 there is an additional entry for the TCM
- These are the changes we need so that System DT can represent multiple heterogeneous CPU clusters (a sketch follows below)
- Nothing about configuration shown here
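To give a feel for the cluster idea, here is a minimal sketch of two heterogeneous CPU cluster nodes with different memory maps. This is an assumption-laden illustration: the cpus-cluster node names, the "cpus,cluster" compatible, the ranges-map cell layout, and all addresses are invented for this sketch, not a ratified binding:

```dts
/dts-v1/;

/* Minimal sketch only: node names, compatibles, cell layouts & addresses
 * are illustrative, not a ratified binding. */
/ {
	#address-cells = <1>;
	#size-cells = <1>;

	amba: amba {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;
	};

	tcm_bus: tcm {
		compatible = "simple-bus";
		#address-cells = <1>;
		#size-cells = <1>;
	};

	cpus_a53: cpus-cluster@0 {
		compatible = "cpus,cluster";	/* hypothetical compatible */
		#address-cells = <1>;
		#size-cells = <0>;
		/* A53 memory map: AMBA 1:1, TCM only at its high address */
		ranges-map = <0x40000000 &amba 0x40000000 0xb0000000>,
			     <0xffe00000 &tcm_bus 0x0 0x40000>;

		cpu@0 {
			compatible = "arm,cortex-a53";
			reg = <0>;
		};
	};

	cpus_r5: cpus-cluster@1 {
		compatible = "cpus,cluster";	/* hypothetical compatible */
		#address-cells = <1>;
		#size-cells = <0>;
		/* R5 memory map: same as A53, plus an additional entry
		 * making the TCM available at a low address as well */
		ranges-map = <0x40000000 &amba 0x40000000 0xb0000000>,
			     <0xffe00000 &tcm_bus 0x0 0x40000>,
			     <0x0 &tcm_bus 0x0 0x40000>;

		cpu@0 {
			compatible = "arm,cortex-r5";
			reg = <0>;
		};
	};
};
```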
- Domain section under "chosen": How to explain configuration (a sketch of a domain node follows this list)
- Could be changed in SW or firmware
- What is configured to run where
- What privilege level on CPUs
- What shared memory we will have between different domains (area where 1 set of SW runs, e.g. R5 domain vs. A53 domain)
- We have a memory property that shows the address & size of the RAM region accessible from this domain
- CPU attribute is interesting b/c it tells us where this domain is running (e.g. R5 cluster, & the CPU mask shows which CPUs are used to run this domain)
- An architecture-specific number can represent an architecture-specific config of the CPUs
- e.g. ARM R5 can be lockstep or not lockstep and secure mode or normal mode
- The other key part is the "access" property
- Expresses, in a list of links, what is accessible from this domain
- On Xilinx & other vendor boards, there is possibility to enforce isolation
- This expresses what this OpenAMP domain can access
- Example: Showing normal memory, TCM bus, reserved memory region for communicating with A53; IPI is a Xilinx HW block for communication between the A53 & R5 clusters
- Flag: provide extra info
- e.g. mailbox: Which channel, TX or RX
- Each vendor could express vendor-specific meaning for flag to say how mapping should be done
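Pulling the memory, cpus, access, and flag ideas together, a minimal sketch of a domain node might look like the following. The compatible string, cell layouts, mask and flag values, and the shm_a53_r5 & ipi labels are assumptions for illustration (shm_a53_r5 is the hypothetical shared-memory node from the snippet sketch earlier; ipi stands in for the Xilinx IPI block):

```dts
/* Illustrative domain under /chosen; names, layouts & values assumed. */
chosen {
	domains {
		openamp_r5: openamp_r5 {
			compatible = "openamp,domain-v1";	/* hypothetical */
			/* RAM accessible from this domain: <address size> */
			memory = <0x0 0x8000000>;
			/* where the domain runs: <cluster cpu-mask arch-config>;
			 * mask 0x3 = both R5 cores; the architecture-specific
			 * config word could encode e.g. lockstep & secure mode */
			cpus = <&cpus_r5 0x3 0x1>;
			/* what the domain may access: <node flags> pairs; the
			 * flag word carries vendor-specific info, e.g. which
			 * mailbox channel & whether it is TX or RX */
			access = <&tcm_bus 0x0>,
				 <&shm_a53_r5 0x0>,
				 <&ipi 0x1>;
		};
	};
};
```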
- Could then be translated into the remoteproc configuration used by Linux. Conceptually maps to the info we saw before. Could be generated by Lopper from the System DT (a sketch follows below).
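For instance, Lopper might emit something along the following lines for Linux. This sketch is modeled loosely on the Xilinx ZynqMP R5 remoteproc binding of that era; the property values, addresses, and the ipi_mailbox_rpu0 label are assumptions, not confirmed output of the tool:

```dts
/* Sketch of possible Lopper output for Linux; binding details assumed. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* carveout derived from the domain's shared-memory region */
	rproc_0_reserved: rproc@3ed00000 {
		no-map;
		reg = <0x0 0x3ed00000 0x0 0x100000>;
	};
};

zynqmp_rpu: rpu {
	compatible = "xlnx,zynqmp-r5-remoteproc-1.0";	/* vendor binding */
	core_conf = "split";	/* from the domain's arch-specific config */

	r5_0: r5@0 {
		memory-region = <&rproc_0_reserved>;
		/* mailbox wiring derived from the IPI access entry & flags */
		mboxes = <&ipi_mailbox_rpu0 0>, <&ipi_mailbox_rpu0 1>;
		mbox-names = "tx", "rx";
	};
};
```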
- Stefano has been getting inputs from different groups & starting to reach out wider.