2019_10_25
Pre-Meeting Agenda
- project roundup
  - data products
    - Automated data archival workflow support (fermilab-accelerator-ai/workflow #2)
    - Need to define formatting code (fermilab-accelerator-ai/workflow #1)
  - low latency infrastructure
    - Low latency data and the AD modernization effort (fermilab-accelerator-ai/meetings #9)
  - board deployment
    - where are we? ... Get ADCs from the board (fermilab-accelerator-ai/meetings #19)
  - algorithm development
    - where are we? ... TLG in the surrogate model? (fermilab-accelerator-ai/meetings #21)
  - ml algs on fpgas
    - Understand execution speed on Intel boards for HLS4ML (fermilab-accelerator-ai/meetings #25)
- follow up
  - zero gain runs
  - ferry on-boarding
Introductions:
- Joined by Aisha Ibrahim from AD's engineering effort: using ML to manage the slow extraction from the Delivery Ring (which stores the proton beam and extracts it to send to the Mu2e production target)
(Brian) Added the ComEd line frequency B:LINFRQ to the MLrn node and data. (Jason) Data has been collected from Oct. 15 onward, including the zero-gain runs of Oct. 15, 2019, 1:00 pm to 3:45 pm.
(Bill) Board development: Arria X. A new layout with corrections (final revision) should be ready next week (Glen is drafting it; Aisha asked whether Nexus will be exploited to keep commonalities between designs). Bill: there is interplay with PIP-II; the forthcoming board will have the I/O capacity to write out the conditions it is seeing. Turnaround is roughly one month.
Bill: A new computer (name?) with more memory is hosted in the AD computer room (across from the MCR). Brian and Bill have access rights so far.
For algorithm development: what is a good, long-term solution for making the training data available so the PNNL team can use it to develop algorithms? Would a cron-job rsync be too fragile? (A minimal sketch of that option follows.)
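As a point of reference for that question, here is a minimal sketch of a cron-driven sync, wrapped in Python so that failures are logged instead of silently dropped. The source directory, destination host, and log path are all placeholders, not actual project locations.

```python
# Hypothetical cron-driven sync wrapper; paths and hosts are placeholders.
# Intended to be invoked periodically from cron, e.g. hourly.
import logging
import subprocess
import sys

SRC = "/data/mlrn/training/"        # hypothetical source directory
DEST = "pnnl-host:/archive/mlrn/"   # hypothetical destination
LOG = "/var/log/mlrn_sync.log"      # hypothetical log file

logging.basicConfig(filename=LOG, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def sync():
    # --archive preserves timestamps/permissions; --partial lets an
    # interrupted transfer resume rather than restart from scratch.
    result = subprocess.run(
        ["rsync", "--archive", "--partial", "--compress", SRC, DEST],
        capture_output=True, text=True)
    if result.returncode != 0:
        logging.error("rsync failed (%d): %s", result.returncode, result.stderr)
        sys.exit(result.returncode)
    logging.info("sync completed")

if __name__ == "__main__":
    sync()
```

Whether this is "too fragile" mostly comes down to monitoring: a plain cron + rsync setup works, but it needs something watching the log (or the exit code) so a silently failing transfer gets noticed.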
Christian: Has been working to see how large a model the Intel tools can handle: a 5-layer, fully connected network with 100, 200, or 500 nodes in the intermediate layers. The 500-node version choked quickly; the 100-node version was fine after about a day, and the 200-node version also worked (it took a few days). The design seems to be filling up the chip with LUTs. Follow-ups: this inherited code is not automatically hooked up to evaluate models from Keras, but we want to run them to evaluate the effectiveness of the implemented model.
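For context, a model of roughly that shape is only a few lines of Keras; the sketch below is illustrative, not the actual model. The input/output sizes, activation, and the exact reading of "5 layers" are assumptions; the intermediate width is the knob swept over 100, 200, and 500.

```python
# Illustrative sketch of the fully-connected test model described above.
# n_inputs/n_outputs and the activation are placeholders, not from the notes.
from tensorflow import keras
from tensorflow.keras import layers

def make_dense_model(n_inputs=16, n_hidden=200, n_outputs=1, n_layers=5):
    """Fully connected network; n_hidden is swept over 100 / 200 / 500."""
    inputs = keras.Input(shape=(n_inputs,))
    x = inputs
    for _ in range(n_layers):
        x = layers.Dense(n_hidden, activation="relu")(x)
    outputs = layers.Dense(n_outputs)(x)
    return keras.Model(inputs, outputs)

model = make_dense_model(n_hidden=100)
model.summary()  # inspect parameter counts before attempting HLS synthesis
```

The parameter count grows roughly with the square of the intermediate width, which is consistent with the 500-node variant exhausting LUTs well before the 100- and 200-node ones.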
Also looking at how resource usage scales for a simple 1-layer model.