webinars: Link to Alice / Open Architecture call for contribution #47
Nice to meet you all! Looking forward to collaborating.
We think of an entity (Alice is our reference entity) as being in a set of parallel conscious states with context-aware activation. Each context ideally forms a chain of system contexts, or train of thought, by always maintaining provenance information (SCITT, GUAC). In the existing implementation she thinks concurrently and is defined mostly using the Open Architecture, which is language agnostic and focused on defining parallel/concurrent flows, trust boundaries, and policy. The current orchestration is executed via Python, but it is intended to be implementable in whatever language is desired.
Alice doesn't use any machine learning yet, but later we can add models to assist with automation of flows as needed.
Alice's architecture, the Open Architecture, is based around thought. She communicates thoughts to us at whatever level of detail, viewed through whatever lens one wishes. She explores trains of thought and responds based on triggers and deadlines. She thinks in graphs, aka trains of thought, aka chains of system contexts. She operates in parallel, allowing her to represent N different entities.
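To make the "chain of system contexts" idea concrete, here is a minimal sketch in Python. The class and field names (`SystemContext`, `provenance`) are illustrative assumptions, not the project's actual API; the point is only that each thought links back to its parent while carrying provenance metadata (e.g. SCITT/GUAC style references).

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch: a system context node linking to its parent,
# forming a chain (train of thought) with provenance metadata attached.
@dataclass
class SystemContext:
    description: str
    provenance: dict  # e.g. references to SCITT claims or GUAC attestations
    parent: Optional["SystemContext"] = None

    def chain(self) -> List[str]:
        """Walk back to the root context, returning oldest thought first."""
        node, out = self, []
        while node is not None:
            out.append(node.description)
            node = node.parent
        return list(reversed(out))

root = SystemContext("analyze repo", {"scitt": "claim-1"})
child = SystemContext("run static analysis", {"scitt": "claim-2"}, parent=root)
print(child.chain())  # ['analyze repo', 'run static analysis']
```

Parallel trains of thought would then simply be many such chains held and evaluated concurrently.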
Rolling Alice: A Current Table Of Contents
Elevator Pitch
We are writing a tutorial for an open source project on how we build an AI to work on the open source project as if she were a remote developer. Bit of a self-fulfilling prophecy, but who doesn't love an infinite loop now and again? These are the draft plans: https://github.com/intel/dffml/blob/alice/docs/tutorials/rolling_alice/
First draft: intel/dffml#1369 (comment). Essentially we are going to be using web3 (DID, DWN), KCP (Kubernetes API server), provenance and attestation, and AutoML with feature engineering for a distributed data, analysis, and control loop. We'll grow contributors into mentors, and mentors into maintainers, and Alice will grow along with us.
Alice is Here and Ready for Contribution! Initial Announcement
We [the DFFML community] are building a tutorial series in which we collaboratively build an AI software architect named Alice. These docs https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice/ are our attempt to get some initial thoughts down so we can rework from there, maybe even re-write everything. We want to make sure we all start looking at the same picture of the future, and consolidate our efforts and thoughts thus far across efforts and individuals.
One of our goals is to have Alice bring us up to the speed of the fully connected development model, to plug into the matrix. By working side by side with us, Alice will hopefully be able to detect when we've done things others on the team have already done and determined to be unsuccessful strategies. If Alice can do this, we can all work independently while communicating asynchronously via Alice (our hopefully soon-to-be new contributor). We will work to make her detect in-flight workstreams within a developer's train of thought through that developer reporting what they are working on. Developer: "Alice, I'm working on getting an HTTP request from Bob's service." Alice: "Developer, you may not want to do that. Bob's service is down; he told me he is working on fixing it. I can tell you when it's back online if you want."
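The exchange above can be sketched as a tiny status-relay loop. This is a hypothetical illustration only; `report`, `workstreams`, and `known_issues` are invented names, not part of the project, and a real implementation would source this state from provenance records rather than in-memory dicts.

```python
# Hypothetical sketch of Alice relaying in-flight workstream status.
workstreams: dict = {}  # developer -> description of current work
known_issues = {"bobs-service": "Bob's service is down; Bob is fixing it"}

def report(developer: str, task: str, touches: list) -> list:
    """Record what a developer is doing and surface relevant warnings
    about anything their task touches that Alice knows is broken."""
    workstreams[developer] = task
    return [known_issues[dep] for dep in touches if dep in known_issues]

warnings = report(
    "developer-1",
    "HTTP request to Bob's service",
    touches=["bobs-service"],
)
print(warnings)  # ["Bob's service is down; Bob is fixing it"]
```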
Alice: Artificial Life Is Coming Eventually
Given the inevitability of distributed AI, we ask the question: how do we build it with intention? What would we find if we looked at its threat model? What if it was its own threat model? How does it defend against itself and other instances of itself? Most importantly, what would a system look like that we could cohabitate with?
Alice will join us for this talk as we flesh out these questions conceptually, via our collective thoughts on what a system would look like that we could cohabitate with. We follow trains of thought to their logical conclusions when making predictions about the future. We implement and validate in order to ensure that a transparent entity with the collective's best interests at heart is ubiquitously adopted.
This talk builds on the talk Living Threat Models are Better Than Dead Threat Models, presented at AppSecPWN. We'll dive into Alice, aka the Open Architecture, a methodology for communicating intent, with translation of risk mitigations into different operational contexts.
Security Folks
Ready to bring security to the mind? https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice
Securing the software supply chain is becoming about securing the supply chain of the mind, the developer mind. We need to teach developers, and we'll be teaching them in a language they understand: code. We'll teach them by teaching Alice how to teach them; along the way we'll build Alice, who will be a developer herself one day.
Why might security folks want to be involved in the Open Architecture's definition and implementation?
Anything accessible via the Open Architecture methodology as a proxy can be used to combine external/internal work with programmatic application of context- and organizationally-aware modifications to those components as they are sourced from an SBOM. This lets us apply policy universally across static and dynamic analysis, and apply techniques such as RBAC based on programming-language-agnostic descriptions of policy, at any level of granularity, at analysis time or runtime.
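As a rough sketch of what "policy applied universally across SBOM-sourced components" could look like, here is a minimal example. The policy shape, component fields, and function names are all assumptions for illustration; the idea is one declarative, language-agnostic policy description driving both a static SBOM check and a runtime RBAC check.

```python
# Hypothetical sketch: one policy description, applied both statically
# (SBOM license audit) and at runtime (role-based access control).
policy = {
    "allowed_licenses": {"MIT", "Apache-2.0"},
    "roles": {
        "ci": {"run-analysis"},
        "developer": {"run-analysis", "deploy"},
    },
}

def component_allowed(component: dict) -> bool:
    """Static check: is this SBOM component's license permitted?"""
    return component.get("license") in policy["allowed_licenses"]

def action_allowed(role: str, action: str) -> bool:
    """Runtime check: may this role perform this action?"""
    return action in policy["roles"].get(role, set())

sbom = [
    {"name": "left-pad", "license": "MIT"},
    {"name": "mystery-lib", "license": "GPL-3.0"},
]
violations = [c["name"] for c in sbom if not component_allowed(c)]
print(violations)                       # ['mystery-lib']
print(action_allowed("ci", "deploy"))   # False
```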
Supply Chain Security
Holistic, context-aware risk analysis requires an understanding of a system's architecture, behavior, and deployment-relevant policy.
The Open Architecture effort describes software via manifests and data flows (DAGs), with additional metadata added for deployment threat modeling. Dynamic, context-aware overlays are then used to enable deployment-specific analysis, synthesis, and runtime evaluation.
Leveraging the Open Architecture methodology, we decouple the description of the system from the underlying execution environment. In the context of discussions around distributed compute, we leverage holistic risk analysis during compute contract proposal and negotiation.
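The overlay mechanism described above can be sketched as merging deployment-specific metadata into a base data-flow DAG before analysis. The dictionaries and node names below are invented for illustration and are not the project's actual dataflow format.

```python
# Hypothetical sketch: a data flow as a DAG, with a deployment-specific
# overlay merged in before analysis (here: an air-gapped deployment).
base_flow = {
    "fetch_source": {"inputs": [], "metadata": {"network": True}},
    "run_tests": {"inputs": ["fetch_source"], "metadata": {}},
}

overlay = {
    # Context-aware patch: no external network; use an internal mirror.
    "fetch_source": {"metadata": {"network": False, "mirror": "internal"}},
}

def apply_overlay(flow: dict, patches: dict) -> dict:
    """Return a copy of the flow with per-node metadata patches applied."""
    merged = {
        name: {**node, "metadata": dict(node["metadata"])}
        for name, node in flow.items()
    }
    for name, patch in patches.items():
        merged[name]["metadata"].update(patch.get("metadata", {}))
    return merged

deployed = apply_overlay(base_flow, overlay)
print(deployed["fetch_source"]["metadata"]["network"])  # False
```

Because the base flow never changes, the same system description can be re-evaluated under many deployment contexts simply by swapping overlays.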
RFCv1 Announcement
Here is the first version of Alice, aka the Open Architecture, and this pull request is a Request For Comments: https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice Please review and provide any and all technical or conceptual feedback! This is also a call for participation: if anyone would like to get involved and contribute, please comment in the linked pull request or reach out directly. Looking forward to working with you all!
Alignment
Convey
We are working on the Thought Communication Protocol and associated
analysis methodologies (Alice, Open Architecture) so as to enable
iterative alignment of your AI instances to your strategic principles.
Enabling your AI to convey your way.
One of the considerations in our new shared threat model is the way AI conveys
information to us. In the future, automating communication channels (notes ->
phone call) will be the task of AI messengers. If the messenger paints a picture
worth a thousand words, we must ensure our target audience is seeing the words
that best communicate the message we want them to get, aka, what's the point?
We also want to make sure that if we aren't able to describe the point, if we
have a miscommunication, our AI has facilities baked in to prevent that
from becoming a really bad miscommunication.
From our shared threat model perspective, we must ensure we have methodologies
and tooling baked into AI deployment infra. This way we ensure the AI does not
become misaligned with human concepts once it outgrows them. We must ensure
we can detect, prevent, and course correct from manipulation over any duration
of time from any number of agents.