Promise Theory
Promises are about:
- Formulating outcomes by destination rather than journey
- Which agents are responsible for the outcome
- How the constraints on those agents affect the ability to predict outcomes
- How access to different information affects each agent’s view on whether an outcome was achieved
Instead of thinking in terms of force or command, promises help us see the world as a set of constraints on a field of free possibilities.
A promise expresses intent about the end point, or ultimate outcome, instead of indicating what to do at the starting point.
Intention - This is the subject of some kind of possible outcome. It is something that can be interpreted to have significance in a particular context. Any agent (person, object, or machine) can harbour intentions. An intention might be something like “be red” for a light, or “win the race” for a sports person.
Promise - When an intention is publicly declared to an audience (called its scope) it then becomes a promise. Thus, a promise is a stated intention. In this book, I’ll only talk about what are called promises of the first kind, which means promises about oneself. Another way of saying this is that we make a rule: no agent may make a promise on behalf of any other.
Imposition - This is an attempt to induce cooperation in another agent (i.e., to implant an intention). It is complementary to the idea of a promise. Degrees of imposition include hints, advice, suggestions, requests, commands, and so on.
Obligation - An imposition that implies a cost or penalty for noncompliance. It is more aggressive than a mere imposition. Obligations have a formal status in state laws and regulations. There is no comparable public body of promises: promises are dynamical phenomena concurrent with autonomous action, so listing them globally and statically is not plausible. Obligations may cause promises and promises may cause obligations, but promises have a physical reality as events in space and time, whereas obligations do not. Obligations are at a different level of abstraction altogether. Promises are made on a voluntary basis; from that voluntary standpoint, the coercive concept of obligation looks almost irrational.
Assessment - A decision about whether a promise has been kept or not. Every agent makes its own assessment about promises it is aware of. Often, assessment involves the observation of other agents’ behaviors.
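These definitions can be made concrete with a minimal Python sketch. The `Promise` and `Agent` classes and all field names are my own illustration, not part of the theory:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Promise:
    """A stated intention: a promiser declares a body of intent to a scope."""
    promiser: str      # the agent making the promise (about itself only)
    body: str          # the intended outcome, e.g. "be red"
    scope: frozenset   # the audience the intention is declared to

@dataclass
class Agent:
    """Every agent assesses the promises it is aware of, independently."""
    name: str
    observations: dict = field(default_factory=dict)  # Promise -> observed outcome

    def assess(self, promise: Promise) -> bool | None:
        """This agent's own verdict: kept (True), broken (False), or unknown (None)."""
        if promise not in self.observations:
            return None   # no access to the information means no assessment
        return self.observations[promise] == promise.body

# A light promises "be red" to two observers; each assesses independently.
light = Promise(promiser="light", body="be red", scope=frozenset({"alice", "bob"}))
alice = Agent("alice", {light: "be red"})   # alice observed the light being red
bob = Agent("bob")                          # bob observed nothing
print(alice.assess(light), bob.assess(light))  # True None
```

Note how the two observers reach different conclusions from the same promise: assessment is local to each agent, not a global fact.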
If an expectation about a piece of technology or about an agent is asserted with absolute certainty, or merely with some quantified probability of being valid, the question immediately arises of how that knowledge was obtained, thereby increasing uncertainty rather than reducing it.
Such existential questions do not arise for a piece of technology that has been delivered with promises to its users; they simply react to disappointing performance, perhaps losing trust in the promiser. Future promises from that same source would then be viewed with less credibility. Conversely, if a piece of equipment outperforms the promised performance, that fact may lead to increased trust in the original promiser.
- (+) promise (to give) could be: “I promise you a rose garden,” or “I promise email service on port 25.”
- (-) promise (to use/receive) could be: “I accept your offer of marriage,” or “I accept your promise of data, and add you to my access control list.”
- (+) imposition (to give) could be: “You’d better give me your lunch money!” or “You’d better give me the address of this DNS domain!”
- (-) imposition (to use/receive) could be: “Catch this!” or “Process this transaction!”
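The four flavors above suggest a simple pairing rule: a (+) promise only results in actual delivery when the receiving agent matches it with a (-) promise. A small Python sketch, with illustrative class and function names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DirectedPromise:
    promiser: str   # agent making the promise
    promisee: str   # agent the promise is directed at
    body: str       # what is offered (+) or accepted (-)
    sign: str       # "+" to give, "-" to use/receive

def binds(give: DirectedPromise, use: DirectedPromise) -> bool:
    """A service is delivered only when an offer meets a matching acceptance."""
    return (give.sign == "+" and use.sign == "-"
            and give.promiser == use.promisee
            and give.promisee == use.promiser
            and give.body == use.body)

offer = DirectedPromise("server", "client", "email service on port 25", "+")
accept = DirectedPromise("client", "server", "email service on port 25", "-")
print(binds(offer, accept))  # True: offer and acceptance form a binding
```

Without the (-) half, the (+) promise is just an unused offer; nothing obliges the other agent to consume it.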
Deadlocks occur when agents make conditions on each other so that no actual promise is given without one of the two relaxing its constraints on the other:
- Agent 1 promises X if Agent 2 keeps promise Y.
- Agent 2 promises Y if Agent 1 keeps promise X.
This pair of promises represents a standoff or deadlock. One of the two agents has to go first. Someone has to break the symmetry to get things moving.
The matter of deadlocks shows us that conditions are a severe blocker to cooperative behavior. Conditions are usually introduced into promises because an agent does not trust another agent to keep a promise it relies on.
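Because conditional promises form a dependency graph, the standoff above is simply a cycle in that graph. A minimal sketch of detecting it (the representation is my own assumption, not part of the theory):

```python
def find_deadlock(conditions: dict[str, set[str]]) -> bool:
    """conditions maps a promise to the promises it waits on.
    Returns True if the conditions contain a cycle (a standoff)."""
    visiting, done = set(), set()

    def has_cycle(p: str) -> bool:
        if p in done:
            return False
        if p in visiting:
            return True          # we came back around: deadlock
        visiting.add(p)
        if any(has_cycle(dep) for dep in conditions.get(p, ())):
            return True
        visiting.discard(p)
        done.add(p)
        return False

    return any(has_cycle(p) for p in conditions)

# Agent 1 promises X only if Y is kept; Agent 2 promises Y only if X is kept.
print(find_deadlock({"X": {"Y"}, "Y": {"X"}}))   # True: standoff
print(find_deadlock({"X": set(), "Y": {"X"}}))   # False: Agent 1 goes first
```

Breaking the symmetry means removing one condition, i.e., one agent promising unconditionally.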
Some promises cannot be made at the same time. For instance, a door cannot promise to be both open and closed.
- Conflicts of giving (+) - An agent can promise X or Y, but not both, while two other agents can promise to accept without conflict; for example, an agent promises to lend the same book to two different agents at the same time.
- Conflicts of usage (-) - A single agent promises to use incompatible promises X and Y from two different givers; for example, the agent promises to accept simultaneous flights to New York and Paris. The flights can both be offered, but cannot be simultaneously accepted.
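One way to picture such conflicts in code is an explicit table of mutually exclusive promise bodies. The table and names below are illustrative, not from the theory:

```python
# Pairs of promise bodies that cannot be kept at the same time.
EXCLUSIVE = {frozenset({"door open", "door closed"}),
             frozenset({"lend book to X", "lend book to Y"})}

def conflicts(promised_bodies: set[str]) -> bool:
    """True if any pair of promised outcomes is mutually exclusive."""
    return any(pair <= promised_bodies for pair in EXCLUSIVE)

print(conflicts({"door open", "door closed"}))  # True: cannot promise both
print(conflicts({"door open"}))                 # False
```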
- Promised collaboration must be constructed from the bottom up.
- Agents are autonomous. They can only make promises about their own behavior. No other agent can impose a promise upon them.
- Making a promise involves passing information to an observer.
- Promises apply to you (self) - the agent making them. By the definition of autonomy, that self is what every agent is guaranteed to have control over.
- Impositions or commands apply to others (non-self). That, by definition, is what you don't control.
- Promises represent ongoing, persistent states, where commands cannot.
- Promises describe continuity.
- Agents only keep promises about their own behavior.
- Agents make promises and are responsible for keeping them.
Each autonomous agent has its own independent view, which means agents form expectations independently, too. This is how we use promises in tandem with trust. Every possible observer with access to part of the information can individually make an assessment and, given their different circumstances, might arrive at different conclusions.
- In a world without trust, promises would be completely ineffective.
“Don’t tell me what you are doing, tell me what you are trying to achieve!”
- What you are actually doing might not be at all related to what you are trying to achieve.
- If there is no central decision point, each agent has many links to equilibrate. The equilibration time might be longer with this system, but the result, once achieved, will be more robust. There is no single point that might fail.
- With a central decision point the equilibration is faster, as there is a single point of consistency, and also of failure. It might be natural to choose that point as the leader, though there is nothing in theory that makes this necessary. This is a kind of leadership.
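A back-of-the-envelope Python sketch makes the trade-off concrete by counting the links each topology must equilibrate:

```python
def peer_to_peer_links(n: int) -> int:
    """Full mesh: every agent equilibrates with every other agent."""
    return n * (n - 1) // 2

def centralized_links(n: int) -> int:
    """Hub: every agent equilibrates with one central point only."""
    return n - 1

for n in (5, 50):
    print(n, peer_to_peer_links(n), centralized_links(n))
# 5 agents: 10 vs 4 links; 50 agents: 1225 vs 49. The mesh settles more
# slowly but has no single point of failure; the hub is fast but fragile.
```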
The conclusion that a consumer cannot buy availability should now be obvious from promise principles. Agents can only make promises about themselves. No agent can promise 100 percent availability (that would simply be a lie). So it is up to the consumer to arrange for 100 percent access by using multiple providers. The redundancy is thus a result of usage, dependent on the promises being kept.
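A minimal sketch of what arranging availability on the consumer side could look like; the provider functions and the failover helper are hypothetical:

```python
def fetch_with_failover(providers, request):
    """Try each provider in turn; the consumer, not any single provider,
    assembles the availability it needs out of imperfect promises."""
    for provider in providers:
        try:
            return provider(request)
        except ConnectionError:
            continue             # this provider broke its promise; try the next
    raise ConnectionError("all providers failed")

def flaky(request):
    raise ConnectionError("provider down")

def healthy(request):
    return f"response to {request!r}"

print(fetch_with_failover([flaky, healthy], "GET /data"))
```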
Consistency of knowledge is a strong concept. An agent does not know the data unless it is either the source of the knowledge, or it has promised to accept and assimilate the knowledge from the source.
Consistency of promises is a matter that can be verified at the level of sources only. Promises made by different agents cannot be inconsistent.
Systems can promise things that individuals can’t.
Continuity - The observed constancy of promise-keeping, so that any agent using a promise would assess it to be kept at any time.
Stability - The property that any small perturbations to the system (from dependencies or usage) will not cause its promises to break down catastrophically.
Resilience (opposite of fragility) - Like stability, the property that usage will not significantly affect the promises made by an agent.
Redundancy - The duplication of resources in such a way that there are no common dependencies between duplicates (i.e., so that the failure of one does not imply the failure of another).
Learning (sometimes called anti-fragility) - The property of promising to alter any promise (in detail or in number) based on information observed about an agent’s environment.
Adaptability - The property of being reusable in different scenarios.
Plasticity - The property of being able to change in response to outside pressure without breaking the system’s promises.
Elasticity - The ability to change in response to outside pressure and then return to the original condition without breaking the system’s promises.
Scalability - The property that the outcomes promised to any agent do not depend on the total number of agents in the system.
Integrity (rigidity) - The property of being unaffected by external pressures.
Security - The promise that all risks to the system have been analyzed and approved as a matter of policy.
Ease of use - The promise that a user will not have to expend much effort or cost to use the service provided.
A component cannot be intrinsically reusable, but it can be reusable relative to some other components within its field of use. Then we can say that a component is reusable if its promised properties meet or exceed the use-promises (requirements) of every environment in which it is needed.
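That criterion translates almost directly into code; a sketch with illustrative set names:

```python
def reusable(promised: set[str], environments: list[set[str]]) -> bool:
    """True if the promised properties meet or exceed the requirements
    (use-promises) of every environment in which the component is needed."""
    return all(required <= promised for required in environments)

component = {"ipv4", "tls", "logging"}
print(reusable(component, [{"ipv4"}, {"ipv4", "tls"}]))  # True
print(reusable(component, [{"ipv4"}, {"ipv6"}]))         # False: ipv6 not promised
```

Reusability here is a relation between the component and a given field of use, never a property of the component alone.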
- Bergstra, Burgess: Promise Theory
- Burgess: Thinking in Promises